diff --git a/.changelog/9110.txt b/.changelog/9110.txt
new file mode 100644
index 00000000000..0f083bcf19d
--- /dev/null
+++ b/.changelog/9110.txt
@@ -0,0 +1,171 @@
+```release-note:breaking-change
+bigquery: added more input validations for BigQuery table schema
+```
+```release-note:breaking-change
+firebase: changed `deletion_policy` default to `DELETE` for `google_firebase_web_app`.
+```
+```release-note:breaking-change
+cloudrunv2: removed deprecated fields `startup_probe` and `liveness_probe` from `google_cloud_run_v2_job` resource.
+```
+```release-note:breaking-change
+cloudrunv2: removed deprecated field `liveness_probe.tcp_socket` from `google_cloud_run_v2_service` resource.
+```
+```release-note:bug
+bigquery: fixed view and materialized view creation when schema is specified
+```
+```release-note:bug
+compute: used the API's default value for field `enable_endpoint_independent_mapping` in resource `google_compute_router_nat`
+```
+```release-note:breaking-change
+dataplex: removed `data_profile_result` and `data_quality_result` from `google_dataplex_scan`
+```
+```release-note:breaking-change
+bigquery: made `routine_type` required for `google_bigquery_routine`
+```
+```release-note:bug
+compute: added a default value for `metric.filter` in the resource `google_compute_autoscaler` (beta)
+```
+```release-note:deprecation
+privateca: removed deprecated fields `configValues` and `pemCertificates`
+```
+```release-note:breaking-change
+gameservices: removed Terraform support for `gameservices`
+```
+```release-note:bug
+sql: fixed diffs when re-ordering existing `database_flags`
+```
+```release-note:breaking-change
+logging: made `growth_factor`, `num_finite_buckets`, and `scale` required for `google_logging_metric`
+```
+```release-note:breaking-change
+compute: removed default value for `rule.rate_limit_options.enforce_on_key` on resource `google_compute_security_policy`
+```
+```release-note:note
+provider: some provider default values are now shown at plan-time
+```
+```release-note:deprecation
+cloudiot: deprecated resource `google_cloudiot_registry`
+```
+```release-note:deprecation
+cloudiot: deprecated resource `google_cloudiot_device`
+```
+```release-note:deprecation
+cloudiot: deprecated resource `google_cloudiot_registry_iam_*`
+```
+```release-note:deprecation
+cloudiot: deprecated datasource `google_cloudiot_registry_iam_policy`
+```
+```release-note:enhancement
+provider: added provider-level default labels
+```
+```release-note:breaking-change
+logging: changed the default value of `unique_writer_identity` from `false` to `true` in `google_logging_project_sink`.
+```
+```release-note:breaking-change
+accesscontextmanager: changed multiple array fields to sets where appropriate to prevent duplicates and fix diffs caused by server-side reordering.
+```
+```release-note:breaking-change
+servicenetworking: used Create instead of Patch to create `google_service_networking_connection`
+```
+```release-note:breaking-change
+firebase: removed `google_firebase_project_location`
+```
+```release-note:breaking-change
+provider: data sources now return errors on 404s when applicable instead of silently failing
+```
+```release-note:breaking-change
+cloudfunctions2: made `location` required on `google_cloudfunctions2_function`
+```
+```release-note:breaking-change
+cloudrunv2: transitioned `volumes.cloud_sql_instance.instances` to SET from ARRAY for `google_cloud_run_v2_service`
+```
+```release-note:breaking-change
+secretmanager: removed `automatic` field in `google_secret_manager_secret` resource
+```
+```release-note:breaking-change
+container: removed `enable_binary_authorization` in `google_container_cluster`
+```
+```release-note:breaking-change
+container: `google_container_cluster` no longer deletes the cluster when it is created in an error state. Instead, it marks the cluster as tainted, allowing manual inspection and intervention. To proceed with deletion, run another `terraform apply`.
+```
+```release-note:bug
+compute: removed the default value for field `reconcile_connections` in resource `google_compute_service_attachment`; the field now defaults to a value returned by the API when not set in configuration
+```
+```release-note:breaking-change
+container: removed the default value for `network_policy.provider` in `google_container_cluster`
+```
+```release-note:breaking-change
+container: removed the default for `logging_variant` in `google_container_node_pool`
+```
+```release-note:breaking-change
+container: changed `management.auto_repair` and `management.auto_upgrade` defaults to `true` in `google_container_node_pool`
+```
+```release-note:breaking-change
+servicenetworking: used the `deleteConnection` method to delete the resource `google_service_networking_connection`
+```
+```release-note:bug
+provider: fixed a bug where the labels/annotations fields did not exist in GA for some resources
+```
+```release-note:breaking-change
+provider: empty strings in the provider configuration block are no longer ignored when configuring the provider
+```
+```release-note:breaking-change
+looker: removed `LOOKER_MODELER` as a possible value in `google_looker_instance.platform_edition`
+```
+```release-note:breaking-change
+container: reworked the `taint` field in `google_container_cluster` and `google_container_node_pool` to only manage a subset of taint keys based on those already in state. Most existing resources are unaffected, unless they use `sandbox_config` - see the upgrade guide for details.
+```
+```release-note:enhancement
+container: added the `effective_taints` attribute to `google_container_cluster` and `google_container_node_pool`, outputting all known taint values
+```
+```release-note:bug
+dataflow: fixed a permadiff when SdkPipeline values are supplied via parameters.
+```
+```release-note:bug
+dataflow: fixed the `max_workers` read value permanently displaying as 0.
+```
+```release-note:bug
+dataflow: fixed an issue causing an error message when `max_workers` and `num_workers` were supplied via parameters.
+```
+```release-note:breaking-change
+provider: added provider-level validation so these fields are not set as empty strings in a user's config: `credentials`, `access_token`, `impersonate_service_account`, `project`, `billing_project`, `region`, `zone`
+```
+```release-note:breaking-change
+provider: fixed many import functions throughout the provider that matched a subset of the provided input when possible. Now, the GCP resource ID supplied to `terraform import` must match exactly.
+```
+```release-note:breaking-change
+compute: retyped `consumer_accept_lists` to a SET from an ARRAY type for `google_compute_service_attachment`
+```
+```release-note:breaking-change
+monitoring: made `labels` immutable in `google_monitoring_metric_descriptor`
+```
+```release-note:bug
+monitoring: fixed an issue where `metadata` could not be updated in `google_monitoring_metric_descriptor`
+```
+```release-note:breaking-change
+firebase: made `google_firebase_rules.release` immutable
+```
+```release-note:enhancement
+containeraws: added `binary_authorization` to `google_container_aws_cluster`
+```
+```release-note:enhancement
+containeraws: added `update_settings` to `google_container_aws_node_pool`
+```
+```release-note:breaking-change
+compute: `size` in `google_compute_node_group` is now an output-only field.
+```
+```release-note:enhancement
+compute: made `google_compute_node_group` mutable
+```
+```release-note:note
+compute: `google_compute_node_group` now requires one of `initial_size` or `autoscaling_policy` to be configured on resource creation
+```
+```release-note:enhancement
+baremetal: made delete a no-op for the resource `google_bare_metal_admin_cluster` to better align with actual behavior
+```
+```release-note:breaking-change
+container: `google_container_cluster` now has `deletion_protection` set to `true` by default. When enabled, this field prevents Terraform from deleting the resource.
+``` +```release-note:breaking-change +monitoring: fixed perma-diffs in `google_monitoring_dashboard.dashboard_json` by suppressing values returned by the API that are not in configuration +``` diff --git a/.teamcity/components/generated/services.kt b/.teamcity/components/generated/services.kt index 03b7dc079b3..db33ded23cd 100644 --- a/.teamcity/components/generated/services.kt +++ b/.teamcity/components/generated/services.kt @@ -156,11 +156,6 @@ var services = mapOf( "displayName" to "Cloudids", "path" to "./google/services/cloudids" ), - "cloudiot" to mapOf( - "name" to "cloudiot", - "displayName" to "Cloudiot", - "path" to "./google/services/cloudiot" - ), "cloudrun" to mapOf( "name" to "cloudrun", "displayName" to "Cloudrun", @@ -361,11 +356,6 @@ var services = mapOf( "displayName" to "Firestore", "path" to "./google/services/firestore" ), - "gameservices" to mapOf( - "name" to "gameservices", - "displayName" to "Gameservices", - "path" to "./google/services/gameservices" - ), "gkebackup" to mapOf( "name" to "gkebackup", "displayName" to "Gkebackup", diff --git a/go.mod b/go.mod index 2b06bd9d9a7..e7477b4a5f6 100644 --- a/go.mod +++ b/go.mod @@ -3,7 +3,7 @@ go 1.19 require ( cloud.google.com/go/bigtable v1.19.0 - github.com/GoogleCloudPlatform/declarative-resource-client-library v1.51.0 + github.com/GoogleCloudPlatform/declarative-resource-client-library v1.52.0 github.com/apparentlymart/go-cidr v1.1.0 github.com/davecgh/go-spew v1.1.1 github.com/dnaeon/go-vcr v1.0.1 diff --git a/go.sum b/go.sum index dab59d4c04e..69175dcd310 100644 --- a/go.sum +++ b/go.sum @@ -17,6 +17,8 @@ cloud.google.com/go/longrunning v0.5.1/go.mod h1:spvimkwdz6SPWKEt/XBij79E9fiTkHS github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/GoogleCloudPlatform/declarative-resource-client-library v1.51.0 h1:YhWTPhOf6gVpA9mSfnLOuL8Y6j8W5pzmHE7flXjTke4= github.com/GoogleCloudPlatform/declarative-resource-client-library v1.51.0/go.mod h1:pL2Qt5HT+x6xrTd806oMiM3awW6kNIXB/iiuClz6m6k= +github.com/GoogleCloudPlatform/declarative-resource-client-library v1.52.0 h1:KswxXF4E5iWv2ggktqv265zOvwmXA3mgma3UQfYA4tU= +github.com/GoogleCloudPlatform/declarative-resource-client-library v1.52.0/go.mod h1:pL2Qt5HT+x6xrTd806oMiM3awW6kNIXB/iiuClz6m6k= github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= github.com/Microsoft/go-winio v0.4.16 h1:FtSW/jqD+l4ba5iPBj9CODVtgfYAD8w2wS923g/cFDk= github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0= @@ -428,5 +430,3 @@ gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= rsc.io/binaryregexp v0.2.0 h1:HfqmD5MEmC0zvwBuF187nq9mdnXjXsSivRiXN7SmRkE= -github.com/GoogleCloudPlatform/declarative-resource-client-library v1.51.0 h1:YhWTPhOf6gVpA9mSfnLOuL8Y6j8W5pzmHE7flXjTke4= -github.com/GoogleCloudPlatform/declarative-resource-client-library v1.51.0/go.mod h1:pL2Qt5HT+x6xrTd806oMiM3awW6kNIXB/iiuClz6m6k= diff --git a/google/acctest/bootstrap_test_utils.go b/google/acctest/bootstrap_test_utils.go index e3d7e763e25..e5b375e379f 100644 --- a/google/acctest/bootstrap_test_utils.go +++ b/google/acctest/bootstrap_test_utils.go @@ -15,6 +15,7 @@ import ( tpgcompute "github.com/hashicorp/terraform-provider-google/google/services/compute" 
"github.com/hashicorp/terraform-provider-google/google/services/privateca" "github.com/hashicorp/terraform-provider-google/google/services/resourcemanager" + tpgservicenetworking "github.com/hashicorp/terraform-provider-google/google/services/servicenetworking" "github.com/hashicorp/terraform-provider-google/google/services/sql" "github.com/hashicorp/terraform-provider-google/google/tpgiamresource" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -25,6 +26,7 @@ import ( cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1" iam "google.golang.org/api/iam/v1" "google.golang.org/api/iamcredentials/v1" + "google.golang.org/api/servicenetworking/v1" "google.golang.org/api/serviceusage/v1" sqladmin "google.golang.org/api/sqladmin/v1beta4" ) @@ -356,6 +358,135 @@ func BootstrapSharedTestNetwork(t *testing.T, testId string) string { return network.Name } +const SharedTestGlobalAddressPrefix = "tf-bootstrap-addr-" + +func BootstrapSharedTestGlobalAddress(t *testing.T, testId, networkId string) string { + project := envvar.GetTestProjectFromEnv() + addressName := SharedTestGlobalAddressPrefix + testId + + config := BootstrapConfig(t) + if config == nil { + return "" + } + + log.Printf("[DEBUG] Getting shared test global address %q", addressName) + _, err := config.NewComputeClient(config.UserAgent).GlobalAddresses.Get(project, addressName).Do() + if err != nil && transport_tpg.IsGoogleApiErrorWithCode(err, 404) { + log.Printf("[DEBUG] Global address %q not found, bootstrapping", addressName) + url := fmt.Sprintf("%sprojects/%s/global/addresses", config.ComputeBasePath, project) + netObj := map[string]interface{}{ + "name": addressName, + "address_type": "INTERNAL", + "purpose": "VPC_PEERING", + "prefix_length": 16, + "network": networkId, + } + + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "POST", + Project: project, + RawURL: url, + UserAgent: config.UserAgent, + Body: netObj, + Timeout: 4 * time.Minute, + }) + if err != nil { + t.Fatalf("Error bootstrapping shared test global address %q: %s", addressName, err) + } + + log.Printf("[DEBUG] Waiting for global address creation to finish") + err = tpgcompute.ComputeOperationWaitTime(config, res, project, "Error bootstrapping shared test global address", config.UserAgent, 4*time.Minute) + if err != nil { + t.Fatalf("Error bootstrapping shared test global address %q: %s", addressName, err) + } + } + + address, err := config.NewComputeClient(config.UserAgent).GlobalAddresses.Get(project, addressName).Do() + if err != nil { + t.Errorf("Error getting shared test global address %q: %s", addressName, err) + } + if address == nil { + t.Fatalf("Error getting shared test global address %q: is nil", addressName) + } + return address.Name +} + +// BootstrapSharedServiceNetworkingConnection will create a shared network +// if it hasn't been created in the test project, a global address +// if it hasn't been created in the test project, and a service networking connection +// if it hasn't been created in the test project. +// +// BootstrapSharedServiceNetworkingConnection returns a persistent compute network name +// for a test or set of tests. +// +// To delete a service networking conneciton, all of the service instances that use that connection +// must be deleted first. After the service instances are deleted, some service producers delay the deletion +// utnil a waiting period has passed. 
For example, after four days that you delete a SQL instance, +// the service networking connection can be deleted. +// That is the reason to use the shared service networking connection for thest resources. +// https://cloud.google.com/vpc/docs/configure-private-services-access#removing-connection +// +// testId specifies the test for which a shared network and a gobal address are used/initialized. +func BootstrapSharedServiceNetworkingConnection(t *testing.T, testId string) string { + parentService := "services/servicenetworking.googleapis.com" + project := envvar.GetTestProjectFromEnv() + projectNumber := envvar.GetTestProjectNumberFromEnv() + + config := BootstrapConfig(t) + if config == nil { + return "" + } + + networkName := BootstrapSharedTestNetwork(t, testId) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + globalAddressName := BootstrapSharedTestGlobalAddress(t, testId, networkId) + + readCall := config.NewServiceNetworkingClient(config.UserAgent).Services.Connections.List(parentService).Network(networkId) + if config.UserProjectOverride { + readCall.Header().Add("X-Goog-User-Project", project) + } + response, err := readCall.Do() + if err != nil { + t.Errorf("Error getting shared test service networking connection: %s", err) + } + + var connection *servicenetworking.Connection + for _, c := range response.Connections { + if c.Network == networkId { + connection = c + break + } + } + + if connection == nil { + log.Printf("[DEBUG] Service networking connection not found, bootstrapping") + + connection := &servicenetworking.Connection{ + Network: networkId, + ReservedPeeringRanges: []string{globalAddressName}, + } + + createCall := config.NewServiceNetworkingClient(config.UserAgent).Services.Connections.Create(parentService, connection) + if config.UserProjectOverride { + createCall.Header().Add("X-Goog-User-Project", project) + } + op, err := createCall.Do() + if err != nil { + t.Fatalf("Error bootstrapping shared test service networking connection: %s", err) + } + + log.Printf("[DEBUG] Waiting for service networking connection creation to finish") + if err := tpgservicenetworking.ServiceNetworkingOperationWaitTime(config, op, "Create Service Networking Connection", config.UserAgent, project, 4*time.Minute); err != nil { + t.Fatalf("Error bootstrapping shared test service networking connection: %s", err) + } + } + + log.Printf("[DEBUG] Getting shared test service networking connection") + + return networkName +} + var SharedServicePerimeterProjectPrefix = "tf-bootstrap-sp-" func BootstrapServicePerimeterProjects(t *testing.T, desiredProjects int) []*cloudresourcemanager.Project { diff --git a/google/acctest/test_utils.go b/google/acctest/test_utils.go index c6055045aa0..cac1ed64fc4 100644 --- a/google/acctest/test_utils.go +++ b/google/acctest/test_utils.go @@ -50,6 +50,12 @@ func CheckDataSourceStateMatchesResourceStateWithIgnores(dataSourceName, resourc if _, ok := ignoreFields[k]; ok { continue } + if _, ok := ignoreFields["labels.%"]; ok && strings.HasPrefix(k, "labels.") { + continue + } + if _, ok := ignoreFields["terraform_labels.%"]; ok && strings.HasPrefix(k, "terraform_labels.") { + continue + } if k == "%" { continue } diff --git a/google/fwmodels/provider_model.go b/google/fwmodels/provider_model.go index 2e79c52e6a2..8a4139c4c05 100644 --- a/google/fwmodels/provider_model.go +++ b/google/fwmodels/provider_model.go @@ -22,6 +22,7 @@ type ProviderModel struct { UserProjectOverride types.Bool `tfsdk:"user_project_override"` 
RequestTimeout types.String `tfsdk:"request_timeout"` RequestReason types.String `tfsdk:"request_reason"` + DefaultLabels types.Map `tfsdk:"default_labels"` // Generated Products AccessApprovalCustomEndpoint types.String `tfsdk:"access_approval_custom_endpoint"` @@ -50,7 +51,6 @@ type ProviderModel struct { Cloudfunctions2CustomEndpoint types.String `tfsdk:"cloudfunctions2_custom_endpoint"` CloudIdentityCustomEndpoint types.String `tfsdk:"cloud_identity_custom_endpoint"` CloudIdsCustomEndpoint types.String `tfsdk:"cloud_ids_custom_endpoint"` - CloudIotCustomEndpoint types.String `tfsdk:"cloud_iot_custom_endpoint"` CloudRunCustomEndpoint types.String `tfsdk:"cloud_run_custom_endpoint"` CloudRunV2CustomEndpoint types.String `tfsdk:"cloud_run_v2_custom_endpoint"` CloudSchedulerCustomEndpoint types.String `tfsdk:"cloud_scheduler_custom_endpoint"` @@ -79,7 +79,6 @@ type ProviderModel struct { EssentialContactsCustomEndpoint types.String `tfsdk:"essential_contacts_custom_endpoint"` FilestoreCustomEndpoint types.String `tfsdk:"filestore_custom_endpoint"` FirestoreCustomEndpoint types.String `tfsdk:"firestore_custom_endpoint"` - GameServicesCustomEndpoint types.String `tfsdk:"game_services_custom_endpoint"` GKEBackupCustomEndpoint types.String `tfsdk:"gke_backup_custom_endpoint"` GKEHubCustomEndpoint types.String `tfsdk:"gke_hub_custom_endpoint"` GKEHub2CustomEndpoint types.String `tfsdk:"gke_hub2_custom_endpoint"` diff --git a/google/fwprovider/framework_provider.go b/google/fwprovider/framework_provider.go index 70680cc422c..f1072a5e7d5 100644 --- a/google/fwprovider/framework_provider.go +++ b/google/fwprovider/framework_provider.go @@ -69,6 +69,7 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, path.MatchRoot("access_token"), }...), CredentialsValidator(), + NonEmptyStringValidator(), }, }, "access_token": schema.StringAttribute{ @@ -77,10 +78,14 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, stringvalidator.ConflictsWith(path.Expressions{ path.MatchRoot("credentials"), }...), + NonEmptyStringValidator(), }, }, "impersonate_service_account": schema.StringAttribute{ Optional: true, + Validators: []validator.String{ + NonEmptyStringValidator(), + }, }, "impersonate_service_account_delegates": schema.ListAttribute{ Optional: true, @@ -88,15 +93,27 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, }, "project": schema.StringAttribute{ Optional: true, + Validators: []validator.String{ + NonEmptyStringValidator(), + }, }, "billing_project": schema.StringAttribute{ Optional: true, + Validators: []validator.String{ + NonEmptyStringValidator(), + }, }, "region": schema.StringAttribute{ Optional: true, + Validators: []validator.String{ + NonEmptyStringValidator(), + }, }, "zone": schema.StringAttribute{ Optional: true, + Validators: []validator.String{ + NonEmptyStringValidator(), + }, }, "scopes": schema.ListAttribute{ Optional: true, @@ -111,6 +128,10 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, "request_reason": schema.StringAttribute{ Optional: true, }, + "default_labels": schema.MapAttribute{ + Optional: true, + ElementType: types.StringType, + }, // Generated Products "access_approval_custom_endpoint": &schema.StringAttribute{ @@ -269,12 +290,6 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, transport_tpg.CustomEndpointValidator(), }, }, - "cloud_iot_custom_endpoint": &schema.StringAttribute{ - Optional: true, - Validators: 
[]validator.String{ - transport_tpg.CustomEndpointValidator(), - }, - }, "cloud_run_custom_endpoint": &schema.StringAttribute{ Optional: true, Validators: []validator.String{ @@ -443,12 +458,6 @@ func (p *FrameworkProvider) Schema(_ context.Context, _ provider.SchemaRequest, transport_tpg.CustomEndpointValidator(), }, }, - "game_services_custom_endpoint": &schema.StringAttribute{ - Optional: true, - Validators: []validator.String{ - transport_tpg.CustomEndpointValidator(), - }, - }, "gke_backup_custom_endpoint": &schema.StringAttribute{ Optional: true, Validators: []validator.String{ diff --git a/google/fwprovider/framework_provider_internal_test.go b/google/fwprovider/framework_provider_internal_test.go index a7ae67ede7b..27e5950e3d9 100644 --- a/google/fwprovider/framework_provider_internal_test.go +++ b/google/fwprovider/framework_provider_internal_test.go @@ -46,10 +46,11 @@ func TestFrameworkProvider_CredentialsValidator(t *testing.T) { return types.StringValue(stringContents) }, }, - "configuring credentials as an empty string is valid": { + "configuring credentials as an empty string is not valid": { ConfigValue: func(t *testing.T) types.String { return types.StringValue("") }, + ExpectedErrorCount: 1, }, "leaving credentials unconfigured is valid": { ConfigValue: func(t *testing.T) types.String { diff --git a/google/fwprovider/framework_validators.go b/google/fwprovider/framework_validators.go index 496668326af..02afbd3233f 100644 --- a/google/fwprovider/framework_validators.go +++ b/google/fwprovider/framework_validators.go @@ -9,7 +9,6 @@ import ( "time" "github.com/hashicorp/terraform-plugin-framework/schema/validator" - "github.com/hashicorp/terraform-plugin-framework/types" googleoauth "golang.org/x/oauth2/google" ) @@ -33,7 +32,7 @@ func (v credentialsValidator) MarkdownDescription(ctx context.Context) string { // ValidateString performs the validation. func (v credentialsValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { - if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() || request.ConfigValue.Equal(types.StringValue("")) { + if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { return } @@ -87,3 +86,34 @@ func (v nonnegativedurationValidator) ValidateString(ctx context.Context, reques func NonNegativeDurationValidator() validator.String { return nonnegativedurationValidator{} } + +// Non Empty String Validator +type nonEmptyStringValidator struct { +} + +// Description describes the validation in plain text formatting. +func (v nonEmptyStringValidator) Description(_ context.Context) string { + return "value expected to be a string that isn't an empty string" +} + +// MarkdownDescription describes the validation in Markdown formatting. +func (v nonEmptyStringValidator) MarkdownDescription(ctx context.Context) string { + return v.Description(ctx) +} + +// ValidateString performs the validation. 
+func (v nonEmptyStringValidator) ValidateString(ctx context.Context, request validator.StringRequest, response *validator.StringResponse) { + if request.ConfigValue.IsNull() || request.ConfigValue.IsUnknown() { + return + } + + value := request.ConfigValue.ValueString() + + if value == "" { + response.Diagnostics.AddError("expected a non-empty string", fmt.Sprintf("%s was set to `%s`", request.Path, value)) + } +} + +func NonEmptyStringValidator() validator.String { + return nonEmptyStringValidator{} +} diff --git a/google/fwresource/framework_location.go b/google/fwresource/framework_location.go index 44ed5803dec..893511a9df6 100644 --- a/google/fwresource/framework_location.go +++ b/google/fwresource/framework_location.go @@ -34,12 +34,14 @@ type LocationDescription struct { func (ld *LocationDescription) GetLocation() (types.String, error) { // Location from resource config if !ld.ResourceLocation.IsNull() && !ld.ResourceLocation.IsUnknown() && !ld.ResourceLocation.Equal(types.StringValue("")) { - return ld.ResourceLocation, nil + location := tpgresource.GetResourceNameFromSelfLink(ld.ResourceLocation.ValueString()) // Location could be a self link + return types.StringValue(location), nil } // Location from region in resource config if !ld.ResourceRegion.IsNull() && !ld.ResourceRegion.IsUnknown() && !ld.ResourceRegion.Equal(types.StringValue("")) { - return ld.ResourceRegion, nil + region := tpgresource.GetResourceNameFromSelfLink(ld.ResourceRegion.ValueString()) // Region could be a self link + return types.StringValue(region), nil } // Location from zone in resource config @@ -48,9 +50,16 @@ func (ld *LocationDescription) GetLocation() (types.String, error) { return types.StringValue(location), nil } + // Location from region in provider config + if !ld.ProviderRegion.IsNull() && !ld.ProviderRegion.IsUnknown() && !ld.ProviderRegion.Equal(types.StringValue("")) { + location := tpgresource.GetResourceNameFromSelfLink(ld.ProviderRegion.ValueString()) // Region could be a self link + return types.StringValue(location), nil + } + // Location from zone in provider config if !ld.ProviderZone.IsNull() && !ld.ProviderZone.IsUnknown() && !ld.ProviderZone.Equal(types.StringValue("")) { - return ld.ProviderZone, nil + location := tpgresource.GetResourceNameFromSelfLink(ld.ProviderZone.ValueString()) // Zone could be a self link + return types.StringValue(location), nil } var err error @@ -73,17 +82,18 @@ func (ld *LocationDescription) GetRegion() (types.String, error) { } // Region from zone in resource config if !ld.ResourceZone.IsNull() && !ld.ResourceZone.IsUnknown() && !ld.ResourceZone.Equal(types.StringValue("")) { - region := tpgresource.GetRegionFromZone(ld.ResourceZone.ValueString()) - return types.StringValue(region), nil + region := tpgresource.GetResourceNameFromSelfLink(ld.ResourceZone.ValueString()) // Region could be a self link + return types.StringValue(tpgresource.GetRegionFromZone(region)), nil } // Region from provider config if !ld.ProviderRegion.IsNull() && !ld.ProviderRegion.IsUnknown() && !ld.ProviderRegion.Equal(types.StringValue("")) { - return ld.ProviderRegion, nil + region := tpgresource.GetResourceNameFromSelfLink(ld.ProviderRegion.ValueString()) // Region could be a self link + return types.StringValue(region), nil } // Region from zone in provider config if !ld.ProviderZone.IsNull() && !ld.ProviderZone.IsUnknown() && !ld.ProviderZone.Equal(types.StringValue("")) { - region := tpgresource.GetRegionFromZone(ld.ProviderZone.ValueString()) - return 
types.StringValue(region), nil + region := tpgresource.GetResourceNameFromSelfLink(ld.ProviderZone.ValueString()) // Region could be a self link + return types.StringValue(tpgresource.GetRegionFromZone(region)), nil } var err error @@ -105,7 +115,9 @@ func (ld *LocationDescription) GetZone() (types.String, error) { return types.StringValue(zone), nil } if !ld.ProviderZone.IsNull() && !ld.ProviderZone.IsUnknown() && !ld.ProviderZone.Equal(types.StringValue("")) { - return ld.ProviderZone, nil + // Zone could be a self link + zone := tpgresource.GetResourceNameFromSelfLink(ld.ProviderZone.ValueString()) + return types.StringValue(zone), nil } var err error diff --git a/google/fwresource/framework_location_test.go b/google/fwresource/framework_location_test.go index de8f846dcc1..a790bcdd975 100644 --- a/google/fwresource/framework_location_test.go +++ b/google/fwresource/framework_location_test.go @@ -128,11 +128,23 @@ func TestLocationDescription_GetRegion(t *testing.T) { }, ExpectedRegion: types.StringValue("provider-zone"), // is truncated }, - "does not shorten region values when derived from a zone self link set in the resource config": { + "shortens region values when derived from a zone self link set in the resource config": { ld: LocationDescription{ ResourceZone: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a"), }, - ExpectedRegion: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1"), // Value isn't shortened from URI to name + ExpectedRegion: types.StringValue("us-central1"), + }, + "shortens region values set as self links in the provider config": { + ld: LocationDescription{ + ProviderRegion: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1"), + }, + ExpectedRegion: types.StringValue("us-central1"), + }, + "shortens region values when derived from a zone self link set in the provider config": { + ld: LocationDescription{ + ProviderZone: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a"), + }, + ExpectedRegion: types.StringValue("us-central1"), }, "returns the value of the region field in provider config when region/zone is unset in resource config": { ld: LocationDescription{ @@ -230,11 +242,11 @@ func TestLocationDescription_GetLocation(t *testing.T) { }, ExpectedLocation: types.StringValue("resource-location"), }, - "does not shorten the location value when it is set as a self link in the resource config": { + "shortens the location value when it is set as a self link in the resource config": { ld: LocationDescription{ ResourceLocation: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/locations/resource-location"), }, - ExpectedLocation: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/locations/resource-location"), + ExpectedLocation: types.StringValue("resource-location"), }, "returns the region value set in the resource config when location is not in the schema": { ld: LocationDescription{ @@ -243,11 +255,11 @@ func TestLocationDescription_GetLocation(t *testing.T) { }, ExpectedLocation: types.StringValue("resource-region"), }, - "does not shorten the region value when it is set as a self link in the resource config": { + "shortens the region value when it is set as a self link in the resource config": { ld: LocationDescription{ ResourceRegion: 
types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/regions/resource-region"), }, - ExpectedLocation: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/regions/resource-region"), + ExpectedLocation: types.StringValue("resource-region"), }, "returns the zone value set in the resource config when neither location nor region in the schema": { ld: LocationDescription{ @@ -261,18 +273,24 @@ func TestLocationDescription_GetLocation(t *testing.T) { }, ExpectedLocation: types.StringValue("resource-zone-a"), }, - "returns the zone value from the provider config when none of location/region/zone are set in the resource config": { + "returns the region value from the provider config when none of location/region/zone are set in the resource config": { ld: LocationDescription{ - ProviderRegion: types.StringValue("provider-region"), // unused + ProviderRegion: types.StringValue("provider-region"), // Preferred to use region value over zone value if both are set ProviderZone: types.StringValue("provider-zone-a"), }, + ExpectedLocation: types.StringValue("provider-region"), + }, + "returns the zone value from the provider config when none of location/region/zone are set in the resource config and region is not set in the provider config": { + ld: LocationDescription{ + ProviderZone: types.StringValue("provider-zone-a"), + }, ExpectedLocation: types.StringValue("provider-zone-a"), }, - "does not shorten the zone value when it is set as a self link in the provider config": { + "shortens the zone value when it is set as a self link in the provider config": { ld: LocationDescription{ ProviderZone: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/zones/provider-zone-a"), }, - ExpectedLocation: types.StringValue("https://www.googleapis.com/compute/v1/projects/my-project/zones/provider-zone-a"), + ExpectedLocation: types.StringValue("provider-zone-a"), }, // Handling of empty strings "returns the region value set in the resource config when location is an empty string": { @@ -299,13 +317,6 @@ func TestLocationDescription_GetLocation(t *testing.T) { }, ExpectedLocation: types.StringValue("provider-zone-a"), }, - // Error states - "does not use the region value set in the provider config": { - ld: LocationDescription{ - ProviderRegion: types.StringValue("provider-region"), - }, - ExpectedError: true, - }, "returns an error when none of location/region/zone are set on the resource, and neither region or zone is set on the provider": { ExpectedError: true, }, diff --git a/google/fwtransport/framework_config.go b/google/fwtransport/framework_config.go index e66e7133d49..234ad5f2c20 100644 --- a/google/fwtransport/framework_config.go +++ b/google/fwtransport/framework_config.go @@ -18,7 +18,6 @@ import ( "google.golang.org/grpc" "github.com/hashicorp/go-cleanhttp" - "github.com/hashicorp/terraform-plugin-framework/attr" "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/hashicorp/terraform-plugin-framework/types" "github.com/hashicorp/terraform-plugin-log/tflog" @@ -75,7 +74,6 @@ type FrameworkProviderConfig struct { Cloudfunctions2BasePath string CloudIdentityBasePath string CloudIdsBasePath string - CloudIotBasePath string CloudRunBasePath string CloudRunV2BasePath string CloudSchedulerBasePath string @@ -104,7 +102,6 @@ type FrameworkProviderConfig struct { EssentialContactsBasePath string FilestoreBasePath string FirestoreBasePath string - GameServicesBasePath string GKEBackupBasePath string GKEHubBasePath string 
GKEHub2BasePath string @@ -154,14 +151,6 @@ type FrameworkProviderConfig struct { // it is pulled out so that we can manually call this from our testing provider as well func (p *FrameworkProviderConfig) LoadAndValidateFramework(ctx context.Context, data *fwmodels.ProviderModel, tfVersion string, diags *diag.Diagnostics, providerversion string) { - // Make the plugin framwork code behave like the SDK by ignoring zero values. This means re-setting zero values to null. - // This is added to fix https://github.com/hashicorp/terraform-provider-google/issues/14255 in a v4.x.x release - // TODO(SarahFrench) remove as part of https://github.com/hashicorp/terraform-provider-google/issues/14447 in 5.0.0 - p.HandleZeroValues(ctx, data, diags) - if diags.HasError() { - return - } - // Set defaults if needed p.HandleDefaults(ctx, data, diags) if diags.HasError() { @@ -221,7 +210,6 @@ func (p *FrameworkProviderConfig) LoadAndValidateFramework(ctx context.Context, p.Cloudfunctions2BasePath = data.Cloudfunctions2CustomEndpoint.ValueString() p.CloudIdentityBasePath = data.CloudIdentityCustomEndpoint.ValueString() p.CloudIdsBasePath = data.CloudIdsCustomEndpoint.ValueString() - p.CloudIotBasePath = data.CloudIotCustomEndpoint.ValueString() p.CloudRunBasePath = data.CloudRunCustomEndpoint.ValueString() p.CloudRunV2BasePath = data.CloudRunV2CustomEndpoint.ValueString() p.CloudSchedulerBasePath = data.CloudSchedulerCustomEndpoint.ValueString() @@ -250,7 +238,6 @@ func (p *FrameworkProviderConfig) LoadAndValidateFramework(ctx context.Context, p.EssentialContactsBasePath = data.EssentialContactsCustomEndpoint.ValueString() p.FilestoreBasePath = data.FilestoreCustomEndpoint.ValueString() p.FirestoreBasePath = data.FirestoreCustomEndpoint.ValueString() - p.GameServicesBasePath = data.GameServicesCustomEndpoint.ValueString() p.GKEBackupBasePath = data.GKEBackupCustomEndpoint.ValueString() p.GKEHubBasePath = data.GKEHubCustomEndpoint.ValueString() p.GKEHub2BasePath = data.GKEHub2CustomEndpoint.ValueString() @@ -307,77 +294,6 @@ func (p *FrameworkProviderConfig) LoadAndValidateFramework(ctx context.Context, p.RequestBatcherIam = transport_tpg.NewRequestBatcher("IAM", ctx, batchingConfig) } -// HandleZeroValues will make the plugin framework act like the SDK; zero value, particularly empty strings, are converted to null. -// This causes the plugin framework to treat the field as unset, just like how the SDK ignores empty strings. 
-func (p *FrameworkProviderConfig) HandleZeroValues(ctx context.Context, data *fwmodels.ProviderModel, diags *diag.Diagnostics) { - - // Change empty strings to null values - if data.AccessToken.Equal(types.StringValue("")) { - data.AccessToken = types.StringNull() - } - if data.BillingProject.Equal(types.StringValue("")) { - data.BillingProject = types.StringNull() - } - if data.Credentials.Equal(types.StringValue("")) { - data.Credentials = types.StringNull() - } - if data.ImpersonateServiceAccount.Equal(types.StringValue("")) { - data.ImpersonateServiceAccount = types.StringNull() - } - if data.Project.Equal(types.StringValue("")) { - data.Project = types.StringNull() - } - if data.Region.Equal(types.StringValue("")) { - data.Region = types.StringNull() - } - if data.RequestReason.Equal(types.StringValue("")) { - data.RequestReason = types.StringNull() - } - if data.RequestTimeout.Equal(types.StringValue("")) { - data.RequestTimeout = types.StringNull() - } - if data.Zone.Equal(types.StringValue("")) { - data.Zone = types.StringNull() - } - - // Change lists that aren't null or unknown with length of zero to null lists - if !data.Scopes.IsNull() && !data.Scopes.IsUnknown() && (len(data.Scopes.Elements()) == 0) { - data.Scopes = types.ListNull(types.StringType) - } - if !data.ImpersonateServiceAccountDelegates.IsNull() && !data.ImpersonateServiceAccountDelegates.IsUnknown() && (len(data.ImpersonateServiceAccountDelegates.Elements()) == 0) { - data.ImpersonateServiceAccountDelegates = types.ListNull(types.StringType) - } - - // Batching implementation will change in future, but this code will be removed in 5.0.0 so may be unaffected - if !data.Batching.IsNull() && !data.Batching.IsUnknown() && (len(data.Batching.Elements()) > 0) { - var pbConfigs []fwmodels.ProviderBatching - d := data.Batching.ElementsAs(ctx, &pbConfigs, true) - diags.Append(d...) - if diags.HasError() { - return - } - if pbConfigs[0].SendAfter.Equal(types.StringValue("")) { - pbConfigs[0].SendAfter = types.StringNull() // Convert empty string to null - } - b, _ := types.ObjectValue( - map[string]attr.Type{ - "enable_batching": types.BoolType, - "send_after": types.StringType, - }, - map[string]attr.Value{ - "enable_batching": pbConfigs[0].EnableBatching, - "send_after": pbConfigs[0].SendAfter, - }, - ) - newBatching, d := types.ListValue(types.ObjectType{}.WithAttributeTypes(fwmodels.ProviderBatchingAttributes), []attr.Value{b}) - diags.Append(d...) 
- if diags.HasError() { - return - } - data.Batching = newBatching - } -} - // HandleDefaults will handle all the defaults necessary in the provider func (p *FrameworkProviderConfig) HandleDefaults(ctx context.Context, data *fwmodels.ProviderModel, diags *diag.Diagnostics) { if (data.AccessToken.IsNull() || data.AccessToken.IsUnknown()) && (data.Credentials.IsNull() || data.Credentials.IsUnknown()) { @@ -698,14 +614,6 @@ func (p *FrameworkProviderConfig) HandleDefaults(ctx context.Context, data *fwmo data.CloudIdsCustomEndpoint = types.StringValue(customEndpoint.(string)) } } - if data.CloudIotCustomEndpoint.IsNull() { - customEndpoint := transport_tpg.MultiEnvDefault([]string{ - "GOOGLE_CLOUD_IOT_CUSTOM_ENDPOINT", - }, transport_tpg.DefaultBasePaths[transport_tpg.CloudIotBasePathKey]) - if customEndpoint != nil { - data.CloudIotCustomEndpoint = types.StringValue(customEndpoint.(string)) - } - } if data.CloudRunCustomEndpoint.IsNull() { customEndpoint := transport_tpg.MultiEnvDefault([]string{ "GOOGLE_CLOUD_RUN_CUSTOM_ENDPOINT", @@ -930,14 +838,6 @@ func (p *FrameworkProviderConfig) HandleDefaults(ctx context.Context, data *fwmo data.FirestoreCustomEndpoint = types.StringValue(customEndpoint.(string)) } } - if data.GameServicesCustomEndpoint.IsNull() { - customEndpoint := transport_tpg.MultiEnvDefault([]string{ - "GOOGLE_GAME_SERVICES_CUSTOM_ENDPOINT", - }, transport_tpg.DefaultBasePaths[transport_tpg.GameServicesBasePathKey]) - if customEndpoint != nil { - data.GameServicesCustomEndpoint = types.StringValue(customEndpoint.(string)) - } - } if data.GKEBackupCustomEndpoint.IsNull() { customEndpoint := transport_tpg.MultiEnvDefault([]string{ "GOOGLE_GKE_BACKUP_CUSTOM_ENDPOINT", diff --git a/google/fwtransport/framework_config_test.go b/google/fwtransport/framework_config_test.go index 929af2c9985..45d2e546899 100644 --- a/google/fwtransport/framework_config_test.go +++ b/google/fwtransport/framework_config_test.go @@ -103,22 +103,22 @@ func TestFrameworkProvider_LoadAndValidateFramework_project(t *testing.T) { ExpectedConfigStructValue: types.StringNull(), }, // Handling empty strings in config - "when project is set as an empty string the field is treated as if it's unset, without error": { + "when project is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ Project: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), - ExpectedConfigStructValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, - "when project is set as an empty string an environment variable will be used": { + "when project is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ Project: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_PROJECT": "project-from-GOOGLE_PROJECT", }, - ExpectedDataModelValue: types.StringValue("project-from-GOOGLE_PROJECT"), - ExpectedConfigStructValue: types.StringValue("project-from-GOOGLE_PROJECT"), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, // Handling unknown values "when project is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -265,15 +265,15 @@ func TestFrameworkProvider_LoadAndValidateFramework_credentials(t *testing.T) { }, ExpectedDataModelValue: types.StringNull(), }, - // Handling empty strings in config - "when 
credentials is set to an empty string in the config (and access_token unset), GOOGLE_APPLICATION_CREDENTIALS is used": { + // Error states + "when credentials is set to an empty string in the config the value isn't ignored and results in an error": { ConfigValues: fwmodels.ProviderModel{ Credentials: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_APPLICATION_CREDENTIALS": transport_tpg.TestFakeCredentialsPath, // needs to be a path to a file when used by code }, - ExpectedDataModelValue: types.StringNull(), + ExpectError: true, }, // NOTE: these tests can't run in Cloud Build due to ADC locating credentials despite `GOOGLE_APPLICATION_CREDENTIALS` being unset // See https://cloud.google.com/docs/authentication/application-default-credentials#search_order @@ -436,22 +436,22 @@ func TestFrameworkProvider_LoadAndValidateFramework_billingProject(t *testing.T) ExpectedConfigStructValue: types.StringNull(), }, // Handling empty strings in config - "when billing_project is set as an empty string the field is treated as if it's unset, without error": { + "when billing_project is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ BillingProject: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), - ExpectedConfigStructValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, - "when billing_project is set as an empty string an environment variable will be used": { + "when billing_project is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ BillingProject: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_BILLING_PROJECT": "billing-project-from-env", }, - ExpectedDataModelValue: types.StringValue("billing-project-from-env"), - ExpectedConfigStructValue: types.StringValue("billing-project-from-env"), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, } @@ -550,22 +550,22 @@ func TestFrameworkProvider_LoadAndValidateFramework_region(t *testing.T) { ExpectedConfigStructValue: types.StringNull(), }, // Handling empty strings in config - "when region is set as an empty string the field is treated as if it's unset, without error": { + "when region is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ Region: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), - ExpectedConfigStructValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, - "when region is set as an empty string an environment variable will be used": { + "when region is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ Region: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_REGION": "region-from-env", }, - ExpectedDataModelValue: types.StringValue("region-from-env"), - ExpectedConfigStructValue: types.StringValue("region-from-env"), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, // Handling unknown values "when region is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -700,22 +700,22 @@ func TestFrameworkProvider_LoadAndValidateFramework_zone(t *testing.T) { 
ExpectedConfigStructValue: types.StringNull(), }, // Handling empty strings in config - "when zone is set as an empty string the field is treated as if it's unset, without error": { + "when zone is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ Zone: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), - ExpectedConfigStructValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, - "when zone is set as an empty string an environment variable will be used": { + "when zone is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ Zone: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_ZONE": "zone-from-env", }, - ExpectedDataModelValue: types.StringValue("zone-from-env"), - ExpectedConfigStructValue: types.StringValue("zone-from-env"), + ExpectedDataModelValue: types.StringValue(""), + ExpectedConfigStructValue: types.StringValue(""), }, // Handling unknown values "when zone is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -817,21 +817,20 @@ func TestFrameworkProvider_LoadAndValidateFramework_accessToken(t *testing.T) { ExpectedDataModelValue: types.StringNull(), }, // Handling empty strings in config - "when access_token is set as an empty string the field is treated as if it's unset, without error (as long as credentials supplied in its absence)": { + "when access_token is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ AccessToken: types.StringValue(""), - Credentials: types.StringValue(transport_tpg.TestFakeCredentialsPath), }, - ExpectedDataModelValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), }, - "when access_token is set as an empty string in the config, an environment variable is used": { + "when access_token is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ AccessToken: types.StringValue(""), }, EnvVariables: map[string]string{ "GOOGLE_OAUTH_ACCESS_TOKEN": "value-from-GOOGLE_OAUTH_ACCESS_TOKEN", }, - ExpectedDataModelValue: types.StringValue("value-from-GOOGLE_OAUTH_ACCESS_TOKEN"), + ExpectedDataModelValue: types.StringValue(""), }, // Handling unknown values "when access_token is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -1060,20 +1059,20 @@ func TestFrameworkProvider_LoadAndValidateFramework_impersonateServiceAccount(t ExpectedDataModelValue: types.StringNull(), }, // Handling empty strings in config - "when impersonate_service_account is set as an empty string the field is treated as if it's unset, without error": { + "when impersonate_service_account is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ ImpersonateServiceAccount: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), }, - "when impersonate_service_account is set as an empty string in the config, an environment variable is used": { + "when impersonate_service_account is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ ImpersonateServiceAccount: types.StringValue(""), }, 
EnvVariables: map[string]string{ "GOOGLE_IMPERSONATE_SERVICE_ACCOUNT": "value-from-env@example.com", }, - ExpectedDataModelValue: types.StringValue("value-from-env@example.com"), + ExpectedDataModelValue: types.StringValue(""), }, // Handling unknown values "when impersonate_service_account is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -1164,9 +1163,9 @@ func TestFrameworkProvider_LoadAndValidateFramework_impersonateServiceAccountDel ExpectedNull: true, }, // Handling empty values in config - "when impersonate_service_account_delegates is set as an empty array the field is treated as if it's unset, without error": { + "when impersonate_service_account_delegates is set as an empty array, that value isn't ignored": { ImpersonateServiceAccountDelegatesValue: []string{}, - ExpectedDataModelValue: nil, + ExpectedDataModelValue: []string{}, }, // Handling unknown values "when impersonate_service_account_delegates is an unknown value, the provider treats it as if it's unset, without error": { @@ -1373,20 +1372,20 @@ func TestFrameworkProvider_LoadAndValidateFramework_requestReason(t *testing.T) ExpectedDataModelValue: types.StringNull(), }, // Handling empty strings in config - "when request_reason is set as an empty string in the config it is overridden by environment variables": { + "when request_reason is set as an empty string, the empty string is not ignored in favor of an environment variable": { ConfigValues: fwmodels.ProviderModel{ RequestReason: types.StringValue(""), }, EnvVariables: map[string]string{ "CLOUDSDK_CORE_REQUEST_REASON": "foo", }, - ExpectedDataModelValue: types.StringValue("foo"), + ExpectedDataModelValue: types.StringValue(""), }, - "when request_reason is set as an empty string in the config the field is treated as if it's unset, without error": { + "when request_reason is set as an empty string the empty string is used and not ignored": { ConfigValues: fwmodels.ProviderModel{ RequestReason: types.StringValue(""), }, - ExpectedDataModelValue: types.StringNull(), + ExpectedDataModelValue: types.StringValue(""), }, // Handling unknown values "when request_reason is an unknown value, the provider treats it as if it's unset and uses an environment variable instead": { @@ -1468,6 +1467,12 @@ func TestFrameworkProvider_LoadAndValidateFramework_requestTimeout(t *testing.T) }, ExpectError: true, }, + "when request_timeout is set as an empty string, the empty string isn't ignored and an error will occur": { + ConfigValues: fwmodels.ProviderModel{ + RequestTimeout: types.StringValue(""), + }, + ExpectError: true, + }, // In the SDK version of the provider config code, this scenario results in a value of "0s" // instead of "120s", but the final 'effective' value is also "120s" // See : https://github.com/hashicorp/terraform-provider-google/blob/09cb850ee64bcd78e4457df70905530c1ed75f19/google/transport/config.go#L1228-L1233 @@ -1477,13 +1482,6 @@ func TestFrameworkProvider_LoadAndValidateFramework_requestTimeout(t *testing.T) }, ExpectedDataModelValue: types.StringValue("120s"), }, - // Handling empty strings in config - "when request_timeout is set as an empty string, the default value is 120s.": { - ConfigValues: fwmodels.ProviderModel{ - RequestTimeout: types.StringValue(""), - }, - ExpectedDataModelValue: types.StringValue("120s"), - }, // Handling unknown values "when request_timeout is an unknown value, the provider treats it as if it's unset and uses the default value 120s": { ConfigValues: fwmodels.ProviderModel{ 
@@ -1587,13 +1585,6 @@ func TestFrameworkProvider_LoadAndValidateFramework_batching(t *testing.T) { ExpectEnableBatchingValue: types.BoolValue(true), ExpectSendAfterValue: types.StringValue("3s"), }, - // Handling empty strings in config - "when batching is configured with send_after as an empty string, send_after will be set to a default value": { - EnableBatchingValue: types.BoolValue(true), - SendAfterValue: types.StringValue(""), - ExpectEnableBatchingValue: types.BoolValue(true), - ExpectSendAfterValue: types.StringValue("10s"), // When batching block is present but has missing arguments inside, default is 10s - }, // Handling unknown values "when batching is an unknown value, the provider treats it as if it's unset (align to SDK behaviour)": { SetBatchingAsUnknown: true, @@ -1613,6 +1604,11 @@ func TestFrameworkProvider_LoadAndValidateFramework_batching(t *testing.T) { ExpectSendAfterValue: types.StringValue("45s"), }, // Error states + "when batching is configured with send_after as an empty string, the empty string is not ignored and results in an error": { + EnableBatchingValue: types.BoolValue(true), + SendAfterValue: types.StringValue(""), + ExpectError: true, + }, "if batching is configured with send_after as an invalid value, there's an error": { SendAfterValue: types.StringValue("invalid value"), ExpectError: true, diff --git a/google/provider/provider.go b/google/provider/provider.go index e9164a44f4b..20b099e1bb1 100644 --- a/google/provider/provider.go +++ b/google/provider/provider.go @@ -38,7 +38,6 @@ import ( "github.com/hashicorp/terraform-provider-google/google/services/cloudfunctions2" "github.com/hashicorp/terraform-provider-google/google/services/cloudidentity" "github.com/hashicorp/terraform-provider-google/google/services/cloudids" - "github.com/hashicorp/terraform-provider-google/google/services/cloudiot" "github.com/hashicorp/terraform-provider-google/google/services/cloudrun" "github.com/hashicorp/terraform-provider-google/google/services/cloudrunv2" "github.com/hashicorp/terraform-provider-google/google/services/cloudscheduler" @@ -67,7 +66,6 @@ import ( "github.com/hashicorp/terraform-provider-google/google/services/essentialcontacts" "github.com/hashicorp/terraform-provider-google/google/services/filestore" "github.com/hashicorp/terraform-provider-google/google/services/firestore" - "github.com/hashicorp/terraform-provider-google/google/services/gameservices" "github.com/hashicorp/terraform-provider-google/google/services/gkebackup" "github.com/hashicorp/terraform-provider-google/google/services/gkehub" "github.com/hashicorp/terraform-provider-google/google/services/gkehub2" @@ -120,8 +118,6 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgiamresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/terraform-provider-google/google/verify" - - googleoauth "golang.org/x/oauth2/google" ) // Provider returns a *schema.Provider. 
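The next hunk attaches a `ValidateEmptyStrings` ValidateFunc to `access_token`, `impersonate_service_account`, `project`, `billing_project`, `region`, and `zone`, but the function body is outside this diff. A minimal sketch of such a validator, assuming the standard SDK `ValidateFunc` signature; the helper name and body below are illustrative, not the provider's actual implementation:

```go
package main

import "fmt"

// validateNonEmptyString follows the terraform-plugin-sdk ValidateFunc contract:
// it receives the configured value and the field name, and returns warnings and errors.
// It is an assumed stand-in for the ValidateEmptyStrings helper referenced below.
func validateNonEmptyString(v interface{}, k string) (warnings []string, errors []error) {
	value, ok := v.(string)
	if !ok {
		errors = append(errors, fmt.Errorf("expected %q to be a string", k))
		return
	}
	if value == "" {
		errors = append(errors, fmt.Errorf("expected a non-empty string for %q", k))
	}
	return
}

func main() {
	_, errs := validateNonEmptyString("", "project")
	fmt.Println(len(errs)) // 1
}
```

This mirrors the plugin-framework `NonEmptyStringValidator` added earlier in the diff, so the SDK-based and framework-based provider schemas reject empty strings consistently.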
@@ -150,12 +146,14 @@ func Provider() *schema.Provider { "access_token": { Type: schema.TypeString, Optional: true, + ValidateFunc: ValidateEmptyStrings, ConflictsWith: []string{"credentials"}, }, "impersonate_service_account": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: ValidateEmptyStrings, }, "impersonate_service_account_delegates": { @@ -165,23 +163,27 @@ func Provider() *schema.Provider { }, "project": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: ValidateEmptyStrings, }, "billing_project": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: ValidateEmptyStrings, }, "region": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: ValidateEmptyStrings, }, "zone": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + ValidateFunc: ValidateEmptyStrings, }, "scopes": { @@ -224,6 +226,12 @@ func Provider() *schema.Provider { Optional: true, }, + "default_labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + // Generated Products "access_approval_custom_endpoint": { Type: schema.TypeString, @@ -355,11 +363,6 @@ func Provider() *schema.Provider { Optional: true, ValidateFunc: transport_tpg.ValidateCustomEndpoint, }, - "cloud_iot_custom_endpoint": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: transport_tpg.ValidateCustomEndpoint, - }, "cloud_run_custom_endpoint": { Type: schema.TypeString, Optional: true, @@ -500,11 +503,6 @@ func Provider() *schema.Provider { Optional: true, ValidateFunc: transport_tpg.ValidateCustomEndpoint, }, - "game_services_custom_endpoint": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: transport_tpg.ValidateCustomEndpoint, - }, "gke_backup_custom_endpoint": { Type: schema.TypeString, Optional: true, @@ -836,7 +834,6 @@ func DatasourceMapWithErrors() (map[string]*schema.Resource, error) { "google_container_registry_repository": containeranalysis.DataSourceGoogleContainerRepo(), "google_dataproc_metastore_service": dataprocmetastore.DataSourceDataprocMetastoreService(), "google_datastream_static_ips": datastream.DataSourceGoogleDatastreamStaticIps(), - "google_game_services_game_server_deployment_rollout": gameservices.DataSourceGameServicesGameServerDeploymentRollout(), "google_iam_policy": resourcemanager.DataSourceGoogleIamPolicy(), "google_iam_role": resourcemanager.DataSourceGoogleIamRole(), "google_iam_testable_permissions": resourcemanager.DataSourceGoogleIamTestablePermissions(), @@ -914,7 +911,6 @@ func DatasourceMapWithErrors() (map[string]*schema.Resource, error) { "google_cloudbuildv2_connection_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudbuildv2.Cloudbuildv2ConnectionIamSchema, cloudbuildv2.Cloudbuildv2ConnectionIamUpdaterProducer), "google_cloudfunctions_function_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudfunctions.CloudFunctionsCloudFunctionIamSchema, cloudfunctions.CloudFunctionsCloudFunctionIamUpdaterProducer), "google_cloudfunctions2_function_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudfunctions2.Cloudfunctions2functionIamSchema, cloudfunctions2.Cloudfunctions2functionIamUpdaterProducer), - "google_cloudiot_registry_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudiot.CloudIotDeviceRegistryIamSchema, cloudiot.CloudIotDeviceRegistryIamUpdaterProducer), 
"google_cloud_run_service_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudrun.CloudRunServiceIamSchema, cloudrun.CloudRunServiceIamUpdaterProducer), "google_cloud_run_v2_job_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudrunv2.CloudRunV2JobIamSchema, cloudrunv2.CloudRunV2JobIamUpdaterProducer), "google_cloud_run_v2_service_iam_policy": tpgiamresource.DataSourceIamPolicy(cloudrunv2.CloudRunV2ServiceIamSchema, cloudrunv2.CloudRunV2ServiceIamUpdaterProducer), @@ -994,9 +990,9 @@ func DatasourceMapWithErrors() (map[string]*schema.Resource, error) { }) } -// Generated resources: 328 -// Generated IAM resources: 210 -// Total generated resources: 538 +// Generated resources: 321 +// Generated IAM resources: 207 +// Total generated resources: 528 func ResourceMap() map[string]*schema.Resource { resourceMap, _ := ResourceMapWithErrors() return resourceMap @@ -1121,11 +1117,6 @@ func ResourceMapWithErrors() (map[string]*schema.Resource, error) { "google_cloud_identity_group": cloudidentity.ResourceCloudIdentityGroup(), "google_cloud_identity_group_membership": cloudidentity.ResourceCloudIdentityGroupMembership(), "google_cloud_ids_endpoint": cloudids.ResourceCloudIdsEndpoint(), - "google_cloudiot_device": cloudiot.ResourceCloudIotDevice(), - "google_cloudiot_registry": cloudiot.ResourceCloudIotDeviceRegistry(), - "google_cloudiot_registry_iam_binding": tpgiamresource.ResourceIamBinding(cloudiot.CloudIotDeviceRegistryIamSchema, cloudiot.CloudIotDeviceRegistryIamUpdaterProducer, cloudiot.CloudIotDeviceRegistryIdParseFunc), - "google_cloudiot_registry_iam_member": tpgiamresource.ResourceIamMember(cloudiot.CloudIotDeviceRegistryIamSchema, cloudiot.CloudIotDeviceRegistryIamUpdaterProducer, cloudiot.CloudIotDeviceRegistryIdParseFunc), - "google_cloudiot_registry_iam_policy": tpgiamresource.ResourceIamPolicy(cloudiot.CloudIotDeviceRegistryIamSchema, cloudiot.CloudIotDeviceRegistryIamUpdaterProducer, cloudiot.CloudIotDeviceRegistryIdParseFunc), "google_cloud_run_domain_mapping": cloudrun.ResourceCloudRunDomainMapping(), "google_cloud_run_service": cloudrun.ResourceCloudRunService(), "google_cloud_run_service_iam_binding": tpgiamresource.ResourceIamBinding(cloudrun.CloudRunServiceIamSchema, cloudrun.CloudRunServiceIamUpdaterProducer, cloudrun.CloudRunServiceIdParseFunc), @@ -1328,11 +1319,6 @@ func ResourceMapWithErrors() (map[string]*schema.Resource, error) { "google_firestore_document": firestore.ResourceFirestoreDocument(), "google_firestore_field": firestore.ResourceFirestoreField(), "google_firestore_index": firestore.ResourceFirestoreIndex(), - "google_game_services_game_server_cluster": gameservices.ResourceGameServicesGameServerCluster(), - "google_game_services_game_server_config": gameservices.ResourceGameServicesGameServerConfig(), - "google_game_services_game_server_deployment": gameservices.ResourceGameServicesGameServerDeployment(), - "google_game_services_game_server_deployment_rollout": gameservices.ResourceGameServicesGameServerDeploymentRollout(), - "google_game_services_realm": gameservices.ResourceGameServicesRealm(), "google_gke_backup_backup_plan": gkebackup.ResourceGKEBackupBackupPlan(), "google_gke_backup_backup_plan_iam_binding": tpgiamresource.ResourceIamBinding(gkebackup.GKEBackupBackupPlanIamSchema, gkebackup.GKEBackupBackupPlanIamUpdaterProducer, gkebackup.GKEBackupBackupPlanIdParseFunc), "google_gke_backup_backup_plan_iam_member": tpgiamresource.ResourceIamMember(gkebackup.GKEBackupBackupPlanIamSchema, gkebackup.GKEBackupBackupPlanIamUpdaterProducer, 
gkebackup.GKEBackupBackupPlanIdParseFunc), @@ -1761,6 +1747,13 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr config.Scopes[i] = scope.(string) } + config.DefaultLabels = make(map[string]string) + defaultLabels := d.Get("default_labels").(map[string]interface{}) + + for k, v := range defaultLabels { + config.DefaultLabels[k] = v.(string) + } + batchCfg, err := transport_tpg.ExpandProviderBatchingConfig(d.Get("batching")) if err != nil { return nil, diag.FromErr(err) @@ -1794,7 +1787,6 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr config.Cloudfunctions2BasePath = d.Get("cloudfunctions2_custom_endpoint").(string) config.CloudIdentityBasePath = d.Get("cloud_identity_custom_endpoint").(string) config.CloudIdsBasePath = d.Get("cloud_ids_custom_endpoint").(string) - config.CloudIotBasePath = d.Get("cloud_iot_custom_endpoint").(string) config.CloudRunBasePath = d.Get("cloud_run_custom_endpoint").(string) config.CloudRunV2BasePath = d.Get("cloud_run_v2_custom_endpoint").(string) config.CloudSchedulerBasePath = d.Get("cloud_scheduler_custom_endpoint").(string) @@ -1823,7 +1815,6 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr config.EssentialContactsBasePath = d.Get("essential_contacts_custom_endpoint").(string) config.FilestoreBasePath = d.Get("filestore_custom_endpoint").(string) config.FirestoreBasePath = d.Get("firestore_custom_endpoint").(string) - config.GameServicesBasePath = d.Get("game_services_custom_endpoint").(string) config.GKEBackupBasePath = d.Get("gke_backup_custom_endpoint").(string) config.GKEHubBasePath = d.Get("gke_hub_custom_endpoint").(string) config.GKEHub2BasePath = d.Get("gke_hub2_custom_endpoint").(string) @@ -1896,23 +1887,6 @@ func ProviderConfigure(ctx context.Context, d *schema.ResourceData, p *schema.Pr return transport_tpg.ProviderDCLConfigure(d, &config), nil } -func ValidateCredentials(v interface{}, k string) (warnings []string, errors []error) { - if v == nil || v.(string) == "" { - return - } - creds := v.(string) - // if this is a path and we can stat it, assume it's ok - if _, err := os.Stat(creds); err == nil { - return - } - if _, err := googleoauth.CredentialsFromJSON(context.Background(), []byte(creds)); err != nil { - errors = append(errors, - fmt.Errorf("JSON credentials are not valid: %s", err)) - } - - return -} - func mergeResourceMaps(ms ...map[string]*schema.Resource) (map[string]*schema.Resource, error) { merged := make(map[string]*schema.Resource) duplicates := []string{} diff --git a/google/provider/provider_internal_test.go b/google/provider/provider_internal_test.go index 5f43794eea1..3dde62846fc 100644 --- a/google/provider/provider_internal_test.go +++ b/google/provider/provider_internal_test.go @@ -44,10 +44,13 @@ func TestProvider_ValidateCredentials(t *testing.T) { return string(contents) }, }, - "configuring credentials as an empty string is valid": { + "configuring credentials as an empty string is not valid": { ConfigValue: func(t *testing.T) interface{} { return "" }, + ExpectedErrors: []error{ + errors.New("expected a non-empty string"), + }, }, "leaving credentials unconfigured is valid": { ValueNotProvided: true, @@ -69,15 +72,65 @@ func TestProvider_ValidateCredentials(t *testing.T) { // Assert if len(ws) != len(tc.ExpectedWarnings) { - t.Errorf("Expected %d warnings, got %d: %v", len(tc.ExpectedWarnings), len(ws), ws) + t.Fatalf("Expected %d warnings, got %d: %v", len(tc.ExpectedWarnings), len(ws), ws) + } + if len(es) != 
len(tc.ExpectedErrors) { + t.Fatalf("Expected %d errors, got %d: %v", len(tc.ExpectedErrors), len(es), es) + } + + if len(tc.ExpectedErrors) > 0 && len(es) > 0 { + if es[0].Error() != tc.ExpectedErrors[0].Error() { + t.Fatalf("Expected first error to be \"%s\", got \"%s\"", tc.ExpectedErrors[0], es[0]) + } + } + }) + } +} + +func TestProvider_ValidateEmptyStrings(t *testing.T) { + cases := map[string]struct { + ConfigValue interface{} + ValueNotProvided bool + ExpectedWarnings []string + ExpectedErrors []error + }{ + "non-empty strings are valid": { + ConfigValue: "foobar", + }, + "unconfigured values are valid": { + ValueNotProvided: true, + }, + "empty strings are not valid": { + ConfigValue: "", + ExpectedErrors: []error{ + errors.New("expected a non-empty string"), + }, + }, + } + for tn, tc := range cases { + t.Run(tn, func(t *testing.T) { + + // Arrange + var configValue interface{} + if !tc.ValueNotProvided { + configValue = tc.ConfigValue + } + + // Act + // Note: second argument is currently unused by the function but is necessary to fulfill the SchemaValidateFunc type's function signature + ws, es := provider.ValidateEmptyStrings(configValue, "") + + // Assert + if len(ws) != len(tc.ExpectedWarnings) { + t.Fatalf("Expected %d warnings, got %d: %v", len(tc.ExpectedWarnings), len(ws), ws) } if len(es) != len(tc.ExpectedErrors) { - t.Errorf("Expected %d errors, got %d: %v", len(tc.ExpectedErrors), len(es), es) + t.Fatalf("Expected %d errors, got %d: %v", len(tc.ExpectedErrors), len(es), es) } - if len(tc.ExpectedErrors) > 0 { + if len(tc.ExpectedErrors) > 0 && len(es) > 0 { if es[0].Error() != tc.ExpectedErrors[0].Error() { - t.Errorf("Expected first error to be \"%s\", got \"%s\"", tc.ExpectedErrors[0], es[0]) + t.Fatalf("Expected first error to be \"%s\", got \"%s\"", tc.ExpectedErrors[0], es[0]) } } }) diff --git a/google/provider/provider_test.go b/google/provider/provider_test.go index 704c7c7f2f4..b7da8a4eaf1 100644 --- a/google/provider/provider_test.go +++ b/google/provider/provider_test.go @@ -295,6 +295,75 @@ func TestAccProviderCredentialsUnknownValue(t *testing.T) { }) } +func TestAccProviderEmptyStrings(t *testing.T) { + t.Parallel() + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + // No TestDestroy since that's not really the point of this test + Steps: []resource.TestStep{ + // When no values are set in the provider block there are no errors + // This test case is a control to show validation doesn't accidentally flag unset fields + // The "" argument is a lack of key = value being passed into the provider block + { + Config: testAccProvider_checkPlanTimeErrors("", acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + }, + // credentials as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`credentials = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // access_token as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`access_token = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // impersonate_service_account as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`impersonate_service_account = ""`, 
acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // project as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`project = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // billing_project as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`billing_project = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // region as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`region = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + // zone as an empty string causes a validation error + { + Config: testAccProvider_checkPlanTimeErrors(`zone = ""`, acctest.RandString(t, 10)), + PlanOnly: true, + ExpectNonEmptyPlan: true, + ExpectError: regexp.MustCompile(`expected a non-empty string`), + }, + }, + }) +} + func testAccProviderBasePath_setBasePath(endpoint, name string) string { return fmt.Sprintf(` provider "google" { @@ -542,3 +611,16 @@ resource "google_firebase_project" "this" { ] }`, credentials, pid, pid, pid, org, billing) } + +func testAccProvider_checkPlanTimeErrors(providerArgument, randString string) string { + return fmt.Sprintf(` +provider "google" { + %s +} + +# A random resource so that the test can generate a plan (can't check validation errors when plan is empty) +resource "google_pubsub_topic" "example" { + name = "tf-test-planned-resource-%s" +} +`, providerArgument, randString) +} diff --git a/google/provider/provider_validators.go b/google/provider/provider_validators.go new file mode 100644 index 00000000000..7c5dcc7ebd9 --- /dev/null +++ b/google/provider/provider_validators.go @@ -0,0 +1,49 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package provider + +import ( + "context" + "fmt" + "os" + + googleoauth "golang.org/x/oauth2/google" +) + +func ValidateCredentials(v interface{}, k string) (warnings []string, errors []error) { + if v == nil { + return + } + creds := v.(string) + + // reject empty strings + if v.(string) == "" { + errors = append(errors, + fmt.Errorf("expected a non-empty string")) + return + } + + // if this is a path and we can stat it, assume it's ok + if _, err := os.Stat(creds); err == nil { + return + } + if _, err := googleoauth.CredentialsFromJSON(context.Background(), []byte(creds)); err != nil { + errors = append(errors, + fmt.Errorf("JSON credentials are not valid: %s", err)) + } + + return +} + +func ValidateEmptyStrings(v interface{}, k string) (warnings []string, errors []error) { + if v == nil { + return + } + + if v.(string) == "" { + errors = append(errors, + fmt.Errorf("expected a non-empty string")) + } + + return +} diff --git a/google/services/accessapproval/data_source_access_approval_folder_service_account.go b/google/services/accessapproval/data_source_access_approval_folder_service_account.go index 33e3406d667..e281ee15964 100644 --- a/google/services/accessapproval/data_source_access_approval_folder_service_account.go +++ b/google/services/accessapproval/data_source_access_approval_folder_service_account.go @@ -58,7 +58,7 @@ func dataSourceAccessApprovalFolderServiceAccountRead(d *schema.ResourceData, me UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalFolderServiceAccount %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("AccessApprovalFolderServiceAccount %q", d.Id()), url) } if err := d.Set("name", res["name"]); err != nil { diff --git a/google/services/accessapproval/data_source_access_approval_organization_service_account.go b/google/services/accessapproval/data_source_access_approval_organization_service_account.go index 7d6011a9d1a..0c52d3c32ec 100644 --- a/google/services/accessapproval/data_source_access_approval_organization_service_account.go +++ b/google/services/accessapproval/data_source_access_approval_organization_service_account.go @@ -58,7 +58,7 @@ func dataSourceAccessApprovalOrganizationServiceAccountRead(d *schema.ResourceDa UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalOrganizationServiceAccount %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("AccessApprovalOrganizationServiceAccount %q", d.Id()), url) } if err := d.Set("name", res["name"]); err != nil { diff --git a/google/services/accessapproval/data_source_access_approval_project_service_account.go b/google/services/accessapproval/data_source_access_approval_project_service_account.go index 761f3ccc756..17fc0bc0b00 100644 --- a/google/services/accessapproval/data_source_access_approval_project_service_account.go +++ b/google/services/accessapproval/data_source_access_approval_project_service_account.go @@ -58,7 +58,7 @@ func dataSourceAccessApprovalProjectServiceAccountRead(d *schema.ResourceData, m UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("AccessApprovalProjectServiceAccount %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("AccessApprovalProjectServiceAccount %q", d.Id()), url) } if err := d.Set("name", res["name"]); err != nil { diff --git 
a/google/services/accessapproval/resource_folder_access_approval_settings.go b/google/services/accessapproval/resource_folder_access_approval_settings.go index 3aad595b38c..8e805031122 100644 --- a/google/services/accessapproval/resource_folder_access_approval_settings.go +++ b/google/services/accessapproval/resource_folder_access_approval_settings.go @@ -454,8 +454,8 @@ func resourceAccessApprovalFolderSettingsDelete(d *schema.ResourceData, meta int func resourceAccessApprovalFolderSettingsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "folders/(?P[^/]+)/accessApprovalSettings", - "(?P[^/]+)", + "^folders/(?P[^/]+)/accessApprovalSettings$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/accessapproval/resource_organization_access_approval_settings.go b/google/services/accessapproval/resource_organization_access_approval_settings.go index fa5f477051f..c98b4c8370a 100644 --- a/google/services/accessapproval/resource_organization_access_approval_settings.go +++ b/google/services/accessapproval/resource_organization_access_approval_settings.go @@ -414,8 +414,8 @@ func resourceAccessApprovalOrganizationSettingsDelete(d *schema.ResourceData, me func resourceAccessApprovalOrganizationSettingsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "organizations/(?P[^/]+)/accessApprovalSettings", - "(?P[^/]+)", + "^organizations/(?P[^/]+)/accessApprovalSettings$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/accessapproval/resource_project_access_approval_settings.go b/google/services/accessapproval/resource_project_access_approval_settings.go index a5f644fa2cc..35a47a06e57 100644 --- a/google/services/accessapproval/resource_project_access_approval_settings.go +++ b/google/services/accessapproval/resource_project_access_approval_settings.go @@ -445,8 +445,8 @@ func resourceAccessApprovalProjectSettingsDelete(d *schema.ResourceData, meta in func resourceAccessApprovalProjectSettingsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/accessApprovalSettings", - "(?P[^/]+)", + "^projects/(?P[^/]+)/accessApprovalSettings$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/accesscontextmanager/resource_access_context_manager_access_policy.go b/google/services/accesscontextmanager/resource_access_context_manager_access_policy.go index 46ba93d541a..60b60fff2cd 100644 --- a/google/services/accesscontextmanager/resource_access_context_manager_access_policy.go +++ b/google/services/accesscontextmanager/resource_access_context_manager_access_policy.go @@ -370,7 +370,7 @@ func resourceAccessContextManagerAccessPolicyDelete(d *schema.ResourceData, meta func resourceAccessContextManagerAccessPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go 
b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go index 395dd70a61a..2a59d1e68e8 100644 --- a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go +++ b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter.go @@ -109,7 +109,7 @@ the 'useExplicitDryRunSpec' flag is set.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "access_levels": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of AccessLevel resource names that allow resources within the ServicePerimeter to be accessed from the internet. @@ -124,6 +124,7 @@ Format: accessPolicies/{policy_id}/accessLevels/{access_level_name}`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, AtLeastOneOf: []string{"spec.0.resources", "spec.0.access_levels", "spec.0.restricted_services"}, }, "egress_policies": { @@ -143,7 +144,7 @@ a perimeter bridge.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "identities": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of identities that are allowed access through this 'EgressPolicy'. Should be in the format of email address. The email address should @@ -151,6 +152,7 @@ represent individual user or service account only.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "identity_type": { Type: schema.TypeString, @@ -172,7 +174,7 @@ cause this 'EgressPolicy' to apply.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "external_resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of external resources that are allowed to be accessed. A request matches if it contains an external resource in this list (Example: @@ -180,6 +182,7 @@ s3://bucket/path). Currently '*' is not allowed.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "operations": { Type: schema.TypeList, @@ -224,7 +227,7 @@ field set to '*' will allow all methods AND permissions for all services.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of resources, currently only projects in the form 'projects/', that match this to stanza. A request matches @@ -234,6 +237,7 @@ the perimeter.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, }, }, @@ -259,7 +263,7 @@ to apply.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "identities": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of identities that are allowed access through this ingress policy. Should be in the format of email address. 
The email address should represent @@ -267,6 +271,7 @@ individual user or service account only.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "identity_type": { Type: schema.TypeString, @@ -361,7 +366,7 @@ field set to '*' will allow all methods AND permissions for all services.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of resources, currently only projects in the form 'projects/', protected by this 'ServicePerimeter' @@ -374,6 +379,7 @@ also matches the 'operations' field.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, }, }, @@ -382,7 +388,7 @@ also matches the 'operations' field.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of GCP resources that are inside of the service perimeter. Currently only projects are allowed. @@ -390,10 +396,11 @@ Format: projects/{project_number}`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, AtLeastOneOf: []string{"spec.0.resources", "spec.0.access_levels", "spec.0.restricted_services"}, }, "restricted_services": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `GCP services that are subject to the Service Perimeter restrictions. Must contain a list of services. For example, if @@ -403,6 +410,7 @@ restrictions.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, AtLeastOneOf: []string{"spec.0.resources", "spec.0.access_levels", "spec.0.restricted_services"}, }, "vpc_accessible_services": { @@ -414,13 +422,14 @@ Perimeter.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "allowed_services": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `The list of APIs usable within the Service Perimeter. Must be empty unless 'enableRestriction' is True.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "enable_restriction": { Type: schema.TypeBool, @@ -444,7 +453,7 @@ perimeter content and boundaries.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "access_levels": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of AccessLevel resource names that allow resources within the ServicePerimeter to be accessed from the internet. @@ -459,6 +468,7 @@ Format: accessPolicies/{policy_id}/accessLevels/{access_level_name}`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, AtLeastOneOf: []string{"status.0.resources", "status.0.access_levels", "status.0.restricted_services"}, }, "egress_policies": { @@ -478,7 +488,7 @@ a perimeter bridge.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "identities": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of identities that are allowed access through this 'EgressPolicy'. Should be in the format of email address. The email address should @@ -486,6 +496,7 @@ represent individual user or service account only.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "identity_type": { Type: schema.TypeString, @@ -507,7 +518,7 @@ cause this 'EgressPolicy' to apply.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "external_resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of external resources that are allowed to be accessed. 
A request matches if it contains an external resource in this list (Example: @@ -515,6 +526,7 @@ s3://bucket/path). Currently '*' is not allowed.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "operations": { Type: schema.TypeList, @@ -559,7 +571,7 @@ field set to '*' will allow all methods AND permissions for all services.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of resources, currently only projects in the form 'projects/', that match this to stanza. A request matches @@ -569,6 +581,7 @@ the perimeter.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, }, }, @@ -594,7 +607,7 @@ to apply.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "identities": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of identities that are allowed access through this ingress policy. Should be in the format of email address. The email address should represent @@ -602,6 +615,7 @@ individual user or service account only.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, "identity_type": { Type: schema.TypeString, @@ -696,7 +710,7 @@ field set to '*' will allow all methods AND permissions for all services.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of resources, currently only projects in the form 'projects/', protected by this 'ServicePerimeter' @@ -709,6 +723,7 @@ also matches the 'operations' field.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, }, }, @@ -717,7 +732,7 @@ also matches the 'operations' field.`, }, }, "resources": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `A list of GCP resources that are inside of the service perimeter. Currently only projects are allowed. @@ -725,6 +740,7 @@ Format: projects/{project_number}`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, AtLeastOneOf: []string{"status.0.resources", "status.0.access_levels", "status.0.restricted_services"}, }, "restricted_services": { @@ -1229,11 +1245,17 @@ func flattenAccessContextManagerServicePerimeterStatus(v interface{}, d *schema. 
return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterStatusResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusAccessLevels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusRestrictedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1310,7 +1332,10 @@ func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressFrom } func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressFromSources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1356,7 +1381,10 @@ func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressTo(v return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusIngressPoliciesIngressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1448,7 +1476,10 @@ func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressFromId } func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressTo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1469,11 +1500,17 @@ func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressTo(v i return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressToExternalResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterStatusEgressPoliciesEgressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1550,15 +1587,24 @@ func flattenAccessContextManagerServicePerimeterSpec(v interface{}, d *schema.Re return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterSpecResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if 
v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecAccessLevels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecRestrictedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecVpcAccessibleServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1581,7 +1627,10 @@ func flattenAccessContextManagerServicePerimeterSpecVpcAccessibleServicesEnableR } func flattenAccessContextManagerServicePerimeterSpecVpcAccessibleServicesAllowedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecIngressPolicies(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1625,7 +1674,10 @@ func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressFromId } func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressFromSources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1671,7 +1723,10 @@ func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressTo(v i return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecIngressPoliciesIngressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1763,7 +1818,10 @@ func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressFromIden } func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressTo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1784,11 +1842,17 @@ func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressTo(v int return []interface{}{transformed} } func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressToExternalResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { 
- return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimeterSpecEgressPoliciesEgressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1916,10 +1980,12 @@ func expandAccessContextManagerServicePerimeterStatus(v interface{}, d tpgresour } func expandAccessContextManagerServicePerimeterStatusResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimeterStatusAccessLevels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2030,6 +2096,7 @@ func expandAccessContextManagerServicePerimeterStatusIngressPoliciesIngressFromI } func expandAccessContextManagerServicePerimeterStatusIngressPoliciesIngressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2097,6 +2164,7 @@ func expandAccessContextManagerServicePerimeterStatusIngressPoliciesIngressTo(v } func expandAccessContextManagerServicePerimeterStatusIngressPoliciesIngressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2230,6 +2298,7 @@ func expandAccessContextManagerServicePerimeterStatusEgressPoliciesEgressFromIde } func expandAccessContextManagerServicePerimeterStatusEgressPoliciesEgressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2267,10 +2336,12 @@ func expandAccessContextManagerServicePerimeterStatusEgressPoliciesEgressTo(v in } func expandAccessContextManagerServicePerimeterStatusEgressPoliciesEgressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimeterStatusEgressPoliciesEgressToExternalResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2399,14 +2470,17 @@ func expandAccessContextManagerServicePerimeterSpec(v interface{}, d tpgresource } func expandAccessContextManagerServicePerimeterSpecResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimeterSpecAccessLevels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimeterSpecRestrictedServices(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2441,6 +2515,7 @@ func expandAccessContextManagerServicePerimeterSpecVpcAccessibleServicesEnableRe } func expandAccessContextManagerServicePerimeterSpecVpcAccessibleServicesAllowedServices(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2511,6 +2586,7 @@ func 
expandAccessContextManagerServicePerimeterSpecIngressPoliciesIngressFromIde } func expandAccessContextManagerServicePerimeterSpecIngressPoliciesIngressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2578,6 +2654,7 @@ func expandAccessContextManagerServicePerimeterSpecIngressPoliciesIngressTo(v in } func expandAccessContextManagerServicePerimeterSpecIngressPoliciesIngressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2711,6 +2788,7 @@ func expandAccessContextManagerServicePerimeterSpecEgressPoliciesEgressFromIdent } func expandAccessContextManagerServicePerimeterSpecEgressPoliciesEgressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2748,10 +2826,12 @@ func expandAccessContextManagerServicePerimeterSpecEgressPoliciesEgressTo(v inte } func expandAccessContextManagerServicePerimeterSpecEgressPoliciesEgressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimeterSpecEgressPoliciesEgressToExternalResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } diff --git a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter_test.go b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter_test.go index 6b3b7fad2e8..b33252b7d5d 100644 --- a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter_test.go +++ b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeter_test.go @@ -208,6 +208,83 @@ resource "google_access_context_manager_service_perimeter" "test-access" { name = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}/servicePerimeters/%s" title = "%s" perimeter_type = "PERIMETER_TYPE_REGULAR" + use_explicit_dry_run_spec = true + spec { + restricted_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + access_levels = [google_access_context_manager_access_level.test-access.name] + + vpc_accessible_services { + enable_restriction = true + allowed_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + } + + ingress_policies { + ingress_from { + sources { + access_level = google_access_context_manager_access_level.test-access.name + } + identity_type = "ANY_IDENTITY" + } + + ingress_to { + resources = [ "*" ] + operations { + service_name = "bigquery.googleapis.com" + + method_selectors { + method = "BigQueryStorage.ReadRows" + } + + method_selectors { + method = "TableService.ListTables" + } + + method_selectors { + permission = "bigquery.jobs.get" + } + } + + operations { + service_name = "storage.googleapis.com" + + method_selectors { + method = "google.storage.objects.create" + } + } + } + } + ingress_policies { + ingress_from { + identities = ["user:test@google.com"] + } + ingress_to { + resources = ["*"] + } + } + + egress_policies { + egress_from { + identity_type = "ANY_USER_ACCOUNT" + } + egress_to { + operations { + service_name = "bigquery.googleapis.com" + method_selectors { + permission = 
"externalResource.read" + } + } + external_resources = ["s3://bucket1"] + } + } + egress_policies { + egress_from { + identities = ["user:test@google.com"] + } + egress_to { + resources = ["*"] + } + } + } status { restricted_services = ["bigquery.googleapis.com", "storage.googleapis.com"] access_levels = [google_access_context_manager_access_level.test-access.name] @@ -252,11 +329,36 @@ resource "google_access_context_manager_service_perimeter" "test-access" { } } } + ingress_policies { + ingress_from { + identities = ["user:test@google.com"] + } + ingress_to { + resources = ["*"] + } + } egress_policies { egress_from { identity_type = "ANY_USER_ACCOUNT" } + egress_to { + operations { + service_name = "bigquery.googleapis.com" + method_selectors { + permission = "externalResource.read" + } + } + external_resources = ["s3://bucket1"] + } + } + egress_policies { + egress_from { + identities = ["user:test@google.com"] + } + egress_to { + resources = ["*"] + } } } } diff --git a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go index 6daad1c47bf..9b1eec05069 100644 --- a/google/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go +++ b/google/services/accesscontextmanager/resource_access_context_manager_service_perimeters.go @@ -56,45 +56,36 @@ func ResourceAccessContextManagerServicePerimeters() *schema.Resource { Format: accessPolicies/{policy_id}`, }, "service_perimeters": { - Type: schema.TypeSet, + Type: schema.TypeList, Optional: true, Description: `The desired Service Perimeters that should replace all existing Service Perimeters in the Access Policy.`, - Elem: accesscontextmanagerServicePerimetersServicePerimetersSchema(), - // Default schema.HashSchema is used. - }, - }, - UseJSONNumber: true, - } -} - -func accesscontextmanagerServicePerimetersServicePerimetersSchema() *schema.Resource { - return &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `Resource name for the ServicePerimeter. The short_name component must + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Resource name for the ServicePerimeter. The short_name component must begin with a letter and only include alphanumeric and '_'. Format: accessPolicies/{policy_id}/servicePerimeters/{short_name}`, - }, - "title": { - Type: schema.TypeString, - Required: true, - Description: `Human readable title. Must be unique within the Policy.`, - }, - "description": { - Type: schema.TypeString, - Optional: true, - Description: `Description of the ServicePerimeter and its use. Does not affect + }, + "title": { + Type: schema.TypeString, + Required: true, + Description: `Human readable title. Must be unique within the Policy.`, + }, + "description": { + Type: schema.TypeString, + Optional: true, + Description: `Description of the ServicePerimeter and its use. Does not affect behavior.`, - }, - "perimeter_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"PERIMETER_TYPE_REGULAR", "PERIMETER_TYPE_BRIDGE", ""}), - Description: `Specifies the type of the Perimeter. 
There are two types: regular and + }, + "perimeter_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"PERIMETER_TYPE_REGULAR", "PERIMETER_TYPE_BRIDGE", ""}), + Description: `Specifies the type of the Perimeter. There are two types: regular and bridge. Regular Service Perimeter contains resources, access levels, and restricted services. Every resource can be in at most ONE regular Service Perimeter. @@ -110,22 +101,22 @@ Perimeter Bridges are typically useful when building more complex topologies with many independent perimeters that need to share some data with a common perimeter, but should not be able to share data among themselves. Default value: "PERIMETER_TYPE_REGULAR" Possible values: ["PERIMETER_TYPE_REGULAR", "PERIMETER_TYPE_BRIDGE"]`, - Default: "PERIMETER_TYPE_REGULAR", - }, - "spec": { - Type: schema.TypeList, - Optional: true, - Description: `Proposed (or dry run) ServicePerimeter configuration. + Default: "PERIMETER_TYPE_REGULAR", + }, + "spec": { + Type: schema.TypeList, + Optional: true, + Description: `Proposed (or dry run) ServicePerimeter configuration. This configuration allows to specify and test ServicePerimeter configuration without enforcing actual access restrictions. Only allowed to be set when the 'useExplicitDryRunSpec' flag is set.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "access_levels": { - Type: schema.TypeList, - Optional: true, - Description: `A list of AccessLevel resource names that allow resources within + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "access_levels": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of AccessLevel resource names that allow resources within the ServicePerimeter to be accessed from the internet. AccessLevels listed must be in the same policy as this ServicePerimeter. Referencing a nonexistent AccessLevel is a @@ -135,170 +126,175 @@ origins within the perimeter. For Service Perimeter Bridge, must be empty. Format: accessPolicies/{policy_id}/accessLevels/{access_level_name}`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "egress_policies": { - Type: schema.TypeList, - Optional: true, - Description: `List of EgressPolicies to apply to the perimeter. A perimeter may + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "egress_policies": { + Type: schema.TypeList, + Optional: true, + Description: `List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "egress_from": { - Type: schema.TypeList, - Optional: true, - Description: `Defines conditions on the source of a request causing this 'EgressPolicy' to apply.`, - MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "identities": { - Type: schema.TypeList, - Optional: true, - Description: `A list of identities that are allowed access through this 'EgressPolicy'. 
+ "egress_from": { + Type: schema.TypeList, + Optional: true, + Description: `Defines conditions on the source of a request causing this 'EgressPolicy' to apply.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identities": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of identities that are allowed access through this 'EgressPolicy'. Should be in the format of email address. The email address should represent individual user or service account only.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "identity_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), - Description: `Specifies the type of identities that are allowed access to outside the + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "identity_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), + Description: `Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of 'identities' field will be allowed access. Possible values: ["IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT"]`, - }, - }, - }, - }, - "egress_to": { - Type: schema.TypeList, - Optional: true, - Description: `Defines the conditions on the 'ApiOperation' and destination resources that -cause this 'EgressPolicy' to apply.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "external_resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of external resources that are allowed to be accessed. A request -matches if it contains an external resource in this list (Example: -s3://bucket/path). Currently '*' is not allowed.`, - Elem: &schema.Schema{ - Type: schema.TypeString, + }, + }, }, }, - "operations": { + "egress_to": { Type: schema.TypeList, Optional: true, - Description: `A list of 'ApiOperations' that this egress rule applies to. A request matches -if it contains an operation/service in this list.`, + Description: `Defines the conditions on the 'ApiOperation' and destination resources that +cause this 'EgressPolicy' to apply.`, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method_selectors": { + "external_resources": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of external resources that are allowed to be accessed. A request +matches if it contains an external resource in this list (Example: +s3://bucket/path). Currently '*' is not allowed.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "operations": { Type: schema.TypeList, Optional: true, - Description: `API methods or permissions to allow. Method or permission must belong -to the service specified by 'serviceName' field. A single MethodSelector -entry with '*' specified for the 'method' field will allow all methods -AND permissions for the service specified in 'serviceName'.`, + Description: `A list of 'ApiOperations' that this egress rule applies to. 
A request matches +if it contains an operation/service in this list.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method": { - Type: schema.TypeString, + "method_selectors": { + Type: schema.TypeList, Optional: true, - Description: `Value for 'method' should be a valid method name for the corresponding + Description: `API methods or permissions to allow. Method or permission must belong +to the service specified by 'serviceName' field. A single MethodSelector +entry with '*' specified for the 'method' field will allow all methods +AND permissions for the service specified in 'serviceName'.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method": { + Type: schema.TypeString, + Optional: true, + Description: `Value for 'method' should be a valid method name for the corresponding 'serviceName' in 'ApiOperation'. If '*' used as value for method, then ALL methods and permissions are allowed.`, + }, + "permission": { + Type: schema.TypeString, + Optional: true, + Description: `Value for permission should be a valid Cloud IAM permission for the +corresponding 'serviceName' in 'ApiOperation'.`, + }, + }, + }, }, - "permission": { + "service_name": { Type: schema.TypeString, Optional: true, - Description: `Value for permission should be a valid Cloud IAM permission for the -corresponding 'serviceName' in 'ApiOperation'.`, + Description: `The name of the API whose methods or permissions the 'IngressPolicy' or +'EgressPolicy' want to allow. A single 'ApiOperation' with serviceName +field set to '*' will allow all methods AND permissions for all services.`, }, }, }, }, - "service_name": { - Type: schema.TypeString, + "resources": { + Type: schema.TypeSet, Optional: true, - Description: `The name of the API whose methods or permissions the 'IngressPolicy' or -'EgressPolicy' want to allow. A single 'ApiOperation' with serviceName -field set to '*' will allow all methods AND permissions for all services.`, - }, - }, - }, - }, - "resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of resources, currently only projects in the form + Description: `A list of resources, currently only projects in the form 'projects/', that match this to stanza. A request matches if it contains a resource in this list. If * is specified for resources, then this 'EgressTo' rule will authorize access to all resources outside the perimeter.`, - Elem: &schema.Schema{ - Type: schema.TypeString, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + }, }, }, }, }, }, - }, - }, - }, - "ingress_policies": { - Type: schema.TypeList, - Optional: true, - Description: `List of 'IngressPolicies' to apply to the perimeter. A perimeter may + "ingress_policies": { + Type: schema.TypeList, + Optional: true, + Description: `List of 'IngressPolicies' to apply to the perimeter. A perimeter may have multiple 'IngressPolicies', each of which is evaluated separately. Access is granted if any 'Ingress Policy' grants it. Must be empty for a perimeter bridge.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ingress_from": { - Type: schema.TypeList, - Optional: true, - Description: `Defines the conditions on the source of a request causing this 'IngressPolicy' -to apply.`, - MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "identities": { + "ingress_from": { Type: schema.TypeList, Optional: true, - Description: `A list of identities that are allowed access through this ingress policy. 
+ Description: `Defines the conditions on the source of a request causing this 'IngressPolicy' +to apply.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identities": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of identities that are allowed access through this ingress policy. Should be in the format of email address. The email address should represent individual user or service account only.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "identity_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), - Description: `Specifies the type of identities that are allowed access from outside the + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "identity_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), + Description: `Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of 'identities' field will be allowed access. Possible values: ["IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT"]`, - }, - "sources": { - Type: schema.TypeList, - Optional: true, - Description: `Sources that this 'IngressPolicy' authorizes access from.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "access_level": { - Type: schema.TypeString, - Optional: true, - Description: `An 'AccessLevel' resource name that allow resources within the + }, + "sources": { + Type: schema.TypeList, + Optional: true, + Description: `Sources that this 'IngressPolicy' authorizes access from.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "access_level": { + Type: schema.TypeString, + Optional: true, + Description: `An 'AccessLevel' resource name that allow resources within the 'ServicePerimeters' to be accessed from the internet. 'AccessLevels' listed must be in the same policy as this 'ServicePerimeter'. Referencing a nonexistent 'AccessLevel' will cause an error. If no 'AccessLevel' names are listed, @@ -306,77 +302,77 @@ resources within the perimeter can only be accessed via Google Cloud calls with request origins within the perimeter. Example 'accessPolicies/MY_POLICY/accessLevels/MY_LEVEL.' If * is specified, then all IngressSources will be allowed.`, - }, - "resource": { - Type: schema.TypeString, - Optional: true, - Description: `A Google Cloud resource that is allowed to ingress the perimeter. + }, + "resource": { + Type: schema.TypeString, + Optional: true, + Description: `A Google Cloud resource that is allowed to ingress the perimeter. Requests from these resources will be allowed to access perimeter data. Currently only projects are allowed. Format 'projects/{project_number}' The project may be in any Google Cloud organization, not just the organization that the perimeter is defined in. 
'*' is not allowed, the case of allowing all Google Cloud resources only is not supported.`, + }, + }, + }, }, }, }, }, - }, - }, - }, - "ingress_to": { - Type: schema.TypeList, - Optional: true, - Description: `Defines the conditions on the 'ApiOperation' and request destination that cause -this 'IngressPolicy' to apply.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "operations": { + "ingress_to": { Type: schema.TypeList, Optional: true, - Description: `A list of 'ApiOperations' the sources specified in corresponding 'IngressFrom' -are allowed to perform in this 'ServicePerimeter'.`, + Description: `Defines the conditions on the 'ApiOperation' and request destination that cause +this 'IngressPolicy' to apply.`, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method_selectors": { + "operations": { Type: schema.TypeList, Optional: true, - Description: `API methods or permissions to allow. Method or permission must belong to -the service specified by serviceName field. A single 'MethodSelector' entry -with '*' specified for the method field will allow all methods AND -permissions for the service specified in 'serviceName'.`, + Description: `A list of 'ApiOperations' the sources specified in corresponding 'IngressFrom' +are allowed to perform in this 'ServicePerimeter'.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method": { - Type: schema.TypeString, + "method_selectors": { + Type: schema.TypeList, Optional: true, - Description: `Value for method should be a valid method name for the corresponding + Description: `API methods or permissions to allow. Method or permission must belong to +the service specified by serviceName field. A single 'MethodSelector' entry +with '*' specified for the method field will allow all methods AND +permissions for the service specified in 'serviceName'.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method": { + Type: schema.TypeString, + Optional: true, + Description: `Value for method should be a valid method name for the corresponding serviceName in 'ApiOperation'. If '*' used as value for 'method', then ALL methods and permissions are allowed.`, + }, + "permission": { + Type: schema.TypeString, + Optional: true, + Description: `Value for permission should be a valid Cloud IAM permission for the +corresponding 'serviceName' in 'ApiOperation'.`, + }, + }, + }, }, - "permission": { + "service_name": { Type: schema.TypeString, Optional: true, - Description: `Value for permission should be a valid Cloud IAM permission for the -corresponding 'serviceName' in 'ApiOperation'.`, + Description: `The name of the API whose methods or permissions the 'IngressPolicy' or +'EgressPolicy' want to allow. A single 'ApiOperation' with 'serviceName' +field set to '*' will allow all methods AND permissions for all services.`, }, }, }, }, - "service_name": { - Type: schema.TypeString, + "resources": { + Type: schema.TypeSet, Optional: true, - Description: `The name of the API whose methods or permissions the 'IngressPolicy' or -'EgressPolicy' want to allow. 
A single 'ApiOperation' with 'serviceName' -field set to '*' will allow all methods AND permissions for all services.`, - }, - }, - }, - }, - "resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of resources, currently only projects in the form + Description: `A list of resources, currently only projects in the form 'projects/', protected by this 'ServicePerimeter' that are allowed to be accessed by sources defined in the corresponding 'IngressFrom'. A request matches if it contains @@ -384,80 +380,84 @@ a resource in this list. If '*' is specified for resources, then this 'IngressTo' rule will authorize access to all resources inside the perimeter, provided that the request also matches the 'operations' field.`, - Elem: &schema.Schema{ - Type: schema.TypeString, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + }, }, }, }, }, }, - }, - }, - }, - "resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of GCP resources that are inside of the service perimeter. + "resources": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of GCP resources that are inside of the service perimeter. Currently only projects are allowed. Format: projects/{project_number}`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "restricted_services": { - Type: schema.TypeList, - Optional: true, - Description: `GCP services that are subject to the Service Perimeter + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "restricted_services": { + Type: schema.TypeSet, + Optional: true, + Description: `GCP services that are subject to the Service Perimeter restrictions. Must contain a list of services. For example, if 'storage.googleapis.com' is specified, access to the storage buckets inside the perimeter must meet the perimeter's access restrictions.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "vpc_accessible_services": { - Type: schema.TypeList, - Optional: true, - Description: `Specifies how APIs are allowed to communicate within the Service -Perimeter.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "allowed_services": { - Type: schema.TypeList, - Optional: true, - Description: `The list of APIs usable within the Service Perimeter. -Must be empty unless 'enableRestriction' is True.`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, - "enable_restriction": { - Type: schema.TypeBool, + "vpc_accessible_services": { + Type: schema.TypeList, Optional: true, - Description: `Whether to restrict API calls within the Service Perimeter to the + Description: `Specifies how APIs are allowed to communicate within the Service +Perimeter.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allowed_services": { + Type: schema.TypeSet, + Optional: true, + Description: `The list of APIs usable within the Service Perimeter. +Must be empty unless 'enableRestriction' is True.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "enable_restriction": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether to restrict API calls within the Service Perimeter to the list of APIs specified in 'allowedServices'.`, + }, + }, + }, }, }, }, }, - }, - }, - }, - "status": { - Type: schema.TypeList, - Optional: true, - Description: `ServicePerimeter configuration. 
Specifies sets of resources, -restricted services and access levels that determine -perimeter content and boundaries.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "access_levels": { + "status": { Type: schema.TypeList, Optional: true, - Description: `A list of AccessLevel resource names that allow resources within + Description: `ServicePerimeter configuration. Specifies sets of resources, +restricted services and access levels that determine +perimeter content and boundaries.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "access_levels": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of AccessLevel resource names that allow resources within the ServicePerimeter to be accessed from the internet. AccessLevels listed must be in the same policy as this ServicePerimeter. Referencing a nonexistent AccessLevel is a @@ -467,170 +467,264 @@ origins within the perimeter. For Service Perimeter Bridge, must be empty. Format: accessPolicies/{policy_id}/accessLevels/{access_level_name}`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "egress_policies": { - Type: schema.TypeList, - Optional: true, - Description: `List of EgressPolicies to apply to the perimeter. A perimeter may + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "egress_policies": { + Type: schema.TypeList, + Optional: true, + Description: `List of EgressPolicies to apply to the perimeter. A perimeter may have multiple EgressPolicies, each of which is evaluated separately. Access is granted if any EgressPolicy grants it. Must be empty for a perimeter bridge.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "egress_from": { - Type: schema.TypeList, - Optional: true, - Description: `Defines conditions on the source of a request causing this 'EgressPolicy' to apply.`, - MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "identities": { - Type: schema.TypeList, - Optional: true, - Description: `A list of identities that are allowed access through this 'EgressPolicy'. + "egress_from": { + Type: schema.TypeList, + Optional: true, + Description: `Defines conditions on the source of a request causing this 'EgressPolicy' to apply.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identities": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of identities that are allowed access through this 'EgressPolicy'. Should be in the format of email address. The email address should represent individual user or service account only.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "identity_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), - Description: `Specifies the type of identities that are allowed access to outside the + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "identity_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), + Description: `Specifies the type of identities that are allowed access to outside the perimeter. If left unspecified, then members of 'identities' field will be allowed access. 
Possible values: ["IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT"]`, - }, - }, - }, - }, - "egress_to": { - Type: schema.TypeList, - Optional: true, - Description: `Defines the conditions on the 'ApiOperation' and destination resources that -cause this 'EgressPolicy' to apply.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "external_resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of external resources that are allowed to be accessed. A request -matches if it contains an external resource in this list (Example: -s3://bucket/path). Currently '*' is not allowed.`, - Elem: &schema.Schema{ - Type: schema.TypeString, + }, + }, }, }, - "operations": { + "egress_to": { Type: schema.TypeList, Optional: true, - Description: `A list of 'ApiOperations' that this egress rule applies to. A request matches -if it contains an operation/service in this list.`, + Description: `Defines the conditions on the 'ApiOperation' and destination resources that +cause this 'EgressPolicy' to apply.`, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method_selectors": { + "external_resources": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of external resources that are allowed to be accessed. A request +matches if it contains an external resource in this list (Example: +s3://bucket/path). Currently '*' is not allowed.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "operations": { Type: schema.TypeList, Optional: true, - Description: `API methods or permissions to allow. Method or permission must belong -to the service specified by 'serviceName' field. A single MethodSelector -entry with '*' specified for the 'method' field will allow all methods -AND permissions for the service specified in 'serviceName'.`, + Description: `A list of 'ApiOperations' that this egress rule applies to. A request matches +if it contains an operation/service in this list.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "method": { - Type: schema.TypeString, + "method_selectors": { + Type: schema.TypeList, Optional: true, - Description: `Value for 'method' should be a valid method name for the corresponding + Description: `API methods or permissions to allow. Method or permission must belong +to the service specified by 'serviceName' field. A single MethodSelector +entry with '*' specified for the 'method' field will allow all methods +AND permissions for the service specified in 'serviceName'.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method": { + Type: schema.TypeString, + Optional: true, + Description: `Value for 'method' should be a valid method name for the corresponding 'serviceName' in 'ApiOperation'. If '*' used as value for method, then ALL methods and permissions are allowed.`, + }, + "permission": { + Type: schema.TypeString, + Optional: true, + Description: `Value for permission should be a valid Cloud IAM permission for the +corresponding 'serviceName' in 'ApiOperation'.`, + }, + }, + }, }, - "permission": { + "service_name": { Type: schema.TypeString, Optional: true, - Description: `Value for permission should be a valid Cloud IAM permission for the -corresponding 'serviceName' in 'ApiOperation'.`, + Description: `The name of the API whose methods or permissions the 'IngressPolicy' or +'EgressPolicy' want to allow. 
A single 'ApiOperation' with serviceName +field set to '*' will allow all methods AND permissions for all services.`, }, }, }, }, - "service_name": { - Type: schema.TypeString, + "resources": { + Type: schema.TypeSet, Optional: true, - Description: `The name of the API whose methods or permissions the 'IngressPolicy' or -'EgressPolicy' want to allow. A single 'ApiOperation' with serviceName -field set to '*' will allow all methods AND permissions for all services.`, - }, - }, - }, - }, - "resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of resources, currently only projects in the form + Description: `A list of resources, currently only projects in the form 'projects/', that match this to stanza. A request matches if it contains a resource in this list. If * is specified for resources, then this 'EgressTo' rule will authorize access to all resources outside the perimeter.`, - Elem: &schema.Schema{ - Type: schema.TypeString, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + }, }, }, }, }, }, - }, - }, - }, - "ingress_policies": { - Type: schema.TypeList, - Optional: true, - Description: `List of 'IngressPolicies' to apply to the perimeter. A perimeter may + "ingress_policies": { + Type: schema.TypeSet, + Optional: true, + Description: `List of 'IngressPolicies' to apply to the perimeter. A perimeter may have multiple 'IngressPolicies', each of which is evaluated separately. Access is granted if any 'Ingress Policy' grants it. Must be empty for a perimeter bridge.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ingress_from": { + Elem: accesscontextmanagerServicePerimetersServicePerimetersServicePerimetersStatusIngressPoliciesSchema(), + // Default schema.HashSchema is used. + }, + "resources": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of GCP resources that are inside of the service perimeter. +Currently only projects are allowed. +Format: projects/{project_number}`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "restricted_services": { + Type: schema.TypeSet, + Optional: true, + Description: `GCP services that are subject to the Service Perimeter +restrictions. Must contain a list of services. For example, if +'storage.googleapis.com' is specified, access to the storage +buckets inside the perimeter must meet the perimeter's access +restrictions.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "vpc_accessible_services": { Type: schema.TypeList, Optional: true, - Description: `Defines the conditions on the source of a request causing this 'IngressPolicy' -to apply.`, + Description: `Specifies how APIs are allowed to communicate within the Service +Perimeter.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "identities": { - Type: schema.TypeList, + "allowed_services": { + Type: schema.TypeSet, + Optional: true, + Description: `The list of APIs usable within the Service Perimeter. +Must be empty unless 'enableRestriction' is True.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "enable_restriction": { + Type: schema.TypeBool, Optional: true, - Description: `A list of identities that are allowed access through this ingress policy. 
+ Description: `Whether to restrict API calls within the Service Perimeter to the +list of APIs specified in 'allowedServices'.`, + }, + }, + }, + }, + }, + }, + }, + "use_explicit_dry_run_spec": { + Type: schema.TypeBool, + Optional: true, + Description: `Use explicit dry run spec flag. Ordinarily, a dry-run spec implicitly exists +for all Service Perimeters, and that spec is identical to the status for those +Service Perimeters. When this flag is set, it inhibits the generation of the +implicit spec, thereby allowing the user to explicitly provide a +configuration ("spec") to use in a dry-run version of the Service Perimeter. +This allows the user to test changes to the enforced config ("status") without +actually enforcing them. This testing is done through analyzing the differences +between currently enforced and suggested restrictions. useExplicitDryRunSpec must +bet set to True if any of the fields in the spec are set to non-default values.`, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + Description: `Time the AccessPolicy was created in UTC.`, + }, + "update_time": { + Type: schema.TypeString, + Computed: true, + Description: `Time the AccessPolicy was updated in UTC.`, + }, + }, + }, + }, + }, + UseJSONNumber: true, + } +} + +func accesscontextmanagerServicePerimetersServicePerimetersServicePerimetersStatusIngressPoliciesSchema() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ingress_from": { + Type: schema.TypeList, + Optional: true, + Description: `Defines the conditions on the source of a request causing this 'IngressPolicy' +to apply.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "identities": { + Type: schema.TypeSet, + Optional: true, + Description: `A list of identities that are allowed access through this ingress policy. Should be in the format of email address. The email address should represent individual user or service account only.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "identity_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), - Description: `Specifies the type of identities that are allowed access from outside the + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Set: schema.HashString, + }, + "identity_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT", ""}), + Description: `Specifies the type of identities that are allowed access from outside the perimeter. If left unspecified, then members of 'identities' field will be allowed access. 
Possible values: ["IDENTITY_TYPE_UNSPECIFIED", "ANY_IDENTITY", "ANY_USER_ACCOUNT", "ANY_SERVICE_ACCOUNT"]`, - }, - "sources": { - Type: schema.TypeList, - Optional: true, - Description: `Sources that this 'IngressPolicy' authorizes access from.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "access_level": { - Type: schema.TypeString, - Optional: true, - Description: `An 'AccessLevel' resource name that allow resources within the + }, + "sources": { + Type: schema.TypeList, + Optional: true, + Description: `Sources that this 'IngressPolicy' authorizes access from.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "access_level": { + Type: schema.TypeString, + Optional: true, + Description: `An 'AccessLevel' resource name that allow resources within the 'ServicePerimeters' to be accessed from the internet. 'AccessLevels' listed must be in the same policy as this 'ServicePerimeter'. Referencing a nonexistent 'AccessLevel' will cause an error. If no 'AccessLevel' names are listed, @@ -638,170 +732,92 @@ resources within the perimeter can only be accessed via Google Cloud calls with request origins within the perimeter. Example 'accessPolicies/MY_POLICY/accessLevels/MY_LEVEL.' If * is specified, then all IngressSources will be allowed.`, - }, - "resource": { - Type: schema.TypeString, - Optional: true, - Description: `A Google Cloud resource that is allowed to ingress the perimeter. + }, + "resource": { + Type: schema.TypeString, + Optional: true, + Description: `A Google Cloud resource that is allowed to ingress the perimeter. Requests from these resources will be allowed to access perimeter data. Currently only projects are allowed. Format 'projects/{project_number}' The project may be in any Google Cloud organization, not just the organization that the perimeter is defined in. '*' is not allowed, the case of allowing all Google Cloud resources only is not supported.`, - }, - }, - }, - }, - }, - }, }, - "ingress_to": { - Type: schema.TypeList, - Optional: true, - Description: `Defines the conditions on the 'ApiOperation' and request destination that cause + }, + }, + }, + }, + }, + }, + "ingress_to": { + Type: schema.TypeList, + Optional: true, + Description: `Defines the conditions on the 'ApiOperation' and request destination that cause this 'IngressPolicy' to apply.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "operations": { - Type: schema.TypeList, - Optional: true, - Description: `A list of 'ApiOperations' the sources specified in corresponding 'IngressFrom' + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "operations": { + Type: schema.TypeList, + Optional: true, + Description: `A list of 'ApiOperations' the sources specified in corresponding 'IngressFrom' are allowed to perform in this 'ServicePerimeter'.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "method_selectors": { - Type: schema.TypeList, - Optional: true, - Description: `API methods or permissions to allow. Method or permission must belong to + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method_selectors": { + Type: schema.TypeList, + Optional: true, + Description: `API methods or permissions to allow. Method or permission must belong to the service specified by serviceName field. 
A single 'MethodSelector' entry with '*' specified for the method field will allow all methods AND permissions for the service specified in 'serviceName'.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "method": { - Type: schema.TypeString, - Optional: true, - Description: `Value for method should be a valid method name for the corresponding + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "method": { + Type: schema.TypeString, + Optional: true, + Description: `Value for method should be a valid method name for the corresponding serviceName in 'ApiOperation'. If '*' used as value for 'method', then ALL methods and permissions are allowed.`, - }, - "permission": { - Type: schema.TypeString, - Optional: true, - Description: `Value for permission should be a valid Cloud IAM permission for the -corresponding 'serviceName' in 'ApiOperation'.`, - }, - }, - }, - }, - "service_name": { - Type: schema.TypeString, - Optional: true, - Description: `The name of the API whose methods or permissions the 'IngressPolicy' or -'EgressPolicy' want to allow. A single 'ApiOperation' with 'serviceName' -field set to '*' will allow all methods AND permissions for all services.`, - }, - }, - }, }, - "resources": { - Type: schema.TypeList, + "permission": { + Type: schema.TypeString, Optional: true, - Description: `A list of resources, currently only projects in the form -'projects/', protected by this 'ServicePerimeter' -that are allowed to be accessed by sources defined in the -corresponding 'IngressFrom'. A request matches if it contains -a resource in this list. If '*' is specified for resources, -then this 'IngressTo' rule will authorize access to all -resources inside the perimeter, provided that the request -also matches the 'operations' field.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, + Description: `Value for permission should be a valid Cloud IAM permission for the +corresponding 'serviceName' in 'ApiOperation'.`, }, }, }, }, + "service_name": { + Type: schema.TypeString, + Optional: true, + Description: `The name of the API whose methods or permissions the 'IngressPolicy' or +'EgressPolicy' want to allow. A single 'ApiOperation' with 'serviceName' +field set to '*' will allow all methods AND permissions for all services.`, + }, }, }, }, "resources": { - Type: schema.TypeList, - Optional: true, - Description: `A list of GCP resources that are inside of the service perimeter. -Currently only projects are allowed. -Format: projects/{project_number}`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - "restricted_services": { Type: schema.TypeSet, Optional: true, - Description: `GCP services that are subject to the Service Perimeter -restrictions. Must contain a list of services. For example, if -'storage.googleapis.com' is specified, access to the storage -buckets inside the perimeter must meet the perimeter's access -restrictions.`, + Description: `A list of resources, currently only projects in the form +'projects/', protected by this 'ServicePerimeter' +that are allowed to be accessed by sources defined in the +corresponding 'IngressFrom'. A request matches if it contains +a resource in this list. 
If '*' is specified for resources, +then this 'IngressTo' rule will authorize access to all +resources inside the perimeter, provided that the request +also matches the 'operations' field.`, Elem: &schema.Schema{ Type: schema.TypeString, }, Set: schema.HashString, }, - "vpc_accessible_services": { - Type: schema.TypeList, - Optional: true, - Description: `Specifies how APIs are allowed to communicate within the Service -Perimeter.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "allowed_services": { - Type: schema.TypeSet, - Optional: true, - Description: `The list of APIs usable within the Service Perimeter. -Must be empty unless 'enableRestriction' is True.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - Set: schema.HashString, - }, - "enable_restriction": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether to restrict API calls within the Service Perimeter to the -list of APIs specified in 'allowedServices'.`, - }, - }, - }, - }, }, }, }, - "use_explicit_dry_run_spec": { - Type: schema.TypeBool, - Optional: true, - Description: `Use explicit dry run spec flag. Ordinarily, a dry-run spec implicitly exists -for all Service Perimeters, and that spec is identical to the status for those -Service Perimeters. When this flag is set, it inhibits the generation of the -implicit spec, thereby allowing the user to explicitly provide a -configuration ("spec") to use in a dry-run version of the Service Perimeter. -This allows the user to test changes to the enforced config ("status") without -actually enforcing them. This testing is done through analyzing the differences -between currently enforced and suggested restrictions. useExplicitDryRunSpec must -bet set to True if any of the fields in the spec are set to non-default values.`, - }, - "create_time": { - Type: schema.TypeString, - Computed: true, - Description: `Time the AccessPolicy was created in UTC.`, - }, - "update_time": { - Type: schema.TypeString, - Computed: true, - Description: `Time the AccessPolicy was updated in UTC.`, - }, }, } } @@ -1036,14 +1052,14 @@ func flattenAccessContextManagerServicePerimetersServicePerimeters(v interface{} return v } l := v.([]interface{}) - transformed := schema.NewSet(schema.HashResource(accesscontextmanagerServicePerimetersServicePerimetersSchema()), []interface{}{}) + transformed := make([]interface{}, 0, len(l)) for _, raw := range l { original := raw.(map[string]interface{}) if len(original) < 1 { // Do not include empty json objects coming back from the api continue } - transformed.Add(map[string]interface{}{ + transformed = append(transformed, map[string]interface{}{ "name": flattenAccessContextManagerServicePerimetersServicePerimetersName(original["name"], d, config), "title": flattenAccessContextManagerServicePerimetersServicePerimetersTitle(original["title"], d, config), "description": flattenAccessContextManagerServicePerimetersServicePerimetersDescription(original["description"], d, config), @@ -1109,11 +1125,17 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatus(v inter return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusAccessLevels(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusRestrictedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1154,14 +1176,14 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressP return v } l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) + transformed := schema.NewSet(schema.HashResource(accesscontextmanagerServicePerimetersServicePerimetersServicePerimetersStatusIngressPoliciesSchema()), []interface{}{}) for _, raw := range l { original := raw.(map[string]interface{}) if len(original) < 1 { // Do not include empty json objects coming back from the api continue } - transformed = append(transformed, map[string]interface{}{ + transformed.Add(map[string]interface{}{ "ingress_from": flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressFrom(original["ingressFrom"], d, config), "ingress_to": flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressTo(original["ingressTo"], d, config), }) @@ -1190,7 +1212,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressP } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressFromSources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1236,7 +1261,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressP return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1328,7 +1356,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPo } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressTo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1349,11 +1380,17 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPo return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func 
flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressToExternalResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1430,15 +1467,24 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpec(v interfa return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecAccessLevels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecRestrictedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecVpcAccessibleServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1461,7 +1507,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpecVpcAccessi } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecVpcAccessibleServicesAllowedServices(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPolicies(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1505,7 +1554,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPol } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressFromSources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1551,7 +1603,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPol return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1643,7 +1698,10 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoli } func 
flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressFromIdentities(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressTo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1664,11 +1722,17 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoli return []interface{}{transformed} } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressToResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressToExternalResources(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressToOperations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1726,7 +1790,6 @@ func flattenAccessContextManagerServicePerimetersServicePerimetersUseExplicitDry } func expandAccessContextManagerServicePerimetersServicePerimeters(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - v = v.(*schema.Set).List() l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { @@ -1883,10 +1946,12 @@ func expandAccessContextManagerServicePerimetersServicePerimetersStatus(v interf } func expandAccessContextManagerServicePerimetersServicePerimetersStatusResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimetersServicePerimetersStatusAccessLevels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -1931,6 +1996,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersStatusVpcAccess } func expandAccessContextManagerServicePerimetersServicePerimetersStatusIngressPolicies(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { @@ -1997,6 +2063,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersStatusIngressPo } func expandAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2064,6 +2131,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersStatusIngressPo } func expandAccessContextManagerServicePerimetersServicePerimetersStatusIngressPoliciesIngressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2197,6 +2265,7 @@ func 
expandAccessContextManagerServicePerimetersServicePerimetersStatusEgressPol } func expandAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2234,10 +2303,12 @@ func expandAccessContextManagerServicePerimetersServicePerimetersStatusEgressPol } func expandAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimetersServicePerimetersStatusEgressPoliciesEgressToExternalResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2366,14 +2437,17 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpec(v interfac } func expandAccessContextManagerServicePerimetersServicePerimetersSpecResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimetersServicePerimetersSpecAccessLevels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } func expandAccessContextManagerServicePerimetersServicePerimetersSpecRestrictedServices(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2408,6 +2482,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpecVpcAccessib } func expandAccessContextManagerServicePerimetersServicePerimetersSpecVpcAccessibleServicesAllowedServices(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2478,6 +2553,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoli } func expandAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2545,6 +2621,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoli } func expandAccessContextManagerServicePerimetersServicePerimetersSpecIngressPoliciesIngressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2678,6 +2755,7 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpecEgressPolic } func expandAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressFromIdentities(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -2715,10 +2793,12 @@ func expandAccessContextManagerServicePerimetersServicePerimetersSpecEgressPolic } func expandAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressToResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, 
nil } func expandAccessContextManagerServicePerimetersServicePerimetersSpecEgressPoliciesEgressToExternalResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } diff --git a/google/services/accesscontextmanager/resource_access_context_manager_services_perimeters_test.go b/google/services/accesscontextmanager/resource_access_context_manager_services_perimeters_test.go index c02e5aff99a..4bc553412ff 100644 --- a/google/services/accesscontextmanager/resource_access_context_manager_services_perimeters_test.go +++ b/google/services/accesscontextmanager/resource_access_context_manager_services_perimeters_test.go @@ -201,17 +201,156 @@ resource "google_access_context_manager_service_perimeters" "test-access" { name = "accessPolicies/${google_access_context_manager_access_policy.test-access.name}/servicePerimeters/%s" title = "%s" perimeter_type = "PERIMETER_TYPE_REGULAR" + use_explicit_dry_run_spec = true + spec { + restricted_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + access_levels = [google_access_context_manager_access_level.test-access.name] + + vpc_accessible_services { + enable_restriction = true + allowed_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + } + + ingress_policies { + ingress_from { + sources { + access_level = google_access_context_manager_access_level.test-access.name + } + identity_type = "ANY_IDENTITY" + } + + ingress_to { + resources = [ "*" ] + operations { + service_name = "bigquery.googleapis.com" + + method_selectors { + method = "BigQueryStorage.ReadRows" + } + + method_selectors { + method = "TableService.ListTables" + } + + method_selectors { + permission = "bigquery.jobs.get" + } + } + + operations { + service_name = "storage.googleapis.com" + + method_selectors { + method = "google.storage.objects.create" + } + } + } + } + ingress_policies { + ingress_from { + identities = ["user:test@google.com"] + } + ingress_to { + resources = ["*"] + } + } + + egress_policies { + egress_from { + identity_type = "ANY_USER_ACCOUNT" + } + egress_to { + operations { + service_name = "bigquery.googleapis.com" + method_selectors { + permission = "externalResource.read" + } + } + external_resources = ["s3://bucket1"] + } + } + egress_policies { + egress_from { + identities = ["user:test@google.com"] + } + egress_to { + resources = ["*"] + } + } + } status { - restricted_services = ["bigquery.googleapis.com"] + restricted_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + access_levels = [google_access_context_manager_access_level.test-access.name] + + vpc_accessible_services { + enable_restriction = true + allowed_services = ["bigquery.googleapis.com", "storage.googleapis.com"] + } + + ingress_policies { + ingress_from { + sources { + access_level = google_access_context_manager_access_level.test-access.name + } + identity_type = "ANY_IDENTITY" + } + + ingress_to { + resources = [ "*" ] + operations { + service_name = "bigquery.googleapis.com" + + method_selectors { + method = "BigQueryStorage.ReadRows" + } + + method_selectors { + method = "TableService.ListTables" + } + + method_selectors { + permission = "bigquery.jobs.get" + } + } + + operations { + service_name = "storage.googleapis.com" + + method_selectors { + method = "google.storage.objects.create" + } + } + } + } + ingress_policies { + ingress_from { + identities = ["user:test@google.com"] + } + ingress_to { + resources = ["*"] + } + } + egress_policies { + 
egress_from { + identity_type = "ANY_USER_ACCOUNT" + } egress_to { - external_resources = ["s3://bucket2"] operations { service_name = "bigquery.googleapis.com" method_selectors { - method = "*" + permission = "externalResource.read" } } + external_resources = ["s3://bucket1"] + } + } + egress_policies { + egress_from { + identities = ["user:test@google.com"] + } + egress_to { + resources = ["*"] } } } diff --git a/google/services/activedirectory/resource_active_directory_domain.go b/google/services/activedirectory/resource_active_directory_domain.go index b3dc711844b..a482ceefb68 100644 --- a/google/services/activedirectory/resource_active_directory_domain.go +++ b/google/services/activedirectory/resource_active_directory_domain.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceActiveDirectoryDomain() *schema.Resource { Delete: schema.DefaultTimeout(60 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "domain_name": { Type: schema.TypeString, @@ -92,9 +98,18 @@ If CIDR subnets overlap between networks, domain creation will fail.`, Set: schema.HashString, }, "labels": { + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels that can contain user-provided metadata + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: `Resource labels that can contain user-provided metadata`, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "fqdn": { @@ -108,6 +123,13 @@ Similar to what would be chosen for an Active Directory set up on an internal ne Computed: true, Description: `The unique name of the domain using the format: 'projects/{project}/locations/global/domains/{domainName}'.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -127,12 +149,6 @@ func resourceActiveDirectoryDomainCreate(d *schema.ResourceData, meta interface{ } obj := make(map[string]interface{}) - labelsProp, err := expandActiveDirectoryDomainLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } authorizedNetworksProp, err := expandActiveDirectoryDomainAuthorizedNetworks(d.Get("authorized_networks"), d, config) if err != nil { return err @@ -157,6 +173,12 @@ func resourceActiveDirectoryDomainCreate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("admin"); !tpgresource.IsEmptyValue(reflect.ValueOf(adminProp)) && (ok || !reflect.DeepEqual(v, adminProp)) { 
obj["admin"] = adminProp } + labelsProp, err := expandActiveDirectoryDomainEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ActiveDirectoryBasePath}}projects/{{project}}/locations/global/domains?domainName={{domain_name}}") if err != nil { @@ -289,6 +311,12 @@ func resourceActiveDirectoryDomainRead(d *schema.ResourceData, meta interface{}) if err := d.Set("fqdn", flattenActiveDirectoryDomainFqdn(res["fqdn"], d, config)); err != nil { return fmt.Errorf("Error reading Domain: %s", err) } + if err := d.Set("terraform_labels", flattenActiveDirectoryDomainTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Domain: %s", err) + } + if err := d.Set("effective_labels", flattenActiveDirectoryDomainEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Domain: %s", err) + } return nil } @@ -309,12 +337,6 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandActiveDirectoryDomainLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } authorizedNetworksProp, err := expandActiveDirectoryDomainAuthorizedNetworks(d.Get("authorized_networks"), d, config) if err != nil { return err @@ -327,6 +349,12 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("locations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, locationsProp)) { obj["locations"] = locationsProp } + labelsProp, err := expandActiveDirectoryDomainEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ActiveDirectoryBasePath}}{{name}}") if err != nil { @@ -336,10 +364,6 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Updating Domain %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("authorized_networks") { updateMask = append(updateMask, "authorizedNetworks") } @@ -347,6 +371,10 @@ func resourceActiveDirectoryDomainUpdate(d *schema.ResourceData, meta interface{ if d.HasChange("locations") { updateMask = append(updateMask, "locations") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -458,7 +486,18 @@ func flattenActiveDirectoryDomainName(v interface{}, d *schema.ResourceData, con } func flattenActiveDirectoryDomainLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := 
make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenActiveDirectoryDomainAuthorizedNetworks(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -484,15 +523,23 @@ func flattenActiveDirectoryDomainFqdn(v interface{}, d *schema.ResourceData, con return v } -func expandActiveDirectoryDomainLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenActiveDirectoryDomainTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenActiveDirectoryDomainEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandActiveDirectoryDomainAuthorizedNetworks(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -511,3 +558,14 @@ func expandActiveDirectoryDomainLocations(v interface{}, d tpgresource.Terraform func expandActiveDirectoryDomainAdmin(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandActiveDirectoryDomainEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/activedirectory/resource_active_directory_domain_trust.go b/google/services/activedirectory/resource_active_directory_domain_trust.go index 125362997a6..23c158fcb83 100644 --- a/google/services/activedirectory/resource_active_directory_domain_trust.go +++ b/google/services/activedirectory/resource_active_directory_domain_trust.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceActiveDirectoryDomainTrust() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "domain": { Type: schema.TypeString, @@ -513,9 +518,9 @@ func resourceActiveDirectoryDomainTrustDelete(d *schema.ResourceData, meta inter func resourceActiveDirectoryDomainTrustImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/domains/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/domains/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } 
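The label handling rewritten above for `google_active_directory_domain` (and repeated for the AlloyDB resources later in this diff) follows one pattern: `labels` holds only the user's configuration and is non-authoritative, `terraform_labels` adds the provider-level default labels via the `tpgresource.SetLabelsDiff` CustomizeDiff, and `effective_labels` mirrors everything the API reports. Create and Update build the API payload from `effective_labels`, while the Read-side flatteners filter the server's label map down to the keys each state field already tracks. A minimal, self-contained sketch of that filtering step follows; the helper name and map shapes are illustrative only, not part of the provider:

```go
package main

import "fmt"

// filterToTrackedKeys mirrors the flatten*Labels / flatten*TerraformLabels pattern
// above: from the full label map returned by the API, keep only the keys already
// tracked by the given state field ("labels" or "terraform_labels"), always taking
// the value from the API response. Keys added out-of-band by other clients are
// therefore visible only through effective_labels and never diff the config field.
func filterToTrackedKeys(apiLabels, tracked map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	out := make(map[string]interface{})
	for k := range tracked {
		out[k] = apiLabels[k]
	}
	return out
}

func main() {
	api := map[string]interface{}{"env": "prod", "team": "db", "goog-terraform-managed": "true"}
	cfg := map[string]interface{}{"env": "dev", "team": "db"}
	fmt.Println(filterToTrackedKeys(api, cfg)) // map[env:prod team:db]
}
```

The practical effect is that labels attached by other clients or services surface only through `effective_labels`, so they no longer produce a permadiff on the configured `labels` field.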
diff --git a/google/services/activedirectory/resource_active_directory_domain_update_test.go b/google/services/activedirectory/resource_active_directory_domain_update_test.go index ebc3d6aa1c4..5ca49632f78 100644 --- a/google/services/activedirectory/resource_active_directory_domain_update_test.go +++ b/google/services/activedirectory/resource_active_directory_domain_update_test.go @@ -41,7 +41,7 @@ func TestAccActiveDirectoryDomain_update(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"domain_name"}, + ImportStateVerifyIgnore: []string{"domain_name", "labels", "terraform_labels"}, }, { Config: testAccADDomainUpdate(context), @@ -50,7 +50,7 @@ func TestAccActiveDirectoryDomain_update(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"domain_name"}, + ImportStateVerifyIgnore: []string{"domain_name", "labels", "terraform_labels"}, }, { Config: testAccADDomainBasic(context), @@ -59,7 +59,7 @@ func TestAccActiveDirectoryDomain_update(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"domain_name"}, + ImportStateVerifyIgnore: []string{"domain_name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/alloydb/data_source_alloydb_locations.go b/google/services/alloydb/data_source_alloydb_locations.go index 8351f5bce24..81776b1210a 100644 --- a/google/services/alloydb/data_source_alloydb_locations.go +++ b/google/services/alloydb/data_source_alloydb_locations.go @@ -96,7 +96,7 @@ func dataSourceAlloydbLocationsRead(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Locations %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Locations %q", d.Id()), url) } var locations []map[string]interface{} for { @@ -144,7 +144,7 @@ func dataSourceAlloydbLocationsRead(d *schema.ResourceData, meta interface{}) er UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Locations %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Locations %q", d.Id()), url) } } diff --git a/google/services/alloydb/data_source_alloydb_supported_database_flags.go b/google/services/alloydb/data_source_alloydb_supported_database_flags.go index 3687efd7c7a..722760fce96 100644 --- a/google/services/alloydb/data_source_alloydb_supported_database_flags.go +++ b/google/services/alloydb/data_source_alloydb_supported_database_flags.go @@ -149,7 +149,7 @@ func dataSourceAlloydbSupportedDatabaseFlagsRead(d *schema.ResourceData, meta in UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SupportedDatabaseFlags %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("SupportedDatabaseFlags %q", d.Id()), url) } var supportedDatabaseFlags []map[string]interface{} for { @@ -223,7 +223,7 @@ func dataSourceAlloydbSupportedDatabaseFlagsRead(d *schema.ResourceData, meta in UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("SupportedDatabaseFlags %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("SupportedDatabaseFlags %q", d.Id()), url) } } if err := d.Set("supported_database_flags", supportedDatabaseFlags); err != nil { 
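The two AlloyDB data sources above swap `HandleNotFoundError` for `HandleDataSourceNotFoundError`, matching the breaking change noted in the changelog: a resource Read may clear state on a 404 so Terraform can plan a re-create, but a data source that finds nothing should now fail instead of silently returning empty attributes. The sketch below is a simplified illustration of that behavioral split, assuming only the standard `googleapi.Error` type; it is not the provider's actual transport code:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/api/googleapi"
)

// isNotFound reports whether err is an HTTP 404 from the Google API client libraries.
func isNotFound(err error) bool {
	var gerr *googleapi.Error
	return errors.As(err, &gerr) && gerr.Code == 404
}

// readResource models the resource behaviour: a 404 clears state (ok == false, no
// error returned) so Terraform can plan a re-create on the next apply.
func readResource(err error) (ok bool, _ error) {
	if isNotFound(err) {
		return false, nil
	}
	return err == nil, err
}

// readDataSource models the new data-source behaviour: a 404 is always surfaced as
// an error rather than a silently empty result.
func readDataSource(err error, what string) error {
	if isNotFound(err) {
		return fmt.Errorf("%s not found", what)
	}
	return err
}

func main() {
	notFound := &googleapi.Error{Code: 404}

	ok, err := readResource(notFound)
	fmt.Println(ok, err) // false <nil>  -> drop from state, plan re-create

	fmt.Println(readDataSource(notFound, `Locations "projects/p/locations"`)) // hard error
}
```

This is why configurations that previously tolerated a missing location or flag list at plan time will now see an explicit error from these data sources.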
diff --git a/google/services/alloydb/resource_alloydb_backup.go b/google/services/alloydb/resource_alloydb_backup.go index 9d278af428d..c06a14c3bdd 100644 --- a/google/services/alloydb/resource_alloydb_backup.go +++ b/google/services/alloydb/resource_alloydb_backup.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceAlloydbBackup() *schema.Resource { Delete: schema.DefaultTimeout(10 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backup_id": { Type: schema.TypeString, @@ -625,9 +630,9 @@ func resourceAlloydbBackupDelete(d *schema.ResourceData, meta interface{}) error func resourceAlloydbBackupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/backups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/backups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/alloydb/resource_alloydb_backup_generated_test.go b/google/services/alloydb/resource_alloydb_backup_generated_test.go index 7313b875f6b..572bafce8fb 100644 --- a/google/services/alloydb/resource_alloydb_backup_generated_test.go +++ b/google/services/alloydb/resource_alloydb_backup_generated_test.go @@ -34,7 +34,6 @@ func TestAccAlloydbBackup_alloydbBackupBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydb-backup-basic"), "random_suffix": acctest.RandString(t, 10), } @@ -69,7 +68,7 @@ resource "google_alloydb_backup" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "tf-test-alloydb-cluster%{random_suffix}" location = "us-central1" - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_alloydb_instance" "default" { @@ -85,17 +84,17 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } -data "google_compute_network" "default" { - name = "%{network_name}" +resource "google_compute_network" "default" { + name = "tf-test-alloydb-network%{random_suffix}" } `, context) } @@ -104,7 +103,6 @@ func TestAccAlloydbBackup_alloydbBackupFullExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydb-backup-full"), "random_suffix": acctest.RandString(t, 10), } @@ -144,7 +142,7 @@ resource "google_alloydb_backup" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "tf-test-alloydb-cluster%{random_suffix}" location = "us-central1" - network = data.google_compute_network.default.id + network = 
google_compute_network.default.id } resource "google_alloydb_instance" "default" { @@ -160,17 +158,17 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } -data "google_compute_network" "default" { - name = "%{network_name}" +resource "google_compute_network" "default" { + name = "tf-test-alloydb-network%{random_suffix}" } `, context) } diff --git a/google/services/alloydb/resource_alloydb_backup_test.go b/google/services/alloydb/resource_alloydb_backup_test.go index 6a964aae1d5..4f8c4e7a46e 100644 --- a/google/services/alloydb/resource_alloydb_backup_test.go +++ b/google/services/alloydb/resource_alloydb_backup_test.go @@ -12,9 +12,10 @@ import ( func TestAccAlloydbBackup_update(t *testing.T) { t.Parallel() + random_suffix := acctest.RandString(t, 10) context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydb-backup-update"), - "random_suffix": acctest.RandString(t, 10), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-backup-update-1"), + "random_suffix": random_suffix, } acctest.VcrTest(t, resource.TestCase{ @@ -23,13 +24,13 @@ func TestAccAlloydbBackup_update(t *testing.T) { CheckDestroy: testAccCheckAlloydbBackupDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccAlloydbBackup_alloydbBackupFullExample(context), + Config: testAccAlloydbBackup_alloydbBackupBasic(context), }, { ResourceName: "google_alloydb_backup.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time"}, + ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time", "labels", "terraform_labels"}, }, { Config: testAccAlloydbBackup_update(context), @@ -38,14 +39,13 @@ func TestAccAlloydbBackup_update(t *testing.T) { ResourceName: "google_alloydb_backup.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time"}, + ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time", "labels", "terraform_labels"}, }, }, }) } -// Updates "label" field from testAccAlloydbBackup_alloydbBackupFullExample -func testAccAlloydbBackup_update(context map[string]interface{}) string { +func testAccAlloydbBackup_alloydbBackupBasic(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_alloydb_backup" "default" { location = "us-central1" @@ -54,8 +54,7 @@ resource "google_alloydb_backup" "default" { description = "example description" labels = { - "label" = "updated_key" - "label2" = "updated_key2" + "label" = "key" } depends_on = [google_alloydb_instance.default] } @@ -70,22 +69,40 @@ resource "google_alloydb_instance" "default" { cluster = google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" +} - depends_on = [google_service_networking_connection.vpc_connection] +data "google_compute_network" "default" { + name = "%{network_name}" +} +`, context) } -resource 
"google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id +// Updates "label" field +func testAccAlloydbBackup_update(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_alloydb_backup" "default" { + location = "us-central1" + backup_id = "tf-test-alloydb-backup%{random_suffix}" + cluster_name = google_alloydb_cluster.default.name + + description = "example description" + labels = { + "label" = "updated_key" + "label2" = "updated_key2" + } + depends_on = [google_alloydb_instance.default] } -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] +resource "google_alloydb_cluster" "default" { + cluster_id = "tf-test-alloydb-cluster%{random_suffix}" + location = "us-central1" + network = data.google_compute_network.default.id +} + +resource "google_alloydb_instance" "default" { + cluster = google_alloydb_cluster.default.name + instance_id = "tf-test-alloydb-instance%{random_suffix}" + instance_type = "PRIMARY" } data "google_compute_network" "default" { @@ -100,7 +117,7 @@ func TestAccAlloydbBackup_createBackupWithMandatoryFields(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbbackup-mandatory"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-backup-mandatory-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -140,32 +157,6 @@ resource "google_alloydb_instance" "default" { cluster = google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] -} - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id - lifecycle { - ignore_changes = [ - address, - creation_timestamp, - id, - network, - project, - self_link - ] - } -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } `, context) } @@ -174,7 +165,7 @@ func TestAccAlloydbBackup_usingCMEK(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydb-backup-cmek"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-backup-cmek-1"), "random_suffix": acctest.RandString(t, 10), "key_name": "tf-test-key-" + acctest.RandString(t, 10), } @@ -191,7 +182,7 @@ func TestAccAlloydbBackup_usingCMEK(t *testing.T) { ResourceName: "google_alloydb_backup.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time"}, + ImportStateVerifyIgnore: []string{"backup_id", "location", "reconciling", "update_time", "labels", "terraform_labels"}, }, }, }) @@ -224,22 +215,6 @@ resource "google_alloydb_instance" "default" { cluster = 
google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] -} - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } data "google_compute_network" "default" { diff --git a/google/services/alloydb/resource_alloydb_cluster.go b/google/services/alloydb/resource_alloydb_cluster.go index 0ae6467333d..0824ea08190 100644 --- a/google/services/alloydb/resource_alloydb_cluster.go +++ b/google/services/alloydb/resource_alloydb_cluster.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceAlloydbCluster() *schema.Resource { Delete: schema.DefaultTimeout(10 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "cluster_id": { Type: schema.TypeString, @@ -295,10 +301,13 @@ If not set, defaults to 14 days.`, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels for the alloydb cluster.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels for the alloydb cluster. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "network": { Type: schema.TypeString, @@ -446,6 +455,12 @@ It is specified in the form: "projects/{projectNumber}/global/networks/{network_ Computed: true, Description: `The database engine major version. This is an output-only field and it's populated at the Cluster creation time. This field cannot be changed after cluster creation.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "encryption_info": { Type: schema.TypeList, Computed: true, @@ -509,6 +524,13 @@ This can happen due to user-triggered updates or system actions like failover or Computed: true, Description: `Output only. 
The current serving state of the cluster.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -533,12 +555,6 @@ func resourceAlloydbClusterCreate(d *schema.ResourceData, meta interface{}) erro } obj := make(map[string]interface{}) - labelsProp, err := expandAlloydbClusterLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } encryptionConfigProp, err := expandAlloydbClusterEncryptionConfig(d.Get("encryption_config"), d, config) if err != nil { return err @@ -605,6 +621,12 @@ func resourceAlloydbClusterCreate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("automated_backup_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(automatedBackupPolicyProp)) && (ok || !reflect.DeepEqual(v, automatedBackupPolicyProp)) { obj["automatedBackupPolicy"] = automatedBackupPolicyProp } + labelsProp, err := expandAlloydbClusterEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{AlloydbBasePath}}projects/{{project}}/locations/{{location}}/clusters?clusterId={{cluster_id}}") if err != nil { @@ -787,6 +809,12 @@ func resourceAlloydbClusterRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("migration_source", flattenAlloydbClusterMigrationSource(res["migrationSource"], d, config)); err != nil { return fmt.Errorf("Error reading Cluster: %s", err) } + if err := d.Set("terraform_labels", flattenAlloydbClusterTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Cluster: %s", err) + } + if err := d.Set("effective_labels", flattenAlloydbClusterEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Cluster: %s", err) + } return nil } @@ -807,12 +835,6 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandAlloydbClusterLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } encryptionConfigProp, err := expandAlloydbClusterEncryptionConfig(d.Get("encryption_config"), d, config) if err != nil { return err @@ -867,6 +889,12 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("automated_backup_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, automatedBackupPolicyProp)) { obj["automatedBackupPolicy"] = automatedBackupPolicyProp } + labelsProp, err := expandAlloydbClusterEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{AlloydbBasePath}}projects/{{project}}/locations/{{location}}/clusters/{{cluster_id}}") if err != nil { @@ -876,10 +904,6 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Updating Cluster %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("encryption_config") { updateMask = append(updateMask, "encryptionConfig") } @@ -915,6 +939,10 @@ func resourceAlloydbClusterUpdate(d *schema.ResourceData, meta interface{}) erro if d.HasChange("automated_backup_policy") { updateMask = append(updateMask, "automatedBackupPolicy") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -1010,10 +1038,10 @@ func resourceAlloydbClusterDelete(d *schema.ResourceData, meta interface{}) erro func resourceAlloydbClusterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/clusters/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/clusters/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1037,7 +1065,18 @@ func flattenAlloydbClusterUid(v interface{}, d *schema.ResourceData, config *tra } func flattenAlloydbClusterLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenAlloydbClusterEncryptionConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1502,15 +1541,23 @@ func flattenAlloydbClusterMigrationSourceSourceType(v interface{}, d *schema.Res return v } -func expandAlloydbClusterLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenAlloydbClusterTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenAlloydbClusterEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandAlloydbClusterEncryptionConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -1996,3 +2043,14 @@ func expandAlloydbClusterAutomatedBackupPolicyQuantityBasedRetentionCount(v inte func expandAlloydbClusterAutomatedBackupPolicyEnabled(v interface{}, 
d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandAlloydbClusterEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/alloydb/resource_alloydb_cluster_generated_test.go b/google/services/alloydb/resource_alloydb_cluster_generated_test.go index c6b6bddf127..a841ef90fbd 100644 --- a/google/services/alloydb/resource_alloydb_cluster_generated_test.go +++ b/google/services/alloydb/resource_alloydb_cluster_generated_test.go @@ -49,7 +49,7 @@ func TestAccAlloydbCluster_alloydbClusterBasicExample(t *testing.T) { ResourceName: "google_alloydb_cluster.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"initial_user", "restore_backup_source", "restore_continuous_backup_source", "cluster_id", "location"}, + ImportStateVerifyIgnore: []string{"initial_user", "restore_backup_source", "restore_continuous_backup_source", "cluster_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -90,7 +90,7 @@ func TestAccAlloydbCluster_alloydbClusterFullExample(t *testing.T) { ResourceName: "google_alloydb_cluster.full", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"initial_user", "restore_backup_source", "restore_continuous_backup_source", "cluster_id", "location"}, + ImportStateVerifyIgnore: []string{"initial_user", "restore_backup_source", "restore_continuous_backup_source", "cluster_id", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/alloydb/resource_alloydb_cluster_restore_test.go b/google/services/alloydb/resource_alloydb_cluster_restore_test.go index 511f65d0dfc..35701b56266 100644 --- a/google/services/alloydb/resource_alloydb_cluster_restore_test.go +++ b/google/services/alloydb/resource_alloydb_cluster_restore_test.go @@ -21,7 +21,7 @@ func TestAccAlloydbCluster_restore(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-restore"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-restore-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -95,8 +95,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -112,20 +110,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -142,8 +126,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = 
"tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -176,20 +158,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -206,8 +174,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -237,20 +203,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -266,8 +218,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -296,20 +246,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -327,8 +263,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -371,20 +305,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = 
"servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -402,8 +322,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -456,20 +374,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -487,8 +391,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -551,20 +453,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -582,8 +470,6 @@ resource "google_alloydb_instance" "source" { cluster = google_alloydb_cluster.source.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_backup" "default" { @@ -618,19 +504,5 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } diff --git a/google/services/alloydb/resource_alloydb_cluster_test.go b/google/services/alloydb/resource_alloydb_cluster_test.go index 99794c3fe26..5db9ca950d4 100644 --- a/google/services/alloydb/resource_alloydb_cluster_test.go +++ b/google/services/alloydb/resource_alloydb_cluster_test.go @@ -28,7 +28,7 @@ func TestAccAlloydbCluster_update(t *testing.T) { ResourceName: "google_alloydb_cluster.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"initial_user", "cluster_id", "location"}, + ImportStateVerifyIgnore: 
[]string{"initial_user", "cluster_id", "location", "labels", "terraform_labels"}, }, { Config: testAccAlloydbCluster_update(context), @@ -37,7 +37,7 @@ func TestAccAlloydbCluster_update(t *testing.T) { ResourceName: "google_alloydb_cluster.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"initial_user", "cluster_id", "location"}, + ImportStateVerifyIgnore: []string{"initial_user", "cluster_id", "location", "labels", "terraform_labels"}, }, { Config: testAccAlloydbCluster_alloydbClusterBasicExample(context), diff --git a/google/services/alloydb/resource_alloydb_instance.go b/google/services/alloydb/resource_alloydb_instance.go index 188eb31bbad..7f746efc01d 100644 --- a/google/services/alloydb/resource_alloydb_instance.go +++ b/google/services/alloydb/resource_alloydb_instance.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceAlloydbInstance() *schema.Resource { Delete: schema.DefaultTimeout(40 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + ), + Schema: map[string]*schema.Schema{ "cluster": { Type: schema.TypeString, @@ -105,10 +111,13 @@ can have regional availability (nodes are present in 2 or more zones in a region Description: `The Compute Engine zone that the instance should serve from, per https://cloud.google.com/compute/docs/regions-zones This can ONLY be specified for ZONAL instances. If present for a REGIONAL instance, an error will be thrown. If this is absent for a ZONAL instance, instance is created in a random zone with available capacity.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels for the alloydb instance.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels for the alloydb instance. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "machine_config": { Type: schema.TypeList, @@ -178,6 +187,18 @@ can have regional availability (nodes are present in 2 or more zones in a region Computed: true, Description: `Time the Instance was created in UTC.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "ip_address": { Type: schema.TypeString, Computed: true, @@ -198,6 +219,13 @@ can have regional availability (nodes are present in 2 or more zones in a region Computed: true, Description: `The current state of the alloydb instance.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -222,18 +250,6 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err } obj := make(map[string]interface{}) - labelsProp, err := expandAlloydbInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandAlloydbInstanceAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } displayNameProp, err := expandAlloydbInstanceDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -282,6 +298,18 @@ func resourceAlloydbInstanceCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("machine_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(machineConfigProp)) && (ok || !reflect.DeepEqual(v, machineConfigProp)) { obj["machineConfig"] = machineConfigProp } + labelsProp, err := expandAlloydbInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandAlloydbInstanceEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{AlloydbBasePath}}{{cluster}}/instances?instanceId={{instance_id}}") if err != nil { @@ -409,6 +437,15 @@ func resourceAlloydbInstanceRead(d *schema.ResourceData, 
meta interface{}) error if err := d.Set("machine_config", flattenAlloydbInstanceMachineConfig(res["machineConfig"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenAlloydbInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenAlloydbInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_annotations", flattenAlloydbInstanceEffectiveAnnotations(res["annotations"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -424,18 +461,6 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err billingProject := "" obj := make(map[string]interface{}) - labelsProp, err := expandAlloydbInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandAlloydbInstanceAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } displayNameProp, err := expandAlloydbInstanceDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -478,6 +503,18 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("machine_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, machineConfigProp)) { obj["machineConfig"] = machineConfigProp } + labelsProp, err := expandAlloydbInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandAlloydbInstanceEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{AlloydbBasePath}}{{cluster}}/instances/{{instance_id}}") if err != nil { @@ -487,14 +524,6 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err log.Printf("[DEBUG] Updating Instance %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - - if d.HasChange("annotations") { - updateMask = append(updateMask, "annotations") - } - if d.HasChange("display_name") { updateMask = append(updateMask, "displayName") } @@ -522,6 +551,14 @@ func resourceAlloydbInstanceUpdate(d *schema.ResourceData, meta interface{}) err if d.HasChange("machine_config") { updateMask = append(updateMask, "machineConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } + + if d.HasChange("effective_annotations") { + updateMask = append(updateMask, "annotations") + } // updateMask is a URL parameter but not 
present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -646,11 +683,33 @@ func flattenAlloydbInstanceUid(v interface{}, d *schema.ResourceData, config *tr } func flattenAlloydbInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenAlloydbInstanceAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenAlloydbInstanceState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -802,26 +861,27 @@ func flattenAlloydbInstanceMachineConfigCpuCount(v interface{}, d *schema.Resour return v // let terraform core handle it otherwise } -func expandAlloydbInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenAlloydbInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed } -func expandAlloydbInstanceAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil +func flattenAlloydbInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenAlloydbInstanceEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandAlloydbInstanceDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -952,3 +1012,25 @@ func expandAlloydbInstanceMachineConfig(v interface{}, d tpgresource.TerraformRe func expandAlloydbInstanceMachineConfigCpuCount(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandAlloydbInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandAlloydbInstanceEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return 
map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/alloydb/resource_alloydb_instance_generated_test.go b/google/services/alloydb/resource_alloydb_instance_generated_test.go index cdd3112c7e4..1e2f6aa572f 100644 --- a/google/services/alloydb/resource_alloydb_instance_generated_test.go +++ b/google/services/alloydb/resource_alloydb_instance_generated_test.go @@ -34,7 +34,6 @@ func TestAccAlloydbInstance_alloydbInstanceBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydb-instance-basic"), "random_suffix": acctest.RandString(t, 10), } @@ -50,7 +49,7 @@ func TestAccAlloydbInstance_alloydbInstanceBasicExample(t *testing.T) { ResourceName: "google_alloydb_instance.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"display_name", "cluster", "instance_id", "reconciling", "update_time"}, + ImportStateVerifyIgnore: []string{"display_name", "cluster", "instance_id", "reconciling", "update_time", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -73,7 +72,7 @@ resource "google_alloydb_instance" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "tf-test-alloydb-cluster%{random_suffix}" location = "us-central1" - network = data.google_compute_network.default.id + network = google_compute_network.default.id initial_user { password = "tf-test-alloydb-cluster%{random_suffix}" @@ -82,8 +81,8 @@ resource "google_alloydb_cluster" "default" { data "google_project" "project" {} -data "google_compute_network" "default" { - name = "%{network_name}" +resource "google_compute_network" "default" { + name = "tf-test-alloydb-network%{random_suffix}" } resource "google_compute_global_address" "private_ip_alloc" { @@ -91,11 +90,11 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } diff --git a/google/services/alloydb/resource_alloydb_instance_test.go b/google/services/alloydb/resource_alloydb_instance_test.go index b6207dea31d..73bc2b08c65 100644 --- a/google/services/alloydb/resource_alloydb_instance_test.go +++ b/google/services/alloydb/resource_alloydb_instance_test.go @@ -3,18 +3,21 @@ package alloydb_test import ( + "fmt" "testing" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-provider-google/google/acctest" + "github.com/hashicorp/terraform-provider-google/google/envvar" ) func TestAccAlloydbInstance_update(t *testing.T) { t.Parallel() + random_suffix := acctest.RandString(t, 10) context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-update"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-update-1"), + "random_suffix": random_suffix, } acctest.VcrTest(t, resource.TestCase{ @@ -23,7 +26,7 @@ func TestAccAlloydbInstance_update(t *testing.T) { CheckDestroy: 
testAccCheckAlloydbInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccAlloydbInstance_alloydbInstanceBasicExample(context), + Config: testAccAlloydbInstance_alloydbInstanceBasic(context), }, { ResourceName: "google_alloydb_instance.default", @@ -38,12 +41,40 @@ func TestAccAlloydbInstance_update(t *testing.T) { ResourceName: "google_alloydb_instance.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"cluster", "instance_id", "reconciling", "update_time"}, + ImportStateVerifyIgnore: []string{"cluster", "instance_id", "reconciling", "update_time", "labels", "terraform_labels"}, }, }, }) } +func testAccAlloydbInstance_alloydbInstanceBasic(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_alloydb_instance" "default" { + cluster = google_alloydb_cluster.default.name + instance_id = "tf-test-alloydb-instance%{random_suffix}" + instance_type = "PRIMARY" + + machine_config { + cpu_count = 2 + } +} + +resource "google_alloydb_cluster" "default" { + cluster_id = "tf-test-alloydb-cluster%{random_suffix}" + location = "us-central1" + network = data.google_compute_network.default.id + + initial_user { + password = "tf-test-alloydb-cluster%{random_suffix}" + } +} + +data "google_compute_network" "default" { + name = "%{network_name}" +} +`, context) +} + func testAccAlloydbInstance_update(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_alloydb_instance" "default" { @@ -58,8 +89,6 @@ resource "google_alloydb_instance" "default" { labels = { test = "tf-test-alloydb-instance%{random_suffix}" } - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_cluster" "default" { @@ -72,26 +101,9 @@ resource "google_alloydb_cluster" "default" { } } -data "google_project" "project" { -} - data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -101,7 +113,7 @@ func TestAccAlloydbInstance_createInstanceWithMandatoryFields(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-mandatory"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-mandatory-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -122,8 +134,6 @@ resource "google_alloydb_instance" "default" { cluster = google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_cluster" "default" { @@ -137,20 +147,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource 
"google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -160,7 +156,7 @@ resource "google_service_networking_connection" "vpc_connection" { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-maximum"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-maximum-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -221,20 +217,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) }*/ @@ -244,7 +226,7 @@ func TestAccAlloydbInstance_createPrimaryAndReadPoolInstance(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-readpool"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-readpool-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -265,7 +247,6 @@ resource "google_alloydb_instance" "primary" { cluster = google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_instance" "read_pool" { @@ -275,7 +256,7 @@ resource "google_alloydb_instance" "read_pool" { read_pool_config { node_count = 4 } - depends_on = [google_service_networking_connection.vpc_connection, google_alloydb_instance.primary] + depends_on = [google_alloydb_instance.primary] } resource "google_alloydb_cluster" "default" { @@ -289,20 +270,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -312,7 +279,7 @@ resource "google_service_networking_connection" "vpc_connection" { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-updatedb"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "alloydb-instance-updatedb-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -344,7 +311,6 @@ resource "google_alloydb_instance" "primary" { database_flags = { "alloydb.enable_auto_explain" = "true" } - depends_on = [google_service_networking_connection.vpc_connection] } resource 
"google_alloydb_cluster" "default" { @@ -358,20 +324,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) }*/ @@ -384,7 +336,6 @@ resource "google_alloydb_instance" "primary" { database_flags = { "alloydb.enable_auto_explain" = "false" } - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_cluster" "default" { @@ -398,20 +349,6 @@ data "google_project" "project" {} data "google_compute_network" "default" { name = "%{network_name}" } - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} `, context) } @@ -419,9 +356,17 @@ resource "google_service_networking_connection" "vpc_connection" { func TestAccAlloydbInstance_createInstanceWithNetworkConfigAndAllocatedIPRange(t *testing.T) { t.Parallel() + projectNumber := envvar.GetTestProjectNumberFromEnv() + testId := "alloydbinstance-network-config-1" + networkName := acctest.BootstrapSharedTestNetwork(t, testId) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + addressName := acctest.BootstrapSharedTestGlobalAddress(t, testId, networkId) + acctest.BootstrapSharedServiceNetworkingConnection(t, testId) + context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "alloydbinstance-network-config"), + "network_name": networkName, + "address_name": addressName, } acctest.VcrTest(t, resource.TestCase{ @@ -442,7 +387,6 @@ resource "google_alloydb_instance" "default" { cluster = google_alloydb_cluster.default.name instance_id = "tf-test-alloydb-instance%{random_suffix}" instance_type = "PRIMARY" - depends_on = [google_service_networking_connection.vpc_connection] } resource "google_alloydb_cluster" "default" { @@ -450,29 +394,16 @@ resource "google_alloydb_cluster" "default" { location = "us-central1" network_config { network = data.google_compute_network.default.id - allocated_ip_range = google_compute_global_address.private_ip_alloc.name + allocated_ip_range = data.google_compute_global_address.private_ip_alloc.name } - } -data "google_project" "project" {} - data "google_compute_network" "default" { name = "%{network_name}" } -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-alloydb-cluster%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = 
"servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] +data "google_compute_global_address" "private_ip_alloc" { + name = "%{address_name}" } `, context) } diff --git a/google/services/apigee/resource_apigee_sync_authorization.go b/google/services/apigee/resource_apigee_sync_authorization.go index bc988ecbd1f..12a36558855 100644 --- a/google/services/apigee/resource_apigee_sync_authorization.go +++ b/google/services/apigee/resource_apigee_sync_authorization.go @@ -243,8 +243,8 @@ func resourceApigeeSyncAuthorizationDelete(d *schema.ResourceData, meta interfac func resourceApigeeSyncAuthorizationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "organizations/(?P[^/]+)/syncAuthorization", - "(?P[^/]+)", + "^organizations/(?P[^/]+)/syncAuthorization$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/apikeys/resource_apikeys_key.go b/google/services/apikeys/resource_apikeys_key.go index 7c026657464..a678d7484c9 100644 --- a/google/services/apikeys/resource_apikeys_key.go +++ b/google/services/apikeys/resource_apikeys_key.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceApikeysKey() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "name": { diff --git a/google/services/appengine/data_source_google_app_engine_default_service_account.go b/google/services/appengine/data_source_google_app_engine_default_service_account.go index 33695e4d259..6e643afd980 100644 --- a/google/services/appengine/data_source_google_app_engine_default_service_account.go +++ b/google/services/appengine/data_source_google_app_engine_default_service_account.go @@ -64,7 +64,7 @@ func dataSourceGoogleAppEngineDefaultServiceAccountRead(d *schema.ResourceData, sa, err := config.NewIamClient(userAgent).Projects.ServiceAccounts.Get(serviceAccountName).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName), serviceAccountName) } d.SetId(sa.Name) diff --git a/google/services/appengine/resource_app_engine_application.go b/google/services/appengine/resource_app_engine_application.go index 7b69cd52d0b..55de4dc221e 100644 --- a/google/services/appengine/resource_app_engine_application.go +++ b/google/services/appengine/resource_app_engine_application.go @@ -34,6 +34,7 @@ func ResourceAppEngineApplication() *schema.Resource { }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, appEngineApplicationLocationIDCustomizeDiff, ), diff --git a/google/services/appengine/resource_app_engine_application_url_dispatch_rules.go b/google/services/appengine/resource_app_engine_application_url_dispatch_rules.go index 6001b826859..22150bd60a8 100644 --- a/google/services/appengine/resource_app_engine_application_url_dispatch_rules.go +++ b/google/services/appengine/resource_app_engine_application_url_dispatch_rules.go @@ 
-23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceAppEngineApplicationUrlDispatchRules() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dispatch_rules": { Type: schema.TypeList, @@ -345,7 +350,7 @@ func resourceAppEngineApplicationUrlDispatchRulesDelete(d *schema.ResourceData, func resourceAppEngineApplicationUrlDispatchRulesImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_domain_mapping.go b/google/services/appengine/resource_app_engine_domain_mapping.go index 1321e149902..2043bc89b5e 100644 --- a/google/services/appengine/resource_app_engine_domain_mapping.go +++ b/google/services/appengine/resource_app_engine_domain_mapping.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -62,6 +63,10 @@ func ResourceAppEngineDomainMapping() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "domain_name": { Type: schema.TypeString, @@ -453,9 +458,9 @@ func resourceAppEngineDomainMappingDelete(d *schema.ResourceData, meta interface func resourceAppEngineDomainMappingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/domainMappings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^apps/(?P[^/]+)/domainMappings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_firewall_rule.go b/google/services/appengine/resource_app_engine_firewall_rule.go index c53246e6507..fdefd248571 100644 --- a/google/services/appengine/resource_app_engine_firewall_rule.go +++ b/google/services/appengine/resource_app_engine_firewall_rule.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceAppEngineFirewallRule() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "action": { Type: schema.TypeString, @@ -429,9 +434,9 @@ func resourceAppEngineFirewallRuleDelete(d *schema.ResourceData, meta interface{ func resourceAppEngineFirewallRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/firewall/ingressRules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + 
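The import patterns in these files gain `^`/`$` anchors so `tpgresource.ParseImportId` accepts only exact matches instead of any ID that merely contains a matching substring. A minimal, self-contained illustration of the difference (the ID and patterns below are made up for the example):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A malformed import ID with a trailing segment that should be rejected.
	id := "apps/my-project/domainMappings/example.com/extra"

	loose := regexp.MustCompile(`apps/([^/]+)/domainMappings/([^/]+)`)
	strict := regexp.MustCompile(`^apps/([^/]+)/domainMappings/([^/]+)$`)

	fmt.Println(loose.MatchString(id))  // true: an unanchored pattern matches a prefix
	fmt.Println(strict.MatchString(id)) // false: the anchored pattern rejects the extra segment
}
```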
"^apps/(?P[^/]+)/firewall/ingressRules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_flexible_app_version.go b/google/services/appengine/resource_app_engine_flexible_app_version.go index adb00f5a19f..10630fac89d 100644 --- a/google/services/appengine/resource_app_engine_flexible_app_version.go +++ b/google/services/appengine/resource_app_engine_flexible_app_version.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceAppEngineFlexibleAppVersion() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "liveness_check": { Type: schema.TypeList, @@ -1531,9 +1536,9 @@ func resourceAppEngineFlexibleAppVersionDelete(d *schema.ResourceData, meta inte func resourceAppEngineFlexibleAppVersionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/services/(?P[^/]+)/versions/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^apps/(?P[^/]+)/services/(?P[^/]+)/versions/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_service_network_settings.go b/google/services/appengine/resource_app_engine_service_network_settings.go index abf2790d18d..5ca80749720 100644 --- a/google/services/appengine/resource_app_engine_service_network_settings.go +++ b/google/services/appengine/resource_app_engine_service_network_settings.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceAppEngineServiceNetworkSettings() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "network_settings": { Type: schema.TypeList, @@ -316,9 +321,9 @@ func resourceAppEngineServiceNetworkSettingsDelete(d *schema.ResourceData, meta func resourceAppEngineServiceNetworkSettingsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^apps/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_service_split_traffic.go b/google/services/appengine/resource_app_engine_service_split_traffic.go index df5584deefd..1e3614c810c 100644 --- a/google/services/appengine/resource_app_engine_service_split_traffic.go +++ b/google/services/appengine/resource_app_engine_service_split_traffic.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
"github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceAppEngineServiceSplitTraffic() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "service": { Type: schema.TypeString, @@ -323,9 +328,9 @@ func resourceAppEngineServiceSplitTrafficDelete(d *schema.ResourceData, meta int func resourceAppEngineServiceSplitTrafficImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^apps/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/appengine/resource_app_engine_standard_app_version.go b/google/services/appengine/resource_app_engine_standard_app_version.go index d9b276c81f3..7df4504dc4e 100644 --- a/google/services/appengine/resource_app_engine_standard_app_version.go +++ b/google/services/appengine/resource_app_engine_standard_app_version.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceAppEngineStandardAppVersion() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "deployment": { Type: schema.TypeList, @@ -997,9 +1002,9 @@ func resourceAppEngineStandardAppVersionDelete(d *schema.ResourceData, meta inte func resourceAppEngineStandardAppVersionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "apps/(?P[^/]+)/services/(?P[^/]+)/versions/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^apps/(?P[^/]+)/services/(?P[^/]+)/versions/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/artifactregistry/data_source_artifact_registry_repository.go b/google/services/artifactregistry/data_source_artifact_registry_repository.go index ab16924a058..c635450f267 100644 --- a/google/services/artifactregistry/data_source_artifact_registry_repository.go +++ b/google/services/artifactregistry/data_source_artifact_registry_repository.go @@ -40,12 +40,21 @@ func dataSourceArtifactRegistryRepositoryRead(d *schema.ResourceData, meta inter } repository_id := d.Get("repository_id").(string) - d.SetId(fmt.Sprintf("projects/%s/locations/%s/repositories/%s", project, location, repository_id)) + id := fmt.Sprintf("projects/%s/locations/%s/repositories/%s", project, location, repository_id) + d.SetId(id) err = resourceArtifactRegistryRepositoryRead(d, meta) if err != nil { return err } + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/artifactregistry/data_source_artifact_registry_repository_test.go b/google/services/artifactregistry/data_source_artifact_registry_repository_test.go index 108b1c3c6ee..32842918c6f 100644 --- 
a/google/services/artifactregistry/data_source_artifact_registry_repository_test.go +++ b/google/services/artifactregistry/data_source_artifact_registry_repository_test.go @@ -40,6 +40,10 @@ resource "google_artifact_registry_repository" "my-repo" { repository_id = "tf-test-my-repository%{random_suffix}" description = "example docker repository%{random_suffix}" format = "DOCKER" + labels = { + my_key = "my_val" + other_key = "other_val" + } } data "google_artifact_registry_repository" "my-repo" { diff --git a/google/services/artifactregistry/resource_artifact_registry_repository.go b/google/services/artifactregistry/resource_artifact_registry_repository.go index 972fab13298..8f84eededd9 100644 --- a/google/services/artifactregistry/resource_artifact_registry_repository.go +++ b/google/services/artifactregistry/resource_artifact_registry_repository.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceArtifactRegistryRepository() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "format": { Type: schema.TypeString, @@ -102,7 +108,11 @@ This value may not be changed after the Repository has been created.`, This field may contain up to 64 entries. Label keys and values may be no longer than 63 characters. Label keys must begin with a lowercase letter and may only contain lowercase letters, numeric characters, underscores, -and dashes.`, +and dashes. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { @@ -363,12 +373,25 @@ Repository. 
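`ResourceArtifactRegistryRepository`, like the App Engine resources earlier in this diff, now wires its plan-time behaviour through `customdiff.All`, composing the labels diff with `tpgresource.DefaultProviderProject` so the provider-level project default is resolved during planning rather than at apply time. A generic sketch of that composition, using placeholder diff functions in place of the real helpers:

```go
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Placeholder CustomizeDiff functions; in the provider these slots are filled
// by tpgresource.SetLabelsDiff and tpgresource.DefaultProviderProject.
func setLabelsDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	return nil
}

func defaultProviderProject(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	return nil
}

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// customdiff.All runs every function and collects any errors they return.
		CustomizeDiff: customdiff.All(
			setLabelsDiff,
			defaultProviderProject,
		),
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```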
Upstream policies cannot be set on a standard repository.`, Computed: true, Description: `The time when the repository was created.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, Description: `The name of the repository, for example: "repo1"`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -405,12 +428,6 @@ func resourceArtifactRegistryRepositoryCreate(d *schema.ResourceData, meta inter } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandArtifactRegistryRepositoryLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } kmsKeyNameProp, err := expandArtifactRegistryRepositoryKmsKeyName(d.Get("kms_key_name"), d, config) if err != nil { return err @@ -447,6 +464,12 @@ func resourceArtifactRegistryRepositoryCreate(d *schema.ResourceData, meta inter } else if v, ok := d.GetOkExists("remote_repository_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(remoteRepositoryConfigProp)) && (ok || !reflect.DeepEqual(v, remoteRepositoryConfigProp)) { obj["remoteRepositoryConfig"] = remoteRepositoryConfigProp } + labelsProp, err := expandArtifactRegistryRepositoryEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceArtifactRegistryRepositoryEncoder(d, meta, obj) if err != nil { @@ -597,6 +620,12 @@ func resourceArtifactRegistryRepositoryRead(d *schema.ResourceData, meta interfa if err := d.Set("remote_repository_config", flattenArtifactRegistryRepositoryRemoteRepositoryConfig(res["remoteRepositoryConfig"], d, config)); err != nil { return fmt.Errorf("Error reading Repository: %s", err) } + if err := d.Set("terraform_labels", flattenArtifactRegistryRepositoryTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Repository: %s", err) + } + if err := d.Set("effective_labels", flattenArtifactRegistryRepositoryEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Repository: %s", err) + } return nil } @@ -623,12 +652,6 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandArtifactRegistryRepositoryLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && 
(ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } dockerConfigProp, err := expandArtifactRegistryRepositoryDockerConfig(d.Get("docker_config"), d, config) if err != nil { return err @@ -647,6 +670,12 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter } else if v, ok := d.GetOkExists("virtual_repository_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, virtualRepositoryConfigProp)) { obj["virtualRepositoryConfig"] = virtualRepositoryConfigProp } + labelsProp, err := expandArtifactRegistryRepositoryEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceArtifactRegistryRepositoryEncoder(d, meta, obj) if err != nil { @@ -665,10 +694,6 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("docker_config") { updateMask = append(updateMask, "dockerConfig") } @@ -680,6 +705,10 @@ func resourceArtifactRegistryRepositoryUpdate(d *schema.ResourceData, meta inter if d.HasChange("virtual_repository_config") { updateMask = append(updateMask, "virtualRepositoryConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -767,10 +796,10 @@ func resourceArtifactRegistryRepositoryDelete(d *schema.ResourceData, meta inter func resourceArtifactRegistryRepositoryImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/repositories/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/repositories/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -801,7 +830,18 @@ func flattenArtifactRegistryRepositoryDescription(v interface{}, d *schema.Resou } func flattenArtifactRegistryRepositoryLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenArtifactRegistryRepositoryKmsKeyName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1088,6 +1128,25 @@ func flattenArtifactRegistryRepositoryRemoteRepositoryConfigYumRepositoryPublicR return v } +func flattenArtifactRegistryRepositoryTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} 
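With this change a repository carries three label views in state: `effective_labels` mirrors everything the API returns, `terraform_labels` holds the configured plus provider-default labels, and `labels` is filtered back down to only the keys present in configuration. The flatteners above do that filtering; roughly, as a standalone sketch:

```go
package main

import "fmt"

// filterLabels keeps only the API-returned labels whose keys are already
// tracked in the given state field, mirroring the flatten*Labels helpers above.
func filterLabels(apiLabels, stateLabels map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	out := make(map[string]interface{})
	for k := range stateLabels {
		out[k] = apiLabels[k]
	}
	return out
}

func main() {
	api := map[string]interface{}{"env": "prod", "added-out-of-band": "true"}
	configured := map[string]interface{}{"env": "prod"}

	// Only the configured key survives; the out-of-band label stays visible
	// through effective_labels.
	fmt.Println(filterLabels(api, configured)) // map[env:prod]
}
```

Note also that the update path is now keyed off the computed field: `d.HasChange("effective_labels")` is what appends `labels` to the update mask, since the user-facing `labels` field alone no longer captures every label change.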
+ +func flattenArtifactRegistryRepositoryEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandArtifactRegistryRepositoryFormat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1096,17 +1155,6 @@ func expandArtifactRegistryRepositoryDescription(v interface{}, d tpgresource.Te return v, nil } -func expandArtifactRegistryRepositoryLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandArtifactRegistryRepositoryKmsKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1507,6 +1555,17 @@ func expandArtifactRegistryRepositoryRemoteRepositoryConfigYumRepositoryPublicRe return v, nil } +func expandArtifactRegistryRepositoryEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceArtifactRegistryRepositoryEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { config := meta.(*transport_tpg.Config) if _, ok := d.GetOk("location"); !ok { diff --git a/google/services/artifactregistry/resource_artifact_registry_repository_generated_test.go b/google/services/artifactregistry/resource_artifact_registry_repository_generated_test.go index 5e8830df64a..35625b5ff02 100644 --- a/google/services/artifactregistry/resource_artifact_registry_repository_generated_test.go +++ b/google/services/artifactregistry/resource_artifact_registry_repository_generated_test.go @@ -49,7 +49,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryBasicExample(t ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -85,7 +85,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryDockerExample(t ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -126,7 +126,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryCmekExample(t * ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -174,7 +174,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryVirtualExample( ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", 
"terraform_labels"}, }, }, }) @@ -226,7 +226,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteExample(t ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -269,7 +269,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteAptExampl ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -315,7 +315,7 @@ func TestAccArtifactRegistryRepository_artifactRegistryRepositoryRemoteYumExampl ResourceName: "google_artifact_registry_repository.my-repo", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"repository_id", "location"}, + ImportStateVerifyIgnore: []string{"repository_id", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/artifactregistry/resource_artifact_registry_repository_test.go b/google/services/artifactregistry/resource_artifact_registry_repository_test.go index 7951044d348..c9b37c3d4e5 100644 --- a/google/services/artifactregistry/resource_artifact_registry_repository_test.go +++ b/google/services/artifactregistry/resource_artifact_registry_repository_test.go @@ -24,17 +24,19 @@ func TestAccArtifactRegistryRepository_update(t *testing.T) { Config: testAccArtifactRegistryRepository_update(repositoryID), }, { - ResourceName: "google_artifact_registry_repository.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_artifact_registry_repository.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccArtifactRegistryRepository_update2(repositoryID), }, { - ResourceName: "google_artifact_registry_repository.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_artifact_registry_repository.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/assuredworkloads/resource_assured_workloads_workload.go b/google/services/assuredworkloads/resource_assured_workloads_workload.go index 0ea8fa83d5a..dd21adbbd79 100644 --- a/google/services/assuredworkloads/resource_assured_workloads_workload.go +++ b/google/services/assuredworkloads/resource_assured_workloads_workload.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceAssuredWorkloadsWorkload() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "billing_account": { @@ -88,6 +92,12 @@ func ResourceAssuredWorkloadsWorkload() *schema.Resource { Description: "The organization for the resource", }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured 
through Terraform, other clients and services.", + }, + "kms_settings": { Type: schema.TypeList, Optional: true, @@ -97,13 +107,6 @@ func ResourceAssuredWorkloadsWorkload() *schema.Resource { Elem: AssuredWorkloadsWorkloadKmsSettingsSchema(), }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. Labels applied to the workload.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "provisioned_resources_parent": { Type: schema.TypeString, Optional: true, @@ -125,6 +128,13 @@ func ResourceAssuredWorkloadsWorkload() *schema.Resource { Description: "Output only. Immutable. The Workload creation timestamp.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Labels applied to the workload.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "name": { Type: schema.TypeString, Computed: true, @@ -137,6 +147,12 @@ func ResourceAssuredWorkloadsWorkload() *schema.Resource { Description: "Output only. The resources associated with this workload. These resources will be created when creating the workload. If any of the projects already exist, the workload creation will fail. Always read only.", Elem: AssuredWorkloadsWorkloadResourcesSchema(), }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, }, } } @@ -208,8 +224,8 @@ func resourceAssuredWorkloadsWorkloadCreate(d *schema.ResourceData, meta interfa DisplayName: dcl.String(d.Get("display_name").(string)), Location: dcl.String(d.Get("location").(string)), Organization: dcl.String(d.Get("organization").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), KmsSettings: expandAssuredWorkloadsWorkloadKmsSettings(d.Get("kms_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), ProvisionedResourcesParent: dcl.String(d.Get("provisioned_resources_parent").(string)), ResourceSettings: expandAssuredWorkloadsWorkloadResourceSettingsArray(d.Get("resource_settings")), } @@ -271,8 +287,8 @@ func resourceAssuredWorkloadsWorkloadRead(d *schema.ResourceData, meta interface DisplayName: dcl.String(d.Get("display_name").(string)), Location: dcl.String(d.Get("location").(string)), Organization: dcl.String(d.Get("organization").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), KmsSettings: expandAssuredWorkloadsWorkloadKmsSettings(d.Get("kms_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), ProvisionedResourcesParent: dcl.String(d.Get("provisioned_resources_parent").(string)), ResourceSettings: expandAssuredWorkloadsWorkloadResourceSettingsArray(d.Get("resource_settings")), Name: dcl.StringOrNil(d.Get("name").(string)), @@ -315,12 +331,12 @@ func resourceAssuredWorkloadsWorkloadRead(d *schema.ResourceData, meta interface if err = d.Set("organization", res.Organization); err != nil { return fmt.Errorf("error setting organization in state: %s", err) } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) + } if err = d.Set("kms_settings", flattenAssuredWorkloadsWorkloadKmsSettings(res.KmsSettings)); err != nil { return fmt.Errorf("error setting kms_settings in state: %s", err) } - if err = 
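For this DCL-based resource the request payload's `Labels` field is now populated from `effective_labels` via `tpgresource.CheckStringMap`, which converts the `schema.TypeMap` value held in state into the `map[string]string` the DCL client expects. A small sketch of that conversion (illustrative; the real helper may differ in edge cases):

```go
package main

import "fmt"

// checkStringMap sketches what tpgresource.CheckStringMap is used for above:
// turning the schema.TypeMap value held in state into the map[string]string
// the DCL client expects.
func checkStringMap(v interface{}) map[string]string {
	m, ok := v.(map[string]interface{})
	if !ok {
		return nil
	}
	out := make(map[string]string, len(m))
	for k, val := range m {
		out[k] = val.(string)
	}
	return out
}

func main() {
	state := map[string]interface{}{"env": "prod", "team": "data"}
	fmt.Println(checkStringMap(state)) // map[env:prod team:data]
}
```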
d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) - } if err = d.Set("provisioned_resources_parent", res.ProvisionedResourcesParent); err != nil { return fmt.Errorf("error setting provisioned_resources_parent in state: %s", err) } @@ -330,12 +346,18 @@ func resourceAssuredWorkloadsWorkloadRead(d *schema.ResourceData, meta interface if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenAssuredWorkloadsWorkloadLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("name", res.Name); err != nil { return fmt.Errorf("error setting name in state: %s", err) } if err = d.Set("resources", flattenAssuredWorkloadsWorkloadResourcesArray(res.Resources)); err != nil { return fmt.Errorf("error setting resources in state: %s", err) } + if err = d.Set("terraform_labels", flattenAssuredWorkloadsWorkloadTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } return nil } @@ -348,8 +370,8 @@ func resourceAssuredWorkloadsWorkloadUpdate(d *schema.ResourceData, meta interfa DisplayName: dcl.String(d.Get("display_name").(string)), Location: dcl.String(d.Get("location").(string)), Organization: dcl.String(d.Get("organization").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), KmsSettings: expandAssuredWorkloadsWorkloadKmsSettings(d.Get("kms_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), ProvisionedResourcesParent: dcl.String(d.Get("provisioned_resources_parent").(string)), ResourceSettings: expandAssuredWorkloadsWorkloadResourceSettingsArray(d.Get("resource_settings")), Name: dcl.StringOrNil(d.Get("name").(string)), @@ -361,8 +383,8 @@ func resourceAssuredWorkloadsWorkloadUpdate(d *schema.ResourceData, meta interfa DisplayName: dcl.String(tpgdclresource.OldValue(d.GetChange("display_name")).(string)), Location: dcl.String(tpgdclresource.OldValue(d.GetChange("location")).(string)), Organization: dcl.String(tpgdclresource.OldValue(d.GetChange("organization")).(string)), + Labels: tpgresource.CheckStringMap(tpgdclresource.OldValue(d.GetChange("effective_labels"))), KmsSettings: expandAssuredWorkloadsWorkloadKmsSettings(tpgdclresource.OldValue(d.GetChange("kms_settings"))), - Labels: tpgresource.CheckStringMap(tpgdclresource.OldValue(d.GetChange("labels"))), ProvisionedResourcesParent: dcl.String(tpgdclresource.OldValue(d.GetChange("provisioned_resources_parent")).(string)), ResourceSettings: expandAssuredWorkloadsWorkloadResourceSettingsArray(tpgdclresource.OldValue(d.GetChange("resource_settings"))), Name: dcl.StringOrNil(tpgdclresource.OldValue(d.GetChange("name")).(string)), @@ -410,8 +432,8 @@ func resourceAssuredWorkloadsWorkloadDelete(d *schema.ResourceData, meta interfa DisplayName: dcl.String(d.Get("display_name").(string)), Location: dcl.String(d.Get("location").(string)), Organization: dcl.String(d.Get("organization").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), KmsSettings: expandAssuredWorkloadsWorkloadKmsSettings(d.Get("kms_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), ProvisionedResourcesParent: dcl.String(d.Get("provisioned_resources_parent").(string)), ResourceSettings: expandAssuredWorkloadsWorkloadResourceSettingsArray(d.Get("resource_settings")), Name: dcl.StringOrNil(d.Get("name").(string)), @@ -573,3 +595,33 @@ 
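The new computed `terraform_labels` attribute is documented above as "the combination of labels configured directly on the resource and default labels configured on the provider". Taking that description at face value, the merge presumably behaves like the following sketch, with resource-level labels overriding provider defaults; this illustrates the documented semantics rather than the provider's implementation:

```go
package main

import "fmt"

// mergeTerraformLabels illustrates the documented meaning of terraform_labels:
// provider-level default labels, overridden by labels set on the resource.
func mergeTerraformLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	out := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		out[k] = v
	}
	for k, v := range resourceLabels {
		out[k] = v // resource configuration takes precedence
	}
	return out
}

func main() {
	defaults := map[string]string{"team": "platform", "env": "dev"}
	configured := map[string]string{"env": "prod"}
	fmt.Println(mergeTerraformLabels(defaults, configured)) // map[env:prod team:platform]
}
```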
func flattenAssuredWorkloadsWorkloadResources(obj *assuredworkloads.WorkloadReso return transformed } + +func flattenAssuredWorkloadsWorkloadLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenAssuredWorkloadsWorkloadTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/assuredworkloads/resource_assured_workloads_workload_generated_test.go b/google/services/assuredworkloads/resource_assured_workloads_workload_generated_test.go index ba22fb0b5f3..fab46a01c3f 100644 --- a/google/services/assuredworkloads/resource_assured_workloads_workload_generated_test.go +++ b/google/services/assuredworkloads/resource_assured_workloads_workload_generated_test.go @@ -55,7 +55,7 @@ func TestAccAssuredWorkloadsWorkload_BasicHandWritten(t *testing.T) { ResourceName: "google_assured_workloads_workload.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent"}, + ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent", "labels", "terraform_labels"}, }, { Config: testAccAssuredWorkloadsWorkload_BasicHandWrittenUpdate0(context), @@ -64,7 +64,7 @@ func TestAccAssuredWorkloadsWorkload_BasicHandWritten(t *testing.T) { ResourceName: "google_assured_workloads_workload.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent"}, + ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent", "labels", "terraform_labels"}, }, }, }) @@ -91,7 +91,7 @@ func TestAccAssuredWorkloadsWorkload_FullHandWritten(t *testing.T) { ResourceName: "google_assured_workloads_workload.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent"}, + ImportStateVerifyIgnore: []string{"billing_account", "kms_settings", "resource_settings", "provisioned_resources_parent", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_connection.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_connection.go index 0089ac003bd..558a3cba3af 100644 --- a/google/services/beyondcorp/data_source_google_beyondcorp_app_connection.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_connection.go @@ -40,7 +40,21 @@ func dataSourceGoogleBeyondcorpAppConnectionRead(d *schema.ResourceData, meta in return err } - d.SetId(fmt.Sprintf("projects/%s/locations/%s/appConnections/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/locations/%s/appConnections/%s", project, region, name) + d.SetId(id) - return resourceBeyondcorpAppConnectionRead(d, meta) + err = resourceBeyondcorpAppConnectionRead(d, meta) + if err != nil { + return err + } + + if err := 
tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_connection_test.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_connection_test.go index 8fd3b178835..aaf4679dd69 100644 --- a/google/services/beyondcorp/data_source_google_beyondcorp_app_connection_test.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_connection_test.go @@ -78,6 +78,9 @@ resource "google_beyondcorp_app_connection" "foo" { port = 8080 } connectors = [google_beyondcorp_app_connector.app_connector.id] + labels = { + my-label = "my-label-value" + } } data "google_beyondcorp_app_connection" "foo" { diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_connector.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_connector.go index 6fbd2af55d4..288e37d574c 100644 --- a/google/services/beyondcorp/data_source_google_beyondcorp_app_connector.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_connector.go @@ -40,7 +40,21 @@ func dataSourceGoogleBeyondcorpAppConnectorRead(d *schema.ResourceData, meta int return err } - d.SetId(fmt.Sprintf("projects/%s/locations/%s/appConnectors/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/locations/%s/appConnectors/%s", project, region, name) + d.SetId(id) - return resourceBeyondcorpAppConnectorRead(d, meta) + err = resourceBeyondcorpAppConnectorRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_connector_test.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_connector_test.go index 9a6114e51a7..6d112fc375d 100644 --- a/google/services/beyondcorp/data_source_google_beyondcorp_app_connector_test.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_connector_test.go @@ -181,6 +181,9 @@ resource "google_beyondcorp_app_connector" "foo" { email = google_service_account.service_account.email } } + labels = { + my-label = "my-label-value" + } } data "google_beyondcorp_app_connector" "foo" { diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway.go index 4960307fa1b..867ae88426c 100644 --- a/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway.go @@ -40,7 +40,21 @@ func dataSourceGoogleBeyondcorpAppGatewayRead(d *schema.ResourceData, meta inter return err } - d.SetId(fmt.Sprintf("projects/%s/locations/%s/appGateways/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/locations/%s/appGateways/%s", project, region, name) + d.SetId(id) - return resourceBeyondcorpAppGatewayRead(d, meta) + err = resourceBeyondcorpAppGatewayRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway_test.go b/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway_test.go index 32607039db0..d9bce5dca30 100644 --- 
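These Beyondcorp data sources, like the Artifact Registry one earlier, all adopt the same wrapper shape: set the full resource ID, delegate to the resource's read function, copy the complete label set into the data source output, and turn a cleared ID into a hard error, matching the change that data sources now fail on 404s instead of returning empty results. Schematically, assuming a generic resource read function:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// dataSourceRead sketches the shared wrapper pattern used by these data sources.
func dataSourceRead(d *schema.ResourceData, meta interface{}, id string,
	resourceRead func(*schema.ResourceData, interface{}) error) error {

	d.SetId(id)

	if err := resourceRead(d, meta); err != nil {
		return err
	}

	// Data sources expose every label the API returned, so the filtered
	// "labels"/"terraform_labels" views are overwritten from "effective_labels"
	// (the assumed behaviour of tpgresource.SetDataSourceLabels).
	effective := d.Get("effective_labels")
	if err := d.Set("labels", effective); err != nil {
		return err
	}
	if err := d.Set("terraform_labels", effective); err != nil {
		return err
	}

	// Resource reads handle NotFound by emptying the ID; a data source must
	// surface that as an error instead of silently returning nothing.
	if d.Id() == "" {
		return fmt.Errorf("%s not found", id)
	}
	return nil
}
```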
a/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway_test.go +++ b/google/services/beyondcorp/data_source_google_beyondcorp_app_gateway_test.go @@ -103,6 +103,9 @@ resource "google_beyondcorp_app_gateway" "foo" { name = "tf-test-appgateway-%{random_suffix}" type = "TCP_PROXY" host_type = "GCP_REGIONAL_MIG" + labels = { + my-label = "my-label-value" + } } data "google_beyondcorp_app_gateway" "foo" { diff --git a/google/services/beyondcorp/resource_beyondcorp_app_connection.go b/google/services/beyondcorp/resource_beyondcorp_app_connection.go index 5c928322ecf..b263bd3e09b 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_connection.go +++ b/google/services/beyondcorp/resource_beyondcorp_app_connection.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceBeyondcorpAppConnection() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "application_endpoint": { Type: schema.TypeList, @@ -121,10 +127,14 @@ for a list of possible values.`, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "region": { Type: schema.TypeString, @@ -140,6 +150,19 @@ for a list of possible values.`, https://cloud.google.com/beyondcorp/docs/reference/rest/v1/projects.locations.appConnections#type for a list of possible values.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -165,12 +188,6 @@ func resourceBeyondcorpAppConnectionCreate(d *schema.ResourceData, meta interfac } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandBeyondcorpAppConnectionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } typeProp, err := expandBeyondcorpAppConnectionType(d.Get("type"), d, config) if err != nil { return err @@ -195,6 +212,12 @@ func resourceBeyondcorpAppConnectionCreate(d *schema.ResourceData, meta interfac } else if v, ok := 
d.GetOkExists("gateway"); !tpgresource.IsEmptyValue(reflect.ValueOf(gatewayProp)) && (ok || !reflect.DeepEqual(v, gatewayProp)) { obj["gateway"] = gatewayProp } + labelsProp, err := expandBeyondcorpAppConnectionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BeyondcorpBasePath}}projects/{{project}}/locations/{{region}}/appConnections?app_connection_id={{name}}") if err != nil { @@ -318,6 +341,12 @@ func resourceBeyondcorpAppConnectionRead(d *schema.ResourceData, meta interface{ if err := d.Set("gateway", flattenBeyondcorpAppConnectionGateway(res["gateway"], d, config)); err != nil { return fmt.Errorf("Error reading AppConnection: %s", err) } + if err := d.Set("terraform_labels", flattenBeyondcorpAppConnectionTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppConnection: %s", err) + } + if err := d.Set("effective_labels", flattenBeyondcorpAppConnectionEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppConnection: %s", err) + } return nil } @@ -344,12 +373,6 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandBeyondcorpAppConnectionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } applicationEndpointProp, err := expandBeyondcorpAppConnectionApplicationEndpoint(d.Get("application_endpoint"), d, config) if err != nil { return err @@ -368,6 +391,12 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac } else if v, ok := d.GetOkExists("gateway"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, gatewayProp)) { obj["gateway"] = gatewayProp } + labelsProp, err := expandBeyondcorpAppConnectionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BeyondcorpBasePath}}projects/{{project}}/locations/{{region}}/appConnections/{{name}}") if err != nil { @@ -381,10 +410,6 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("application_endpoint") { updateMask = append(updateMask, "applicationEndpoint") } @@ -396,6 +421,10 @@ func resourceBeyondcorpAppConnectionUpdate(d *schema.ResourceData, meta interfac if d.HasChange("gateway") { updateMask = append(updateMask, "gateway") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": 
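The update path above follows the provider's usual PATCH pattern: each changed top-level field appends its API name to `updateMask`, and the mask is attached to the request URL as a query parameter so only those fields are modified server-side. With the labels rework, it is the diff on `effective_labels` that contributes the `labels` mask entry. A self-contained sketch of the mask construction (the base URL and field names are chosen for the example):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildPatchURL sketches the update pattern above: collect the API field names
// whose Terraform counterparts changed and attach them as an updateMask query
// parameter so the PATCH only touches those fields.
func buildPatchURL(base string, fieldMap map[string]string, hasChange func(string) bool) (string, error) {
	var mask []string
	for tfField, apiField := range fieldMap {
		if hasChange(tfField) {
			mask = append(mask, apiField)
		}
	}
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("updateMask", strings.Join(mask, ","))
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	fieldMap := map[string]string{"effective_labels": "labels", "display_name": "displayName"}
	hasChange := func(field string) bool { return field == "effective_labels" } // only labels changed
	u, _ := buildPatchURL("https://example.googleapis.com/v1/projects/p/locations/us-central1/appConnections/c", fieldMap, hasChange)
	fmt.Println(u) // ...appConnections/c?updateMask=labels
}
```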
strings.Join(updateMask, ",")}) @@ -491,10 +520,10 @@ func resourceBeyondcorpAppConnectionDelete(d *schema.ResourceData, meta interfac func resourceBeyondcorpAppConnectionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/appConnections/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/appConnections/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -514,7 +543,18 @@ func flattenBeyondcorpAppConnectionDisplayName(v interface{}, d *schema.Resource } func flattenBeyondcorpAppConnectionLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenBeyondcorpAppConnectionType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -609,19 +649,27 @@ func flattenBeyondcorpAppConnectionGatewayIngressPort(v interface{}, d *schema.R return v // let terraform core handle it otherwise } -func expandBeyondcorpAppConnectionDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandBeyondcorpAppConnectionLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenBeyondcorpAppConnectionTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenBeyondcorpAppConnectionEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandBeyondcorpAppConnectionDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandBeyondcorpAppConnectionType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -721,3 +769,14 @@ func expandBeyondcorpAppConnectionGatewayUri(v interface{}, d tpgresource.Terraf func expandBeyondcorpAppConnectionGatewayIngressPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandBeyondcorpAppConnectionEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/beyondcorp/resource_beyondcorp_app_connection_generated_test.go 
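The AppConnection changes above establish the labels pattern that repeats throughout these files: the API's label map is read into three attributes, where `effective_labels` mirrors the API verbatim, while `labels` and `terraform_labels` are filtered down to the keys already tracked in state so the field stays non-authoritative. A minimal standalone sketch of that filtering idea (function and variable names here are illustrative, not the provider's generated helpers):

```go
package main

import "fmt"

// filterToStateKeys mirrors the flatten pattern above: keep only the API label
// values whose keys the state (i.e. the user's configuration) already tracks.
func filterToStateKeys(apiLabels, stateLabels map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	filtered := make(map[string]interface{})
	for k := range stateLabels {
		filtered[k] = apiLabels[k]
	}
	return filtered
}

func main() {
	api := map[string]interface{}{"env": "prod", "added-out-of-band": "true"}
	state := map[string]interface{}{"env": "prod"}

	fmt.Println(filterToStateKeys(api, state)) // labels / terraform_labels view: only managed keys
	fmt.Println(api)                           // effective_labels view: everything on the resource
}
```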
b/google/services/beyondcorp/resource_beyondcorp_app_connection_generated_test.go index 4c4e0847a94..3d9abd88d4c 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_connection_generated_test.go +++ b/google/services/beyondcorp/resource_beyondcorp_app_connection_generated_test.go @@ -49,7 +49,7 @@ func TestAccBeyondcorpAppConnection_beyondcorpAppConnectionBasicExample(t *testi ResourceName: "google_beyondcorp_app_connection.app_connection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) @@ -102,7 +102,7 @@ func TestAccBeyondcorpAppConnection_beyondcorpAppConnectionFullExample(t *testin ResourceName: "google_beyondcorp_app_connection.app_connection", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/beyondcorp/resource_beyondcorp_app_connector.go b/google/services/beyondcorp/resource_beyondcorp_app_connector.go index b7ba55e72d9..039d00b4626 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_connector.go +++ b/google/services/beyondcorp/resource_beyondcorp_app_connector.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceBeyondcorpAppConnector() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -85,10 +91,14 @@ func ResourceBeyondcorpAppConnector() *schema.Resource { Description: `An arbitrary user-provided name for the AppConnector.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "region": { Type: schema.TypeString, @@ -96,11 +106,24 @@ func ResourceBeyondcorpAppConnector() *schema.Resource { ForceNew: true, Description: `The region of the AppConnector.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "state": { Type: schema.TypeString, Computed: true, Description: `Represents the different states of a AppConnector.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -126,18 +149,18 @@ func resourceBeyondcorpAppConnectorCreate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandBeyondcorpAppConnectorLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } principalInfoProp, err := expandBeyondcorpAppConnectorPrincipalInfo(d.Get("principal_info"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("principal_info"); !tpgresource.IsEmptyValue(reflect.ValueOf(principalInfoProp)) && (ok || !reflect.DeepEqual(v, principalInfoProp)) { obj["principalInfo"] = principalInfoProp } + labelsProp, err := expandBeyondcorpAppConnectorEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BeyondcorpBasePath}}projects/{{project}}/locations/{{region}}/appConnectors?app_connector_id={{name}}") if err != nil { @@ -255,6 +278,12 @@ func resourceBeyondcorpAppConnectorRead(d *schema.ResourceData, meta interface{} if err := d.Set("state", flattenBeyondcorpAppConnectorState(res["state"], d, config)); err != nil { return fmt.Errorf("Error reading AppConnector: %s", err) } + if err := d.Set("terraform_labels", flattenBeyondcorpAppConnectorTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppConnector: %s", err) + } + if err := d.Set("effective_labels", flattenBeyondcorpAppConnectorEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppConnector: %s", err) + } return nil } @@ -281,18 +310,18 @@ func resourceBeyondcorpAppConnectorUpdate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandBeyondcorpAppConnectorLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := 
d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } principalInfoProp, err := expandBeyondcorpAppConnectorPrincipalInfo(d.Get("principal_info"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("principal_info"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, principalInfoProp)) { obj["principalInfo"] = principalInfoProp } + labelsProp, err := expandBeyondcorpAppConnectorEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BeyondcorpBasePath}}projects/{{project}}/locations/{{region}}/appConnectors/{{name}}") if err != nil { @@ -306,13 +335,13 @@ func resourceBeyondcorpAppConnectorUpdate(d *schema.ResourceData, meta interface updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("principal_info") { updateMask = append(updateMask, "principalInfo") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -408,10 +437,10 @@ func resourceBeyondcorpAppConnectorDelete(d *schema.ResourceData, meta interface func resourceBeyondcorpAppConnectorImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/appConnectors/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/appConnectors/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -431,7 +460,18 @@ func flattenBeyondcorpAppConnectorDisplayName(v interface{}, d *schema.ResourceD } func flattenBeyondcorpAppConnectorLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenBeyondcorpAppConnectorPrincipalInfo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -468,19 +508,27 @@ func flattenBeyondcorpAppConnectorState(v interface{}, d *schema.ResourceData, c return v } -func expandBeyondcorpAppConnectorDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandBeyondcorpAppConnectorLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenBeyondcorpAppConnectorTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = 
val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenBeyondcorpAppConnectorEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandBeyondcorpAppConnectorDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandBeyondcorpAppConnectorPrincipalInfo(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -524,3 +572,14 @@ func expandBeyondcorpAppConnectorPrincipalInfoServiceAccount(v interface{}, d tp func expandBeyondcorpAppConnectorPrincipalInfoServiceAccountEmail(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandBeyondcorpAppConnectorEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/beyondcorp/resource_beyondcorp_app_connector_generated_test.go b/google/services/beyondcorp/resource_beyondcorp_app_connector_generated_test.go index cfed37e12d5..ca9a86e8c1e 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_connector_generated_test.go +++ b/google/services/beyondcorp/resource_beyondcorp_app_connector_generated_test.go @@ -49,7 +49,7 @@ func TestAccBeyondcorpAppConnector_beyondcorpAppConnectorBasicExample(t *testing ResourceName: "google_beyondcorp_app_connector.app_connector", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) @@ -92,7 +92,7 @@ func TestAccBeyondcorpAppConnector_beyondcorpAppConnectorFullExample(t *testing. 
ResourceName: "google_beyondcorp_app_connector.app_connector", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/beyondcorp/resource_beyondcorp_app_gateway.go b/google/services/beyondcorp/resource_beyondcorp_app_gateway.go index 94bd19cb8d0..b544483c3b9 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_gateway.go +++ b/google/services/beyondcorp/resource_beyondcorp_app_gateway.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceBeyondcorpAppGateway() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -67,11 +73,15 @@ func ResourceBeyondcorpAppGateway() *schema.Resource { Default: "HOST_TYPE_UNSPECIFIED", }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `Resource labels to represent user provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Resource labels to represent user provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "region": { Type: schema.TypeString, @@ -106,11 +116,25 @@ func ResourceBeyondcorpAppGateway() *schema.Resource { }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "state": { Type: schema.TypeString, Computed: true, Description: `Represents the different states of a AppGateway.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uri": { Type: schema.TypeString, Computed: true, @@ -153,10 +177,10 @@ func resourceBeyondcorpAppGatewayCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandBeyondcorpAppGatewayLabels(d.Get("labels"), d, config) + labelsProp, err := expandBeyondcorpAppGatewayEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -285,6 +309,12 @@ func resourceBeyondcorpAppGatewayRead(d 
*schema.ResourceData, meta interface{}) if err := d.Set("allocated_connections", flattenBeyondcorpAppGatewayAllocatedConnections(res["allocatedConnections"], d, config)); err != nil { return fmt.Errorf("Error reading AppGateway: %s", err) } + if err := d.Set("terraform_labels", flattenBeyondcorpAppGatewayTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppGateway: %s", err) + } + if err := d.Set("effective_labels", flattenBeyondcorpAppGatewayEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AppGateway: %s", err) + } return nil } @@ -345,10 +375,10 @@ func resourceBeyondcorpAppGatewayDelete(d *schema.ResourceData, meta interface{} func resourceBeyondcorpAppGatewayImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/appGateways/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/appGateways/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -376,7 +406,18 @@ func flattenBeyondcorpAppGatewayDisplayName(v interface{}, d *schema.ResourceDat } func flattenBeyondcorpAppGatewayLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenBeyondcorpAppGatewayState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -423,6 +464,25 @@ func flattenBeyondcorpAppGatewayAllocatedConnectionsIngressPort(v interface{}, d return v // let terraform core handle it otherwise } +func flattenBeyondcorpAppGatewayTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenBeyondcorpAppGatewayEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandBeyondcorpAppGatewayType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -435,7 +495,7 @@ func expandBeyondcorpAppGatewayDisplayName(v interface{}, d tpgresource.Terrafor return v, nil } -func expandBeyondcorpAppGatewayLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandBeyondcorpAppGatewayEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/beyondcorp/resource_beyondcorp_app_gateway_generated_test.go b/google/services/beyondcorp/resource_beyondcorp_app_gateway_generated_test.go index a4e9f27f9f7..e7eb64e62a4 100644 --- a/google/services/beyondcorp/resource_beyondcorp_app_gateway_generated_test.go +++ 
b/google/services/beyondcorp/resource_beyondcorp_app_gateway_generated_test.go @@ -49,7 +49,7 @@ func TestAccBeyondcorpAppGateway_beyondcorpAppGatewayBasicExample(t *testing.T) ResourceName: "google_beyondcorp_app_gateway.app_gateway", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) @@ -85,7 +85,7 @@ func TestAccBeyondcorpAppGateway_beyondcorpAppGatewayFullExample(t *testing.T) { ResourceName: "google_beyondcorp_app_gateway.app_gateway", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/biglake/resource_biglake_catalog.go b/google/services/biglake/resource_biglake_catalog.go index 96f3c659bcb..bcb4ce08a2d 100644 --- a/google/services/biglake/resource_biglake_catalog.go +++ b/google/services/biglake/resource_biglake_catalog.go @@ -22,6 +22,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -43,6 +44,10 @@ func ResourceBiglakeCatalog() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -254,9 +259,9 @@ func resourceBiglakeCatalogDelete(d *schema.ResourceData, meta interface{}) erro func resourceBiglakeCatalogImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/catalogs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/catalogs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/biglake/resource_biglake_database.go b/google/services/biglake/resource_biglake_database.go index 70625bde8ec..ec1b14aa474 100644 --- a/google/services/biglake/resource_biglake_database.go +++ b/google/services/biglake/resource_biglake_database.go @@ -347,7 +347,7 @@ func resourceBiglakeDatabaseDelete(d *schema.ResourceData, meta interface{}) err func resourceBiglakeDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/databases/(?P[^/]+)", + "^(?P.+)/databases/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/biglake/resource_biglake_table.go b/google/services/biglake/resource_biglake_table.go index 794f1c721ab..6dc714abd47 100644 --- a/google/services/biglake/resource_biglake_table.go +++ b/google/services/biglake/resource_biglake_table.go @@ -386,7 +386,7 @@ func resourceBiglakeTableDelete(d *schema.ResourceData, meta interface{}) error func resourceBiglakeTableImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/tables/(?P[^/]+)", + "^(?P.+)/tables/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git 
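The import patterns in the beyondcorp and biglake resources above are now anchored with `^` and `$`. Without anchors, Go's regexp engine accepts any substring match, so a malformed or over-long import ID can still satisfy one of the looser fallback patterns and populate the wrong fields. A small illustration of the difference using plain `regexp` (the provider routes these patterns through its own `ParseImportId` helper; the ID below is made up):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A hypothetical, malformed import ID with an unexpected trailing segment.
	id := "projects/my-proj/locations/us-central1/appGateways/gw/unexpected-suffix"

	unanchored := regexp.MustCompile(`projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/appGateways/(?P<name>[^/]+)`)
	anchored := regexp.MustCompile(`^projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/appGateways/(?P<name>[^/]+)$`)

	fmt.Println(unanchored.MatchString(id)) // true: a prefix of the ID matches
	fmt.Println(anchored.MatchString(id))   // false: the whole ID must match
}
```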
a/google/services/bigquery/data_source_google_bigquery_default_service_account.go b/google/services/bigquery/data_source_google_bigquery_default_service_account.go index 2e5eb0e6043..4519b76ddc1 100644 --- a/google/services/bigquery/data_source_google_bigquery_default_service_account.go +++ b/google/services/bigquery/data_source_google_bigquery_default_service_account.go @@ -45,7 +45,7 @@ func dataSourceGoogleBigqueryDefaultServiceAccountRead(d *schema.ResourceData, m projectResource, err := config.NewBigQueryClient(userAgent).Projects.GetServiceAccount(project).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "BigQuery service account not found") + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Project %q BigQuery service account", project), fmt.Sprintf("Project %q BigQuery service account", project)) } d.SetId(projectResource.Email) diff --git a/google/services/bigquery/resource_bigquery_dataset.go b/google/services/bigquery/resource_bigquery_dataset.go index 0c3ff4deb21..2032ec3f929 100644 --- a/google/services/bigquery/resource_bigquery_dataset.go +++ b/google/services/bigquery/resource_bigquery_dataset.go @@ -24,6 +24,7 @@ import ( "regexp" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -75,6 +76,11 @@ func ResourceBigQueryDataset() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dataset_id": { Type: schema.TypeString, @@ -185,10 +191,13 @@ case-sensitive. This field does not affect routine references.`, }, "labels": { Type: schema.TypeMap, - Computed: true, Optional: true, Description: `The labels associated with this dataset. You can use these to -organize and group your datasets`, +organize and group your datasets. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
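Among the BigQuery changes above, the default service account data source swaps `HandleNotFoundError` for `HandleDataSourceNotFoundError`: a data source has no state to clear on a 404, so a missing object should surface as an error rather than a silently empty result. A rough sketch of that distinction, assuming the standard `googleapi.Error` type that the provider's transport helpers inspect (the helper names below are illustrative):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/api/googleapi"
)

// isNotFound reports whether err is an HTTP 404 from a Google API call.
func isNotFound(err error) bool {
	var gerr *googleapi.Error
	return errors.As(err, &gerr) && gerr.Code == 404
}

// readDataSource sketches the new behaviour: a data source that cannot find
// its object fails loudly instead of silently returning nothing.
func readDataSource(lookup func() (string, error)) (string, error) {
	v, err := lookup()
	if isNotFound(err) {
		return "", fmt.Errorf("BigQuery service account not found: %w", err)
	}
	return v, err
}

func main() {
	_, err := readDataSource(func() (string, error) {
		return "", &googleapi.Error{Code: 404, Message: "not found"}
	})
	fmt.Println(err) // prints a wrapped "not found" error instead of succeeding with empty data
}
```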
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { @@ -232,6 +241,12 @@ LOGICAL is the default if this flag isn't specified.`, Description: `The time when this dataset was created, in milliseconds since the epoch.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -243,6 +258,13 @@ epoch.`, Description: `The date when this dataset or any of its tables was last modified, in milliseconds since the epoch.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "delete_contents_on_destroy": { Type: schema.TypeBool, Optional: true, @@ -467,12 +489,6 @@ func resourceBigQueryDatasetCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("friendly_name"); ok || !reflect.DeepEqual(v, friendlyNameProp) { obj["friendlyName"] = friendlyNameProp } - labelsProp, err := expandBigQueryDatasetLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } locationProp, err := expandBigQueryDatasetLocation(d.Get("location"), d, config) if err != nil { return err @@ -503,6 +519,12 @@ func resourceBigQueryDatasetCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("storage_billing_model"); !tpgresource.IsEmptyValue(reflect.ValueOf(storageBillingModelProp)) && (ok || !reflect.DeepEqual(v, storageBillingModelProp)) { obj["storageBillingModel"] = storageBillingModelProp } + labelsProp, err := expandBigQueryDatasetEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BigQueryBasePath}}projects/{{project}}/datasets") if err != nil { @@ -654,6 +676,12 @@ func resourceBigQueryDatasetRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("storage_billing_model", flattenBigQueryDatasetStorageBillingModel(res["storageBillingModel"], d, config)); err != nil { return fmt.Errorf("Error reading Dataset: %s", err) } + if err := d.Set("terraform_labels", flattenBigQueryDatasetTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Dataset: %s", err) + } + if err := d.Set("effective_labels", flattenBigQueryDatasetEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Dataset: %s", err) + } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading Dataset: %s", err) } @@ -719,12 +747,6 @@ func resourceBigQueryDatasetUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("friendly_name"); ok || !reflect.DeepEqual(v, friendlyNameProp) { 
obj["friendlyName"] = friendlyNameProp } - labelsProp, err := expandBigQueryDatasetLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } locationProp, err := expandBigQueryDatasetLocation(d.Get("location"), d, config) if err != nil { return err @@ -755,6 +777,12 @@ func resourceBigQueryDatasetUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("storage_billing_model"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, storageBillingModelProp)) { obj["storageBillingModel"] = storageBillingModelProp } + labelsProp, err := expandBigQueryDatasetEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{BigQueryBasePath}}projects/{{project}}/datasets/{{dataset_id}}") if err != nil { @@ -835,9 +863,9 @@ func resourceBigQueryDatasetDelete(d *schema.ResourceData, meta interface{}) err func resourceBigQueryDatasetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/datasets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/datasets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1087,7 +1115,18 @@ func flattenBigQueryDatasetFriendlyName(v interface{}, d *schema.ResourceData, c } func flattenBigQueryDatasetLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenBigQueryDatasetLastModifiedTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1146,6 +1185,25 @@ func flattenBigQueryDatasetStorageBillingModel(v interface{}, d *schema.Resource return v } +func flattenBigQueryDatasetTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenBigQueryDatasetEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandBigQueryDatasetMaxTimeTravelHours(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1428,17 +1486,6 @@ func expandBigQueryDatasetFriendlyName(v interface{}, d tpgresource.TerraformRes return v, nil } -func expandBigQueryDatasetLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = 
val.(string) - } - return m, nil -} - func expandBigQueryDatasetLocation(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1477,3 +1524,14 @@ func expandBigQueryDatasetDefaultCollation(v interface{}, d tpgresource.Terrafor func expandBigQueryDatasetStorageBillingModel(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandBigQueryDatasetEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/bigquery/resource_bigquery_dataset_access.go b/google/services/bigquery/resource_bigquery_dataset_access.go index 1cdb89fe4a3..f633b0cfd6f 100644 --- a/google/services/bigquery/resource_bigquery_dataset_access.go +++ b/google/services/bigquery/resource_bigquery_dataset_access.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -156,6 +157,10 @@ func ResourceBigQueryDatasetAccess() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dataset_id": { Type: schema.TypeString, diff --git a/google/services/bigquery/resource_bigquery_dataset_generated_test.go b/google/services/bigquery/resource_bigquery_dataset_generated_test.go index 17796199283..eadf8c28030 100644 --- a/google/services/bigquery/resource_bigquery_dataset_generated_test.go +++ b/google/services/bigquery/resource_bigquery_dataset_generated_test.go @@ -47,9 +47,10 @@ func TestAccBigQueryDataset_bigqueryDatasetBasicExample(t *testing.T) { Config: testAccBigQueryDataset_bigqueryDatasetBasicExample(context), }, { - ResourceName: "google_bigquery_dataset.dataset", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.dataset", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -101,9 +102,10 @@ func TestAccBigQueryDataset_bigqueryDatasetWithMaxTimeTravelHoursExample(t *test Config: testAccBigQueryDataset_bigqueryDatasetWithMaxTimeTravelHoursExample(context), }, { - ResourceName: "google_bigquery_dataset.dataset", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.dataset", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -156,9 +158,10 @@ func TestAccBigQueryDataset_bigqueryDatasetAuthorizedDatasetExample(t *testing.T Config: testAccBigQueryDataset_bigqueryDatasetAuthorizedDatasetExample(context), }, { - ResourceName: "google_bigquery_dataset.dataset", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.dataset", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -243,9 +246,10 @@ func TestAccBigQueryDataset_bigqueryDatasetAuthorizedRoutineExample(t *testing.T Config: testAccBigQueryDataset_bigqueryDatasetAuthorizedRoutineExample(context), }, { - ResourceName: 
"google_bigquery_dataset.private", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.private", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -310,9 +314,10 @@ func TestAccBigQueryDataset_bigqueryDatasetCaseInsensitiveNamesExample(t *testin Config: testAccBigQueryDataset_bigqueryDatasetCaseInsensitiveNamesExample(context), }, { - ResourceName: "google_bigquery_dataset.dataset", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.dataset", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -365,9 +370,10 @@ func TestAccBigQueryDataset_bigqueryDatasetDefaultCollationSetExample(t *testing Config: testAccBigQueryDataset_bigqueryDatasetDefaultCollationSetExample(context), }, { - ResourceName: "google_bigquery_dataset.dataset", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.dataset", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/bigquery/resource_bigquery_dataset_test.go b/google/services/bigquery/resource_bigquery_dataset_test.go index 8d34bed23a1..6819b69074f 100644 --- a/google/services/bigquery/resource_bigquery_dataset_test.go +++ b/google/services/bigquery/resource_bigquery_dataset_test.go @@ -24,7 +24,11 @@ func TestAccBigQueryDataset_basic(t *testing.T) { CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccBigQueryDataset(datasetID), + Config: testAccBigQueryDataset_withoutLabels(datasetID), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "labels.%"), + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "effective_labels.%"), + ), }, { ResourceName: "google_bigquery_dataset.test", @@ -32,16 +36,59 @@ func TestAccBigQueryDataset_basic(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBigQueryDatasetUpdated(datasetID), + Config: testAccBigQueryDataset(datasetID), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.env", "foo"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.default_table_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.env", "foo"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.default_table_expiration_ms", "3600000"), + ), }, { ResourceName: "google_bigquery_dataset.test", ImportState: true, ImportStateVerify: true, + // The labels field in the state is decided by the configuration. + // During importing, the configuration is unavailable, so the labels field in the state after importing is empty. 
+ ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + }, + { + Config: testAccBigQueryDatasetUpdated(datasetID), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.env", "bar"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.default_table_expiration_ms", "7200000"), + + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.env", "bar"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.default_table_expiration_ms", "7200000"), + ), + }, + { + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccBigQueryDatasetUpdated2(datasetID), }, + { + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + }, + { + Config: testAccBigQueryDataset_withoutLabels(datasetID), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "labels.%"), + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "effective_labels.%"), + ), + }, { ResourceName: "google_bigquery_dataset.test", ImportState: true, @@ -51,6 +98,90 @@ func TestAccBigQueryDataset_basic(t *testing.T) { }) } +func TestAccBigQueryDataset_withProvider5(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.75.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryDataset_withoutLabels(datasetID), + ExternalProviders: oldVersion, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "labels.%"), + resource.TestCheckNoResourceAttr("google_bigquery_dataset.test", "effective_labels.%"), + ), + }, + { + Config: testAccBigQueryDataset(datasetID), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.env", "foo"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "labels.default_table_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.%", "2"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.env", "foo"), + resource.TestCheckResourceAttr("google_bigquery_dataset.test", "effective_labels.default_table_expiration_ms", "3600000"), + ), + }, + }, + }) +} + +func TestAccBigQueryDataset_withOutOfBandLabels(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + 
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryDatasetDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryDataset(datasetID), + Check: addOutOfBandLabels(t, datasetID), + }, + { + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"delete_contents_on_destroy", "labels", "terraform_labels"}, + }, + { + Config: testAccBigQueryDatasetUpdated(datasetID), + }, + { + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"delete_contents_on_destroy", "labels", "terraform_labels"}, + }, + { + Config: testAccBigQueryDatasetUpdated_withOutOfBandLabels(datasetID), + }, + { + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"delete_contents_on_destroy", "labels", "terraform_labels"}, + }, + }, + }) +} + func TestAccBigQueryDataset_datasetWithContents(t *testing.T) { t.Parallel() @@ -70,7 +201,7 @@ func TestAccBigQueryDataset_datasetWithContents(t *testing.T) { ResourceName: "google_bigquery_dataset.contents_test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"delete_contents_on_destroy"}, + ImportStateVerifyIgnore: []string{"delete_contents_on_destroy", "labels", "terraform_labels"}, }, }, }) @@ -92,33 +223,37 @@ func TestAccBigQueryDataset_access(t *testing.T) { Config: testAccBigQueryDatasetWithOneAccess(datasetID), }, { - ResourceName: "google_bigquery_dataset.access_test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.access_test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccBigQueryDatasetWithTwoAccess(datasetID), }, { - ResourceName: "google_bigquery_dataset.access_test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.access_test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccBigQueryDatasetWithOneAccess(datasetID), }, { - ResourceName: "google_bigquery_dataset.access_test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.access_test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccBigQueryDatasetWithViewAccess(datasetID, otherDatasetID, otherTableID), }, { - ResourceName: "google_bigquery_dataset.access_test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.access_test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -138,9 +273,10 @@ func TestAccBigQueryDataset_regionalLocation(t *testing.T) { Config: testAccBigQueryRegionalDataset(datasetID1, "asia-south1"), }, { - ResourceName: "google_bigquery_dataset.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -183,9 +319,10 @@ func TestAccBigQueryDataset_storageBillModel(t *testing.T) { Config: testAccBigQueryDatasetStorageBillingModel(datasetID), }, { - ResourceName: "google_bigquery_dataset.test", - ImportState: true, - 
ImportStateVerify: true, + ResourceName: "google_bigquery_dataset.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -210,6 +347,38 @@ func testAccAddTable(t *testing.T, datasetID string, tableID string) resource.Te } } +func addOutOfBandLabels(t *testing.T, datasetID string) resource.TestCheckFunc { + // Not actually a check, but adds labels independently of terraform + return func(s *terraform.State) error { + config := acctest.GoogleProviderConfig(t) + + dataset, err := config.NewBigQueryClient(config.UserAgent).Datasets.Get(config.Project, datasetID).Do() + if err != nil { + return fmt.Errorf("Could not get dataset with ID %s", datasetID) + } + + dataset.Labels["outband_key"] = "test" + _, err = config.NewBigQueryClient(config.UserAgent).Datasets.Patch(config.Project, datasetID, dataset).Do() + if err != nil { + return fmt.Errorf("Could not update labele for the dataset") + } + return nil + } +} + +func testAccBigQueryDataset_withoutLabels(datasetID string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" + friendly_name = "foo" + description = "This is a foo description" + location = "EU" + default_partition_expiration_ms = 3600000 + default_table_expiration_ms = 3600000 +} +`, datasetID) +} + func testAccBigQueryDataset(datasetID string) string { return fmt.Sprintf(` resource "google_bigquery_dataset" "test" { @@ -246,6 +415,25 @@ resource "google_bigquery_dataset" "test" { `, datasetID) } +func testAccBigQueryDatasetUpdated_withOutOfBandLabels(datasetID string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" + friendly_name = "bar" + description = "This is a bar description" + location = "EU" + default_partition_expiration_ms = 7200000 + default_table_expiration_ms = 7200000 + + labels = { + env = "bar" + default_table_expiration_ms = 7200000 + outband_key = "test-update" + } +} +`, datasetID) +} + func testAccBigQueryDatasetUpdated2(datasetID string) string { return fmt.Sprintf(` resource "google_bigquery_dataset" "test" { diff --git a/google/services/bigquery/resource_bigquery_job.go b/google/services/bigquery/resource_bigquery_job.go index cb99214f27f..f32cc989ef3 100644 --- a/google/services/bigquery/resource_bigquery_job.go +++ b/google/services/bigquery/resource_bigquery_job.go @@ -24,6 +24,7 @@ import ( "regexp" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -54,6 +55,11 @@ func ResourceBigQueryJob() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "copy": { Type: schema.TypeList, @@ -309,11 +315,15 @@ or of the form 'projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} Description: `Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `The labels associated with this job. You can use these to organize and group your jobs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `The labels associated with this job. You can use these to organize and group your jobs. 
+ + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "load": { Type: schema.TypeList, @@ -878,11 +888,25 @@ Creation, truncation and append actions occur as one atomic update upon job comp }, ExactlyOneOf: []string{"query", "load", "copy", "extract"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "job_type": { Type: schema.TypeString, Computed: true, Description: `The type of the job.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "job_id": { Type: schema.TypeString, @@ -1186,12 +1210,12 @@ func resourceBigQueryJobDelete(d *schema.ResourceData, meta interface{}) error { func resourceBigQueryJobImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/jobs/(?P[^/]+)/location/(?P[^/]+)", - "projects/(?P[^/]+)/jobs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/jobs/(?P[^/]+)/location/(?P[^/]+)$", + "^projects/(?P[^/]+)/jobs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1233,6 +1257,10 @@ func flattenBigQueryJobConfiguration(v interface{}, d *schema.ResourceData, conf flattenBigQueryJobConfigurationCopy(original["copy"], d, config) transformed["extract"] = flattenBigQueryJobConfigurationExtract(original["extract"], d, config) + transformed["terraform_labels"] = + flattenBigQueryJobConfigurationTerraformLabels(original["labels"], d, config) + transformed["effective_labels"] = + flattenBigQueryJobConfigurationEffectiveLabels(original["labels"], d, config) return []interface{}{transformed} } func flattenBigQueryJobConfigurationJobType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1244,7 +1272,18 @@ func flattenBigQueryJobConfigurationJobTimeoutMs(v interface{}, d *schema.Resour } func flattenBigQueryJobConfigurationLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenBigQueryJobConfigurationQuery(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1919,6 +1958,25 @@ func flattenBigQueryJobConfigurationExtractSourceModelModelId(v interface{}, d * return v } +func flattenBigQueryJobConfigurationTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + 
for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenBigQueryJobConfigurationEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenBigQueryJobJobReference(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -2040,13 +2098,6 @@ func expandBigQueryJobConfiguration(v interface{}, d tpgresource.TerraformResour transformed["jobTimeoutMs"] = transformedJobTimeoutMs } - transformedLabels, err := expandBigQueryJobConfigurationLabels(d.Get("labels"), d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["labels"] = transformedLabels - } - transformedQuery, err := expandBigQueryJobConfigurationQuery(d.Get("query"), d, config) if err != nil { return nil, err @@ -2075,6 +2126,13 @@ func expandBigQueryJobConfiguration(v interface{}, d tpgresource.TerraformResour transformed["extract"] = transformedExtract } + transformedEffectiveLabels, err := expandBigQueryJobConfigurationEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEffectiveLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["labels"] = transformedEffectiveLabels + } + return transformed, nil } @@ -2086,17 +2144,6 @@ func expandBigQueryJobConfigurationJobTimeoutMs(v interface{}, d tpgresource.Ter return v, nil } -func expandBigQueryJobConfigurationLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandBigQueryJobConfigurationQuery(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -3154,6 +3201,17 @@ func expandBigQueryJobConfigurationExtractSourceModelModelId(v interface{}, d tp return v, nil } +func expandBigQueryJobConfigurationEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func expandBigQueryJobJobReference(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { transformed := make(map[string]interface{}) transformedJobId, err := expandBigQueryJobJobReferenceJobId(d.Get("job_id"), d, config) diff --git a/google/services/bigquery/resource_bigquery_job_generated_test.go b/google/services/bigquery/resource_bigquery_job_generated_test.go index 1689301deec..0ef144ca057 100644 --- a/google/services/bigquery/resource_bigquery_job_generated_test.go +++ b/google/services/bigquery/resource_bigquery_job_generated_test.go @@ -44,7 +44,7 @@ func TestAccBigQueryJob_bigqueryJobQueryExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ 
-110,7 +110,7 @@ func TestAccBigQueryJob_bigqueryJobQueryTableReferenceExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "query.0.default_dataset.0.dataset_id", "query.0.destination_table.0.table_id", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "query.0.default_dataset.0.dataset_id", "query.0.destination_table.0.table_id", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -178,7 +178,7 @@ func TestAccBigQueryJob_bigqueryJobLoadExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -246,7 +246,7 @@ func TestAccBigQueryJob_bigqueryJobLoadGeojsonExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -333,7 +333,7 @@ func TestAccBigQueryJob_bigqueryJobLoadParquetExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -416,7 +416,7 @@ func TestAccBigQueryJob_bigqueryJobLoadTableReferenceExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "load.0.destination_table.0.table_id", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "load.0.destination_table.0.table_id", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -482,7 +482,7 @@ func TestAccBigQueryJob_bigqueryJobCopyExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -637,7 +637,7 @@ func TestAccBigQueryJob_bigqueryJobCopyTableReferenceExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "copy.0.destination_table.0.table_id", "copy.0.source_tables.0.table_id", "copy.0.source_tables.1.table_id", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "copy.0.destination_table.0.table_id", "copy.0.source_tables.0.table_id", "copy.0.source_tables.1.table_id", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -785,7 +785,7 @@ func TestAccBigQueryJob_bigqueryJobExtractExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) @@ -869,7 +869,7 @@ func TestAccBigQueryJob_bigqueryJobExtractTableReferenceExample(t *testing.T) { ResourceName: "google_bigquery_job.job", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "extract.0.source_table.0.table_id", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "extract.0.source_table.0.table_id", 
"status.0.state", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/bigquery/resource_bigquery_job_test.go b/google/services/bigquery/resource_bigquery_job_test.go index ec3e34abce0..2dfff8156e9 100644 --- a/google/services/bigquery/resource_bigquery_job_test.go +++ b/google/services/bigquery/resource_bigquery_job_test.go @@ -34,7 +34,7 @@ func TestAccBigQueryJob_withLocation(t *testing.T) { ImportStateId: importID, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "status.0.state"}, + ImportStateVerifyIgnore: []string{"etag", "status.0.state", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/bigquery/resource_bigquery_routine.go b/google/services/bigquery/resource_bigquery_routine.go index f80c78dd538..a8a657a6dd5 100644 --- a/google/services/bigquery/resource_bigquery_routine.go +++ b/google/services/bigquery/resource_bigquery_routine.go @@ -24,6 +24,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -52,6 +53,10 @@ func ResourceBigQueryRoutine() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "definition_body": { Type: schema.TypeString, @@ -72,6 +77,13 @@ If language=SQL, it is the substring inside (but excluding) the parentheses.`, Description: `The ID of the the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.`, }, + "routine_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"SCALAR_FUNCTION", "PROCEDURE", "TABLE_VALUED_FUNCTION"}), + Description: `The type of routine. Possible values: ["SCALAR_FUNCTION", "PROCEDURE", "TABLE_VALUED_FUNCTION"]`, + }, "arguments": { Type: schema.TypeList, Optional: true, @@ -164,13 +176,6 @@ d the order of values or replaced STRUCT field type with RECORD field type, we c cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.`, }, - "routine_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"SCALAR_FUNCTION", "PROCEDURE", "TABLE_VALUED_FUNCTION", ""}), - Description: `The type of routine. 
Possible values: ["SCALAR_FUNCTION", "PROCEDURE", "TABLE_VALUED_FUNCTION"]`, - }, "creation_time": { Type: schema.TypeInt, Computed: true, @@ -555,9 +560,9 @@ func resourceBigQueryRoutineDelete(d *schema.ResourceData, meta interface{}) err func resourceBigQueryRoutineImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/datasets/(?P[^/]+)/routines/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/datasets/(?P[^/]+)/routines/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigquery/resource_bigquery_table.go b/google/services/bigquery/resource_bigquery_table.go index 7223dbe5c2d..be0d6175be4 100644 --- a/google/services/bigquery/resource_bigquery_table.go +++ b/google/services/bigquery/resource_bigquery_table.go @@ -396,6 +396,32 @@ func resourceBigQueryTableSchemaCustomizeDiff(_ context.Context, d *schema.Resou return resourceBigQueryTableSchemaCustomizeDiffFunc(d) } +func validateBigQueryTableSchema(v interface{}, k string) (warnings []string, errs []error) { + if v == nil { + return + } + + if _, e := validation.StringIsJSON(v, k); e != nil { + errs = append(errs, e...) + return + } + + var jsonList []interface{} + if err := json.Unmarshal([]byte(v.(string)), &jsonList); err != nil { + errs = append(errs, fmt.Errorf("\"schema\" is not a JSON array: %s", err)) + return + } + + for _, v := range jsonList { + if v == nil { + errs = append(errs, errors.New("\"schema\" contains a nil element")) + return + } + } + + return +} + func ResourceBigQueryTable() *schema.Resource { return &schema.Resource{ Create: resourceBigQueryTableCreate, @@ -406,7 +432,9 @@ func ResourceBigQueryTable() *schema.Resource { State: resourceBigQueryTableImport, }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, resourceBigQueryTableSchemaCustomizeDiff, + tpgresource.SetLabelsDiff, ), Schema: map[string]*schema.Schema{ // TableId: [Required] The ID of the table. The ID must contain only @@ -510,7 +538,7 @@ func ResourceBigQueryTable() *schema.Resource { Optional: true, Computed: true, ForceNew: true, - ValidateFunc: validation.StringIsJSON, + ValidateFunc: validateBigQueryTableSchema, StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v) return json @@ -774,26 +802,43 @@ func ResourceBigQueryTable() *schema.Resource { // start with a letter and each label in the list must have a different // key. "labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A mapping of labels to assign to the resource. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + "terraform_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A mapping of labels to assign to the resource.`, }, - // Schema: [Optional] Describes the schema of this table. + // Schema is mutually exclusive with View and Materialized View. "schema": { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validation.StringIsJSON, + ValidateFunc: validateBigQueryTableSchema, StateFunc: func(v interface{}) string { json, _ := structure.NormalizeJsonString(v) return json }, DiffSuppressFunc: bigQueryTableSchemaDiffSuppress, Description: `A JSON schema for the table.`, + ConflictsWith: []string{"view", "materialized_view"}, }, // View: [Optional] If specified, configures this table as a view. + // View is mutually exclusive with Schema and Materialized View. "view": { Type: schema.TypeList, Optional: true, @@ -820,9 +865,11 @@ func ResourceBigQueryTable() *schema.Resource { }, }, }, + ConflictsWith: []string{"schema", "materialized_view"}, }, // Materialized View: [Optional] If specified, configures this table as a materialized view. + // Materialized View is mutually exclusive with Schema and View. "materialized_view": { Type: schema.TypeList, Optional: true, @@ -867,6 +914,7 @@ func ResourceBigQueryTable() *schema.Resource { }, }, }, + ConflictsWith: []string{"schema", "view"}, }, // TimePartitioning: [Experimental] If specified, configures time-based @@ -1258,7 +1306,7 @@ func resourceTable(d *schema.ResourceData, meta interface{}) (*bigquery.Table, e } } - if v, ok := d.GetOk("labels"); ok { + if v, ok := d.GetOk("effective_labels"); ok { labels := map[string]string{} for k, v := range v.(map[string]interface{}) { @@ -1330,41 +1378,16 @@ func resourceBigQueryTableCreate(d *schema.ResourceData, meta interface{}) error datasetID := d.Get("dataset_id").(string) - if table.View != nil && table.Schema != nil { - - log.Printf("[INFO] Removing schema from table definition because big query does not support setting schema on view creation") - schemaBack := table.Schema - table.Schema = nil - - log.Printf("[INFO] Creating BigQuery table: %s without schema", table.TableReference.TableId) - - res, err := config.NewBigQueryClient(userAgent).Tables.Insert(project, datasetID, table).Do() - if err != nil { - return err - } - - log.Printf("[INFO] BigQuery table %s has been created", res.Id) - d.SetId(fmt.Sprintf("projects/%s/datasets/%s/tables/%s", res.TableReference.ProjectId, res.TableReference.DatasetId, res.TableReference.TableId)) - - table.Schema = schemaBack - log.Printf("[INFO] Updating BigQuery table: %s with schema", table.TableReference.TableId) - if _, err = config.NewBigQueryClient(userAgent).Tables.Update(project, datasetID, res.TableReference.TableId, table).Do(); err != nil { - return err - } + log.Printf("[INFO] Creating BigQuery table: %s", table.TableReference.TableId) - log.Printf("[INFO] BigQuery table %s has been update with schema", res.Id) - } else { - log.Printf("[INFO] Creating 
BigQuery table: %s", table.TableReference.TableId) - - res, err := config.NewBigQueryClient(userAgent).Tables.Insert(project, datasetID, table).Do() - if err != nil { - return err - } - - log.Printf("[INFO] BigQuery table %s has been created", res.Id) - d.SetId(fmt.Sprintf("projects/%s/datasets/%s/tables/%s", res.TableReference.ProjectId, res.TableReference.DatasetId, res.TableReference.TableId)) + res, err := config.NewBigQueryClient(userAgent).Tables.Insert(project, datasetID, table).Do() + if err != nil { + return err } + log.Printf("[INFO] BigQuery table %s has been created", res.Id) + d.SetId(fmt.Sprintf("projects/%s/datasets/%s/tables/%s", res.TableReference.ProjectId, res.TableReference.DatasetId, res.TableReference.TableId)) + return resourceBigQueryTableRead(d, meta) } @@ -1405,9 +1428,15 @@ func resourceBigQueryTableRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("max_staleness", res.MaxStaleness); err != nil { return fmt.Errorf("Error setting max_staleness: %s", err) } - if err := d.Set("labels", res.Labels); err != nil { + if err := tpgresource.SetLabels(res.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(res.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err := d.Set("creation_time", res.CreationTime); err != nil { return fmt.Errorf("Error setting creation_time: %s", err) } diff --git a/google/services/bigquery/resource_bigquery_table_test.go b/google/services/bigquery/resource_bigquery_table_test.go index d6ce371dbc2..e6d6a536ba1 100644 --- a/google/services/bigquery/resource_bigquery_table_test.go +++ b/google/services/bigquery/resource_bigquery_table_test.go @@ -456,22 +456,8 @@ func TestAccBigQueryTable_WithViewAndSchema(t *testing.T) { CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccBigQueryTableWithViewAndSchema(datasetID, tableID, "table description1"), - }, - { - ResourceName: "google_bigquery_table.test", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection"}, - }, - { - Config: testAccBigQueryTableWithViewAndSchema(datasetID, tableID, "table description2"), - }, - { - ResourceName: "google_bigquery_table.test", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection"}, + Config: testAccBigQueryTableWithViewAndSchema(datasetID, tableID, "table description"), + ExpectError: regexp.MustCompile("\"view\": conflicts with schema"), }, }, }) @@ -609,6 +595,51 @@ func TestAccBigQueryTable_MaterializedView_NonIncremental_basic(t *testing.T) { }) } +func TestAccBigQueryTable_MaterializedView_WithSchema(t *testing.T) { + t.Parallel() + // Pending VCR support in https://github.com/hashicorp/terraform-provider-google/issues/15427. 
+ acctest.SkipIfVcr(t) + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + materializedViewID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + query := fmt.Sprintf("SELECT some_int FROM `%s.%s`", datasetID, tableID) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableWithMatViewAndSchema(datasetID, tableID, materializedViewID, query), + ExpectError: regexp.MustCompile("\"materialized_view\": conflicts with schema"), + }, + }, + }) +} + +func TestAccBigQueryTable_MaterializedView_WithView(t *testing.T) { + t.Parallel() + acctest.SkipIfVcr(t) + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + materializedViewID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + query := fmt.Sprintf("SELECT some_int FROM `%s.%s`", datasetID, tableID) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableWithMatViewAndView(datasetID, tableID, materializedViewID, query), + ExpectError: regexp.MustCompile("\"materialized_view\": conflicts with view"), + }, + }, + }) +} + func TestAccBigQueryExternalDataTable_parquet(t *testing.T) { t.Parallel() @@ -905,6 +936,36 @@ func TestAccBigQueryExternalDataTable_CSV(t *testing.T) { }) } +func TestAccBigQueryExternalDataTable_CSV_WithSchema_InvalidSchemas(t *testing.T) { + t.Parallel() + + bucketName := acctest.TestBucketName(t) + objectName := fmt.Sprintf("tf_test_%s.csv", acctest.RandString(t, 10)) + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableFromGCSWithExternalDataConfigSchema(datasetID, tableID, bucketName, objectName, TEST_SIMPLE_CSV, TEST_INVALID_SCHEMA_NOT_JSON), + ExpectError: regexp.MustCompile("contains an invalid JSON"), + }, + { + Config: testAccBigQueryTableFromGCSWithExternalDataConfigSchema(datasetID, tableID, bucketName, objectName, TEST_SIMPLE_CSV, TEST_INVALID_SCHEMA_NOT_JSON_LIST), + ExpectError: regexp.MustCompile("\"schema\" is not a JSON array"), + }, + { + Config: testAccBigQueryTableFromGCSWithExternalDataConfigSchema(datasetID, tableID, bucketName, objectName, TEST_SIMPLE_CSV, TEST_INVALID_SCHEMA_JSON_LIST_WITH_NULL_ELEMENT), + ExpectError: regexp.MustCompile("\"schema\" contains a nil element"), + }, + }, + }) +} + func TestAccBigQueryExternalDataTable_CSV_WithSchemaAndConnectionID_UpdateNoConnectionID(t *testing.T) { t.Parallel() @@ -1097,7 +1158,7 @@ func TestAccBigQueryDataTable_jsonEquivalency(t *testing.T) { ResourceName: "google_bigquery_table.test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"etag", 
"last_modified_time", "deletion_protection", "labels", "terraform_labels"}, }, { Config: testAccBigQueryTable_jsonEqModeRemoved(datasetID, tableID), @@ -1106,7 +1167,7 @@ func TestAccBigQueryDataTable_jsonEquivalency(t *testing.T) { ResourceName: "google_bigquery_table.test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection", "labels", "terraform_labels"}, }, }, }) @@ -1156,7 +1217,7 @@ func TestAccBigQueryDataTable_expandArray(t *testing.T) { ResourceName: "google_bigquery_table.test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection", "labels", "terraform_labels"}, }, { Config: testAccBigQueryTable_arrayExpanded(datasetID, tableID), @@ -1165,7 +1226,7 @@ func TestAccBigQueryDataTable_expandArray(t *testing.T) { ResourceName: "google_bigquery_table.test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"etag", "last_modified_time", "deletion_protection", "labels", "terraform_labels"}, }, }, }) @@ -1189,7 +1250,7 @@ func TestAccBigQueryTable_allowDestroy(t *testing.T) { ResourceName: "google_bigquery_table.test", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection"}, + ImportStateVerifyIgnore: []string{"deletion_protection", "labels", "terraform_labels"}, }, { Config: testAccBigQueryTable_noAllowDestroy(datasetID, tableID), @@ -1372,6 +1433,35 @@ func TestAccBigQueryTable_Update_SchemaWithPolicyTagsToEmptyPolicyTagNames(t *te }) } +func TestAccBigQueryTable_invalidSchemas(t *testing.T) { + t.Parallel() + // Pending VCR support in https://github.com/hashicorp/terraform-provider-google/issues/15427. 
+ acctest.SkipIfVcr(t) + + datasetID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + tableID := fmt.Sprintf("tf_test_%s", acctest.RandString(t, 10)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckBigQueryTableDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccBigQueryTableWithSchema(datasetID, tableID, TEST_INVALID_SCHEMA_NOT_JSON), + ExpectError: regexp.MustCompile("contains an invalid JSON"), + }, + { + Config: testAccBigQueryTableWithSchema(datasetID, tableID, TEST_INVALID_SCHEMA_NOT_JSON_LIST), + ExpectError: regexp.MustCompile("\"schema\" is not a JSON array"), + }, + { + Config: testAccBigQueryTableWithSchema(datasetID, tableID, TEST_INVALID_SCHEMA_JSON_LIST_WITH_NULL_ELEMENT), + ExpectError: regexp.MustCompile("\"schema\" contains a nil element"), + }, + }, + }) +} + func testAccCheckBigQueryExtData(t *testing.T, expectedQuoteChar string) resource.TestCheckFunc { return func(s *terraform.State) error { for _, rs := range s.RootModule().Resources { @@ -2086,7 +2176,57 @@ resource "google_bigquery_table" "mv_test" { `, datasetID, tableID, mViewID, enable_refresh, refresh_interval, query) } -func testAccBigQueryTableWithMatViewNonIncremental_basic(datasetID, tableID, mViewID, query, maxStaleness string) string { +func testAccBigQueryTableWithMatViewAndSchema(datasetID, tableID, mViewID, query string) string { + return fmt.Sprintf(` +resource "google_bigquery_dataset" "test" { + dataset_id = "%s" +} + +resource "google_bigquery_table" "test" { + deletion_protection = false + table_id = "%s" + dataset_id = google_bigquery_dataset.test.dataset_id + + schema = <[^/]+)/locations/(?P[^/]+)/dataExchanges/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/dataExchanges/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigqueryanalyticshub/resource_bigquery_analytics_hub_listing.go b/google/services/bigqueryanalyticshub/resource_bigquery_analytics_hub_listing.go index f33262c3cdc..ff45840d781 100644 --- a/google/services/bigqueryanalyticshub/resource_bigquery_analytics_hub_listing.go +++ b/google/services/bigqueryanalyticshub/resource_bigquery_analytics_hub_listing.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceBigqueryAnalyticsHubListing() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "bigquery_dataset": { Type: schema.TypeList, @@ -572,9 +577,9 @@ func resourceBigqueryAnalyticsHubListingDelete(d *schema.ResourceData, meta inte func resourceBigqueryAnalyticsHubListingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/dataExchanges/(?P[^/]+)/listings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + 
"^projects/(?P[^/]+)/locations/(?P[^/]+)/dataExchanges/(?P[^/]+)/listings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigqueryconnection/resource_bigquery_connection.go b/google/services/bigqueryconnection/resource_bigquery_connection.go index bbc851a07c6..010c8f2f243 100644 --- a/google/services/bigqueryconnection/resource_bigquery_connection.go +++ b/google/services/bigqueryconnection/resource_bigquery_connection.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceBigqueryConnectionConnection() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "aws": { Type: schema.TypeList, @@ -649,9 +654,9 @@ func resourceBigqueryConnectionConnectionDelete(d *schema.ResourceData, meta int func resourceBigqueryConnectionConnectionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/connections/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/connections/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigquerydatapolicy/resource_bigquery_datapolicy_data_policy.go b/google/services/bigquerydatapolicy/resource_bigquery_datapolicy_data_policy.go index c73d0a939c5..58b00c3c20c 100644 --- a/google/services/bigquerydatapolicy/resource_bigquery_datapolicy_data_policy.go +++ b/google/services/bigquerydatapolicy/resource_bigquery_datapolicy_data_policy.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceBigqueryDatapolicyDataPolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "data_policy_id": { Type: schema.TypeString, @@ -377,9 +382,9 @@ func resourceBigqueryDatapolicyDataPolicyDelete(d *schema.ResourceData, meta int func resourceBigqueryDatapolicyDataPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/dataPolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/dataPolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigquerydatatransfer/resource_bigquery_data_transfer_config.go b/google/services/bigquerydatatransfer/resource_bigquery_data_transfer_config.go index 408250605a8..dbb6304ef4f 100644 --- a/google/services/bigquerydatatransfer/resource_bigquery_data_transfer_config.go +++ 
b/google/services/bigquerydatatransfer/resource_bigquery_data_transfer_config.go @@ -114,6 +114,7 @@ func ResourceBigqueryDataTransferConfig() *schema.Resource { CustomizeDiff: customdiff.All( sensitiveParamCustomizeDiff, paramsCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ diff --git a/google/services/bigqueryreservation/resource_bigquery_bi_reservation.go b/google/services/bigqueryreservation/resource_bigquery_bi_reservation.go index 6fc99c95de8..c4f36be3d3f 100644 --- a/google/services/bigqueryreservation/resource_bigquery_bi_reservation.go +++ b/google/services/bigqueryreservation/resource_bigquery_bi_reservation.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceBigqueryReservationBiReservation() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -370,9 +375,9 @@ func resourceBigqueryReservationBiReservationDelete(d *schema.ResourceData, meta func resourceBigqueryReservationBiReservationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/biReservation", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/biReservation$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigqueryreservation/resource_bigquery_capacity_commitment.go b/google/services/bigqueryreservation/resource_bigquery_capacity_commitment.go index 9a5a010dfb6..27f9f5f5e83 100644 --- a/google/services/bigqueryreservation/resource_bigquery_capacity_commitment.go +++ b/google/services/bigqueryreservation/resource_bigquery_capacity_commitment.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -54,6 +55,10 @@ func ResourceBigqueryReservationCapacityCommitment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "plan": { Type: schema.TypeString, @@ -403,9 +408,9 @@ func resourceBigqueryReservationCapacityCommitmentDelete(d *schema.ResourceData, func resourceBigqueryReservationCapacityCommitmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/capacityCommitments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/capacityCommitments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigqueryreservation/resource_bigquery_reservation.go b/google/services/bigqueryreservation/resource_bigquery_reservation.go index 5f90fc14fa2..59e3959c49a 100644 --- 
a/google/services/bigqueryreservation/resource_bigquery_reservation.go +++ b/google/services/bigqueryreservation/resource_bigquery_reservation.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceBigqueryReservationReservation() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -430,9 +435,9 @@ func resourceBigqueryReservationReservationDelete(d *schema.ResourceData, meta i func resourceBigqueryReservationReservationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/reservations/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/reservations/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigqueryreservation/resource_bigquery_reservation_assignment.go b/google/services/bigqueryreservation/resource_bigquery_reservation_assignment.go index b0bb5d2fd30..2e38f072c29 100644 --- a/google/services/bigqueryreservation/resource_bigquery_reservation_assignment.go +++ b/google/services/bigqueryreservation/resource_bigquery_reservation_assignment.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,9 @@ func ResourceBigqueryReservationAssignment() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "assignee": { diff --git a/google/services/bigtable/resource_bigtable_app_profile.go b/google/services/bigtable/resource_bigtable_app_profile.go index 64d2dd63257..38358cb0244 100644 --- a/google/services/bigtable/resource_bigtable_app_profile.go +++ b/google/services/bigtable/resource_bigtable_app_profile.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/bigtableadmin/v2" @@ -48,6 +49,10 @@ func ResourceBigtableAppProfile() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "app_profile_id": { Type: schema.TypeString, @@ -427,9 +432,9 @@ func resourceBigtableAppProfileDelete(d *schema.ResourceData, meta interface{}) func resourceBigtableAppProfileImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/instances/(?P[^/]+)/appProfiles/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/instances/(?P[^/]+)/appProfiles/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + 
"^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/bigtable/resource_bigtable_instance.go b/google/services/bigtable/resource_bigtable_instance.go index 6d1415c62ce..3a049760ceb 100644 --- a/google/services/bigtable/resource_bigtable_instance.go +++ b/google/services/bigtable/resource_bigtable_instance.go @@ -37,8 +37,10 @@ func ResourceBigtableInstance() *schema.Resource { }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, resourceBigtableInstanceClusterReorderTypeList, resourceBigtableInstanceUniqueClusterID, + tpgresource.SetLabelsDiff, ), SchemaVersion: 1, @@ -161,10 +163,27 @@ func ResourceBigtableInstance() *schema.Resource { }, "labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A mapping of labels to assign to the resource. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A mapping of labels to assign to the resource.`, }, "project": { @@ -203,8 +222,8 @@ func resourceBigtableInstanceCreate(d *schema.ResourceData, meta interface{}) er } conf.DisplayName = displayName.(string) - if _, ok := d.GetOk("labels"); ok { - conf.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + conf.Labels = tpgresource.ExpandEffectiveLabels(d) } switch d.Get("instance_type").(string) { @@ -312,9 +331,15 @@ func resourceBigtableInstanceRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("display_name", instance.DisplayName); err != nil { return fmt.Errorf("Error setting display_name: %s", err) } - if err := d.Set("labels", instance.Labels); err != nil { + if err := tpgresource.SetLabels(instance.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(instance.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", instance.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } // Don't set instance_type: we don't want to detect drift on it because it can // change under-the-hood. 
@@ -350,8 +375,8 @@ func resourceBigtableInstanceUpdate(d *schema.ResourceData, meta interface{}) er } conf.DisplayName = displayName.(string) - if d.HasChange("labels") { - conf.Labels = tpgresource.ExpandLabels(d) + if d.HasChange("effective_labels") { + conf.Labels = tpgresource.ExpandEffectiveLabels(d) } switch d.Get("instance_type").(string) { diff --git a/google/services/bigtable/resource_bigtable_instance_test.go b/google/services/bigtable/resource_bigtable_instance_test.go index 7b9b8a0c30c..0d217c0151f 100644 --- a/google/services/bigtable/resource_bigtable_instance_test.go +++ b/google/services/bigtable/resource_bigtable_instance_test.go @@ -91,7 +91,7 @@ func TestAccBigtableInstance_cluster(t *testing.T) { ResourceName: "google_bigtable_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type", "cluster"}, // we don't read instance type back + ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type", "cluster", "labels", "terraform_labels"}, // we don't read instance type back }, { Config: testAccBigtableInstance_clusterReordered(instanceName, 5), @@ -110,7 +110,7 @@ func TestAccBigtableInstance_cluster(t *testing.T) { ResourceName: "google_bigtable_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type", "cluster"}, // we don't read instance type back + ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type", "cluster", "labels", "terraform_labels"}, // we don't read instance type back }, }, }) @@ -430,7 +430,7 @@ func TestAccBigtableInstance_MultipleClustersSameID(t *testing.T) { ResourceName: "google_bigtable_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type"}, // we don't read instance type back + ImportStateVerifyIgnore: []string{"deletion_protection", "instance_type", "labels", "terraform_labels"}, // we don't read instance type back }, { Config: testAccBigtableInstance_multipleClustersSameID(instanceName), @@ -646,7 +646,7 @@ resource "google_bigtable_instance" "instance" { deletion_protection = false labels = { - env = "default" + env = "test" } } `, instanceName, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes) diff --git a/google/services/bigtable/resource_bigtable_table.go b/google/services/bigtable/resource_bigtable_table.go index 5272c99056e..8a933bd4ea9 100644 --- a/google/services/bigtable/resource_bigtable_table.go +++ b/google/services/bigtable/resource_bigtable_table.go @@ -9,6 +9,7 @@ import ( "time" "cloud.google.com/go/bigtable" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -34,6 +35,9 @@ func ResourceBigtableTable() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), // ---------------------------------------------------------------------- // IMPORTANT: Do not add any additional ForceNew fields to this resource. // Destroying/recreating tables can lead to data loss for users. 
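The label handling repeated across these resources follows one pattern: `effective_labels` is set from everything the API returns, while `labels` and `terraform_labels` are narrowed to the keys the configuration (plus provider defaults) already declares, which is what makes the `labels` field non-authoritative. A self-contained sketch of that narrowing step with made-up data; the real logic lives in the per-resource `flatten*Labels` helpers and `tpgresource.SetLabels`, which read the configured keys from `schema.ResourceData` via `GetOkExists`:

```go
package main

import "fmt"

// keepConfiguredKeys returns only the API labels whose keys are already
// declared in configuration/state, mirroring (in simplified form) the
// flatten*Labels helpers added in this diff.
func keepConfiguredKeys(apiLabels, configured map[string]string) map[string]string {
	out := make(map[string]string, len(configured))
	for k := range configured {
		if v, ok := apiLabels[k]; ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	// Hypothetical values for illustration.
	apiLabels := map[string]string{
		"env":        "test",     // declared in the Terraform config
		"managed-by": "some-bot", // added outside Terraform
	}
	configured := map[string]string{"env": "test"}

	fmt.Println(keepConfiguredKeys(apiLabels, configured)) // map[env:test] -> what "labels" shows
	fmt.Println(apiLabels)                                 // all labels    -> what "effective_labels" shows
}
```

Because the out-of-band `managed-by` label never appears under `labels`, it no longer produces a permanent diff; it remains visible through the computed `effective_labels` field, which is also why the import tests above add `labels` and `terraform_labels` to `ImportStateVerifyIgnore`.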
diff --git a/google/services/billing/data_source_google_billing_account.go b/google/services/billing/data_source_google_billing_account.go index 7372503fc6f..6584467e707 100644 --- a/google/services/billing/data_source_google_billing_account.go +++ b/google/services/billing/data_source_google_billing_account.go @@ -66,7 +66,7 @@ func dataSourceBillingAccountRead(d *schema.ResourceData, meta interface{}) erro if v, ok := d.GetOk("billing_account"); ok { resp, err := config.NewBillingClient(userAgent).BillingAccounts.Get(CanonicalBillingAccountName(v.(string))).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Billing Account Not Found : %s", v)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Billing Account Not Found : %s", v), CanonicalBillingAccountName(v.(string))) } if openOk && resp.Open != open.(bool) { diff --git a/google/services/billing/resource_billing_budget.go b/google/services/billing/resource_billing_budget.go index c90d7b967bd..193a393f155 100644 --- a/google/services/billing/resource_billing_budget.go +++ b/google/services/billing/resource_billing_budget.go @@ -698,9 +698,9 @@ func resourceBillingBudgetDelete(d *schema.ResourceData, meta interface{}) error func resourceBillingBudgetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "billingAccounts/(?P[^/]+)/budgets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^billingAccounts/(?P[^/]+)/budgets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/binaryauthorization/resource_binary_authorization_attestor.go b/google/services/binaryauthorization/resource_binary_authorization_attestor.go index 186e2b0b18f..badeb570c3c 100644 --- a/google/services/binaryauthorization/resource_binary_authorization_attestor.go +++ b/google/services/binaryauthorization/resource_binary_authorization_attestor.go @@ -24,6 +24,7 @@ import ( "regexp" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -75,6 +76,10 @@ func ResourceBinaryAuthorizationAttestor() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "attestation_authority_note": { Type: schema.TypeList, @@ -448,9 +453,9 @@ func resourceBinaryAuthorizationAttestorDelete(d *schema.ResourceData, meta inte func resourceBinaryAuthorizationAttestorImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/attestors/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/attestors/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/binaryauthorization/resource_binary_authorization_policy.go b/google/services/binaryauthorization/resource_binary_authorization_policy.go index 84554438722..f84df2582e6 100644 --- a/google/services/binaryauthorization/resource_binary_authorization_policy.go +++ b/google/services/binaryauthorization/resource_binary_authorization_policy.go @@ -25,6 +25,7 @@ import ( "regexp" "time" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -64,6 +65,10 @@ func ResourceBinaryAuthorizationPolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "default_admission_rule": { Type: schema.TypeList, @@ -494,8 +499,8 @@ func resourceBinaryAuthorizationPolicyDelete(d *schema.ResourceData, meta interf func resourceBinaryAuthorizationPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate.go b/google/services/certificatemanager/resource_certificate_manager_certificate.go index 6b8bdb3ec71..5d0b8c85cb8 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -64,6 +65,10 @@ func ResourceCertificateManagerCertificate() *schema.Resource { Version: 0, }, }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "name": { @@ -80,10 +85,13 @@ and all following characters must be a dash, underscore, letter or digit.`, Description: `A human-readable description of the resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the Certificate resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the Certificate resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { Type: schema.TypeString, @@ -261,6 +269,19 @@ Leaf certificate comes first, followed by intermediate ones if any.`, }, ExactlyOneOf: []string{"self_managed", "managed"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -286,12 +307,6 @@ func resourceCertificateManagerCertificateCreate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } scopeProp, err := expandCertificateManagerCertificateScope(d.Get("scope"), d, config) if err != nil { return err @@ -310,6 +325,12 @@ func resourceCertificateManagerCertificateCreate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("managed"); !tpgresource.IsEmptyValue(reflect.ValueOf(managedProp)) && (ok || !reflect.DeepEqual(v, managedProp)) { obj["managed"] = managedProp } + labelsProp, err := expandCertificateManagerCertificateEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CertificateManagerBasePath}}projects/{{project}}/locations/{{location}}/certificates?certificateId={{name}}") if err != nil { @@ -417,6 +438,12 @@ func resourceCertificateManagerCertificateRead(d *schema.ResourceData, meta inte if err := d.Set("managed", flattenCertificateManagerCertificateManaged(res["managed"], d, config)); err != nil { return fmt.Errorf("Error reading Certificate: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerCertificateTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Certificate: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerCertificateEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Certificate: %s", err) + } return nil } @@ -443,10 +470,10 @@ func resourceCertificateManagerCertificateUpdate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateLabels(d.Get("labels"), d, config) + labelsProp, err := 
expandCertificateManagerCertificateEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -462,7 +489,7 @@ func resourceCertificateManagerCertificateUpdate(d *schema.ResourceData, meta in updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -560,9 +587,9 @@ func resourceCertificateManagerCertificateDelete(d *schema.ResourceData, meta in func resourceCertificateManagerCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/certificates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/certificates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -582,7 +609,18 @@ func flattenCertificateManagerCertificateDescription(v interface{}, d *schema.Re } func flattenCertificateManagerCertificateLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerCertificateScope(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -688,19 +726,27 @@ func flattenCertificateManagerCertificateManagedAuthorizationAttemptInfoDetails( return v } -func expandCertificateManagerCertificateDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCertificateManagerCertificateLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenCertificateManagerCertificateTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenCertificateManagerCertificateEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandCertificateManagerCertificateDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandCertificateManagerCertificateScope(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -926,6 +972,17 @@ func 
expandCertificateManagerCertificateManagedAuthorizationAttemptInfoDetails(v return v, nil } +func expandCertificateManagerCertificateEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func ResourceCertificateManagerCertificateUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { log.Printf("[DEBUG] Attributes before migration: %#v", rawState) // Version 0 didn't support location. Default it to global. diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_generated_test.go b/google/services/certificatemanager/resource_certificate_manager_certificate_generated_test.go index 6fbf2706350..6bbd0d83d2c 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerCertificate_certificateManagerGoogleManagedCertifi ResourceName: "google_certificate_manager_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_managed", "name", "location"}, + ImportStateVerifyIgnore: []string{"self_managed", "name", "location", "labels", "terraform_labels"}, }, }, }) @@ -61,6 +61,9 @@ resource "google_certificate_manager_certificate" "default" { name = "tf-test-dns-cert%{random_suffix}" description = "The default cert" scope = "EDGE_CACHE" + labels = { + env = "test" + } managed { domains = [ google_certificate_manager_dns_authorization.instance.domain, @@ -107,7 +110,7 @@ func TestAccCertificateManagerCertificate_certificateManagerGoogleManagedCertifi ResourceName: "google_certificate_manager_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_managed", "name", "location"}, + ImportStateVerifyIgnore: []string{"self_managed", "name", "location", "labels", "terraform_labels"}, }, }, }) @@ -210,7 +213,7 @@ func TestAccCertificateManagerCertificate_certificateManagerSelfManagedCertifica ResourceName: "google_certificate_manager_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_managed", "name", "location"}, + ImportStateVerifyIgnore: []string{"self_managed", "name", "location", "labels", "terraform_labels"}, }, }, }) @@ -249,7 +252,7 @@ func TestAccCertificateManagerCertificate_certificateManagerSelfManagedCertifica ResourceName: "google_certificate_manager_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_managed", "name", "location"}, + ImportStateVerifyIgnore: []string{"self_managed", "name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config.go b/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config.go index 0b9a4a061cb..de5c666729c 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceCertificateManagerCertificateIssuanceConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "certificate_authority_config": { Type: schema.TypeList, @@ -118,7 +124,11 @@ the certificate has been issued and at least 7 days before it expires.`, Optional: true, ForceNew: true, Description: `'Set of label tags associated with the CertificateIssuanceConfig resource. - An object containing a list of "key": value pairs. Example: { "name": "wrench", "count": "3" }.`, + An object containing a list of "key": value pairs. Example: { "name": "wrench", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { @@ -135,6 +145,20 @@ the certificate has been issued and at least 7 days before it expires.`, accurate to nanoseconds with up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -185,18 +209,18 @@ func resourceCertificateManagerCertificateIssuanceConfigCreate(d *schema.Resourc } else if v, ok := d.GetOkExists("lifetime"); !tpgresource.IsEmptyValue(reflect.ValueOf(lifetimeProp)) && (ok || !reflect.DeepEqual(v, lifetimeProp)) { obj["lifetime"] = lifetimeProp } - labelsProp, err := expandCertificateManagerCertificateIssuanceConfigLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } certificateAuthorityConfigProp, err := expandCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfig(d.Get("certificate_authority_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("certificate_authority_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(certificateAuthorityConfigProp)) && (ok || !reflect.DeepEqual(v, certificateAuthorityConfigProp)) { obj["certificateAuthorityConfig"] = certificateAuthorityConfigProp } + labelsProp, err := expandCertificateManagerCertificateIssuanceConfigEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, 
"{{CertificateManagerBasePath}}projects/{{project}}/locations/{{location}}/certificateIssuanceConfigs?certificateIssuanceConfigId={{name}}") if err != nil { @@ -316,6 +340,12 @@ func resourceCertificateManagerCertificateIssuanceConfigRead(d *schema.ResourceD if err := d.Set("certificate_authority_config", flattenCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfig(res["certificateAuthorityConfig"], d, config)); err != nil { return fmt.Errorf("Error reading CertificateIssuanceConfig: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerCertificateIssuanceConfigTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateIssuanceConfig: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerCertificateIssuanceConfigEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateIssuanceConfig: %s", err) + } return nil } @@ -376,9 +406,9 @@ func resourceCertificateManagerCertificateIssuanceConfigDelete(d *schema.Resourc func resourceCertificateManagerCertificateIssuanceConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/certificateIssuanceConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/certificateIssuanceConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -431,7 +461,18 @@ func flattenCertificateManagerCertificateIssuanceConfigUpdateTime(v interface{}, } func flattenCertificateManagerCertificateIssuanceConfigLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -464,6 +505,25 @@ func flattenCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfi return v } +func flattenCertificateManagerCertificateIssuanceConfigTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCertificateManagerCertificateIssuanceConfigEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandCertificateManagerCertificateIssuanceConfigDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -480,17 +540,6 @@ func expandCertificateManagerCertificateIssuanceConfigLifetime(v interface{}, d return v, nil } -func expandCertificateManagerCertificateIssuanceConfigLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := 
make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -532,3 +581,14 @@ func expandCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfig func expandCertificateManagerCertificateIssuanceConfigCertificateAuthorityConfigCertificateAuthorityServiceConfigCaPool(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandCertificateManagerCertificateIssuanceConfigEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config_generated_test.go b/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config_generated_test.go index f538d05fa1b..84980ece1f5 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_issuance_config_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerCertificateIssuanceConfig_certificateManagerCertif ResourceName: "google_certificate_manager_certificate_issuance_config.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_map.go b/google/services/certificatemanager/resource_certificate_manager_certificate_map.go index 5d6e4a2cc5b..b7831ef061d 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_map.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_map.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceCertificateManagerCertificateMap() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -61,11 +67,14 @@ globally and match the pattern 'projects/*/locations/*/certificateMaps/*'.`, Description: `A human-readable description of the resource.`, }, "labels": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - Description: `Set of labels associated with a Certificate Map resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of labels associated with a Certificate Map resource. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "create_time": { Type: schema.TypeString, @@ -74,6 +83,12 @@ globally and match the pattern 'projects/*/locations/*/certificateMaps/*'.`, accurate to nanoseconds with up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "gclb_targets": { Type: schema.TypeList, Computed: true, @@ -119,6 +134,13 @@ This field is part of a union field 'target_proxy': Only one of 'targetHttpsProx }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -151,10 +173,10 @@ func resourceCertificateManagerCertificateMapCreate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateMapLabels(d.Get("labels"), d, config) + labelsProp, err := expandCertificateManagerCertificateMapEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -267,6 +289,12 @@ func resourceCertificateManagerCertificateMapRead(d *schema.ResourceData, meta i if err := d.Set("gclb_targets", flattenCertificateManagerCertificateMapGclbTargets(res["gclbTargets"], d, config)); err != nil { return fmt.Errorf("Error reading CertificateMap: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerCertificateMapTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateMap: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerCertificateMapEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateMap: %s", err) + } return nil } @@ -293,10 +321,10 @@ func resourceCertificateManagerCertificateMapUpdate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateMapLabels(d.Get("labels"), d, config) + labelsProp, err := expandCertificateManagerCertificateMapEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
labelsProp)) { obj["labels"] = labelsProp } @@ -312,7 +340,7 @@ func resourceCertificateManagerCertificateMapUpdate(d *schema.ResourceData, meta updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -410,9 +438,9 @@ func resourceCertificateManagerCertificateMapDelete(d *schema.ResourceData, meta func resourceCertificateManagerCertificateMapImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/certificateMaps/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/certificateMaps/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -440,7 +468,18 @@ func flattenCertificateManagerCertificateMapUpdateTime(v interface{}, d *schema. } func flattenCertificateManagerCertificateMapLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerCertificateMapGclbTargets(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -498,11 +537,30 @@ func flattenCertificateManagerCertificateMapGclbTargetsTargetSslProxy(v interfac return v } +func flattenCertificateManagerCertificateMapTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCertificateManagerCertificateMapEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandCertificateManagerCertificateMapDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCertificateManagerCertificateMapLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandCertificateManagerCertificateMapEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry.go b/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry.go index 145dea7c75c..bbe756087a7 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func 
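Update calls now add `labels` to the update mask when `effective_labels` has changed, because `effective_labels` is the field actually expanded into the request body. The mask itself remains a comma-joined list of API field names appended as a query parameter; a minimal sketch of that assembly, with a plain map standing in for `d.HasChange` and an illustrative URL:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for d.HasChange on a real *schema.ResourceData.
	changed := map[string]bool{
		"description":      true,
		"effective_labels": true,
	}

	var updateMask []string
	if changed["description"] {
		updateMask = append(updateMask, "description")
	}
	// Terraform tracks the change on effective_labels, but the API field is "labels".
	if changed["effective_labels"] {
		updateMask = append(updateMask, "labels")
	}

	// Illustrative URL; the provider builds the real one from its base path template.
	url := "https://example.googleapis.com/v1/projects/p/locations/global/certificateMaps/m"
	fmt.Println(url + "?updateMask=" + strings.Join(updateMask, ","))
}
```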
ResourceCertificateManagerCertificateMapEntry() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "certificates": { Type: schema.TypeList, @@ -90,11 +96,14 @@ selecting a proper certificate.`, }, "labels": { Type: schema.TypeMap, - Computed: true, Optional: true, Description: `Set of labels associated with a Certificate Map Entry. An object containing a list of "key": value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "matcher": { @@ -111,11 +120,24 @@ Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "state": { Type: schema.TypeString, Computed: true, Description: `A serving state of this Certificate Map Entry.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -148,12 +170,6 @@ func resourceCertificateManagerCertificateMapEntryCreate(d *schema.ResourceData, } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateMapEntryLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } certificatesProp, err := expandCertificateManagerCertificateMapEntryCertificates(d.Get("certificates"), d, config) if err != nil { return err @@ -172,6 +188,12 @@ func resourceCertificateManagerCertificateMapEntryCreate(d *schema.ResourceData, } else if v, ok := d.GetOkExists("matcher"); !tpgresource.IsEmptyValue(reflect.ValueOf(matcherProp)) && (ok || !reflect.DeepEqual(v, matcherProp)) { obj["matcher"] = matcherProp } + labelsProp, err := expandCertificateManagerCertificateMapEntryEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } nameProp, err := expandCertificateManagerCertificateMapEntryName(d.Get("name"), d, config) if err != nil { return err @@ -297,6 +319,12 @@ func resourceCertificateManagerCertificateMapEntryRead(d *schema.ResourceData, m if err := d.Set("matcher", 
flattenCertificateManagerCertificateMapEntryMatcher(res["matcher"], d, config)); err != nil { return fmt.Errorf("Error reading CertificateMapEntry: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerCertificateMapEntryTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateMapEntry: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerCertificateMapEntryEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateMapEntry: %s", err) + } if err := d.Set("name", flattenCertificateManagerCertificateMapEntryName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading CertificateMapEntry: %s", err) } @@ -326,18 +354,18 @@ func resourceCertificateManagerCertificateMapEntryUpdate(d *schema.ResourceData, } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerCertificateMapEntryLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } certificatesProp, err := expandCertificateManagerCertificateMapEntryCertificates(d.Get("certificates"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("certificates"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, certificatesProp)) { obj["certificates"] = certificatesProp } + labelsProp, err := expandCertificateManagerCertificateMapEntryEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CertificateManagerBasePath}}projects/{{project}}/locations/global/certificateMaps/{{map}}/certificateMapEntries/{{name}}") if err != nil { @@ -351,13 +379,13 @@ func resourceCertificateManagerCertificateMapEntryUpdate(d *schema.ResourceData, updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("certificates") { updateMask = append(updateMask, "certificates") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -453,9 +481,9 @@ func resourceCertificateManagerCertificateMapEntryDelete(d *schema.ResourceData, func resourceCertificateManagerCertificateMapEntryImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/certificateMaps/(?P[^/]+)/certificateMapEntries/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/certificateMaps/(?P[^/]+)/certificateMapEntries/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -483,7 +511,18 @@ func 
flattenCertificateManagerCertificateMapEntryUpdateTime(v interface{}, d *sc } func flattenCertificateManagerCertificateMapEntryLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerCertificateMapEntryCertificates(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -502,26 +541,34 @@ func flattenCertificateManagerCertificateMapEntryMatcher(v interface{}, d *schem return v } -func flattenCertificateManagerCertificateMapEntryName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenCertificateManagerCertificateMapEntryTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v } - return tpgresource.NameFromSelfLinkStateFunc(v) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } -func expandCertificateManagerCertificateMapEntryDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil +func flattenCertificateManagerCertificateMapEntryEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } -func expandCertificateManagerCertificateMapEntryLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenCertificateManagerCertificateMapEntryName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + return v } - return m, nil + return tpgresource.NameFromSelfLinkStateFunc(v) +} + +func expandCertificateManagerCertificateMapEntryDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandCertificateManagerCertificateMapEntryCertificates(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -536,6 +583,17 @@ func expandCertificateManagerCertificateMapEntryMatcher(v interface{}, d tpgreso return v, nil } +func expandCertificateManagerCertificateMapEntryEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func expandCertificateManagerCertificateMapEntryName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.GetResourceNameFromSelfLink(v.(string)), nil } diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry_generated_test.go b/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry_generated_test.go index bde7bf4c59e..45099b9f2c8 100644 --- 
a/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_map_entry_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerCertificateMapEntry_certificateManagerCertificateM ResourceName: "google_certificate_manager_certificate_map_entry.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"map"}, + ImportStateVerifyIgnore: []string{"map", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_certificate_map_generated_test.go b/google/services/certificatemanager/resource_certificate_manager_certificate_map_generated_test.go index 211a1d38017..a16f47095b1 100644 --- a/google/services/certificatemanager/resource_certificate_manager_certificate_map_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_certificate_map_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerCertificateMap_certificateManagerCertificateMapBas ResourceName: "google_certificate_manager_certificate_map.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_dns_authorization.go b/google/services/certificatemanager/resource_certificate_manager_dns_authorization.go index ad95ea68997..b41d2c182a0 100644 --- a/google/services/certificatemanager/resource_certificate_manager_dns_authorization.go +++ b/google/services/certificatemanager/resource_certificate_manager_dns_authorization.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceCertificateManagerDnsAuthorization() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "domain": { Type: schema.TypeString, @@ -70,10 +76,13 @@ and all following characters must be a dash, underscore, letter or digit.`, Description: `A human-readable description of the resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the DNS Authorization resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the DNS Authorization resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "dns_resource_record": { Type: schema.TypeList, @@ -102,6 +111,19 @@ E.g. 
'_acme-challenge.example.com'.`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -127,18 +149,18 @@ func resourceCertificateManagerDnsAuthorizationCreate(d *schema.ResourceData, me } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerDnsAuthorizationLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } domainProp, err := expandCertificateManagerDnsAuthorizationDomain(d.Get("domain"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("domain"); !tpgresource.IsEmptyValue(reflect.ValueOf(domainProp)) && (ok || !reflect.DeepEqual(v, domainProp)) { obj["domain"] = domainProp } + labelsProp, err := expandCertificateManagerDnsAuthorizationEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CertificateManagerBasePath}}projects/{{project}}/locations/global/dnsAuthorizations?dnsAuthorizationId={{name}}") if err != nil { @@ -246,6 +268,12 @@ func resourceCertificateManagerDnsAuthorizationRead(d *schema.ResourceData, meta if err := d.Set("dns_resource_record", flattenCertificateManagerDnsAuthorizationDnsResourceRecord(res["dnsResourceRecord"], d, config)); err != nil { return fmt.Errorf("Error reading DnsAuthorization: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerDnsAuthorizationTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading DnsAuthorization: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerDnsAuthorizationEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading DnsAuthorization: %s", err) + } return nil } @@ -272,10 +300,10 @@ func resourceCertificateManagerDnsAuthorizationUpdate(d *schema.ResourceData, me } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCertificateManagerDnsAuthorizationLabels(d.Get("labels"), d, config) + labelsProp, err := expandCertificateManagerDnsAuthorizationEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && 
(ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -291,7 +319,7 @@ func resourceCertificateManagerDnsAuthorizationUpdate(d *schema.ResourceData, me updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -389,9 +417,9 @@ func resourceCertificateManagerDnsAuthorizationDelete(d *schema.ResourceData, me func resourceCertificateManagerDnsAuthorizationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/dnsAuthorizations/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/dnsAuthorizations/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -411,7 +439,18 @@ func flattenCertificateManagerDnsAuthorizationDescription(v interface{}, d *sche } func flattenCertificateManagerDnsAuthorizationLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerDnsAuthorizationDomain(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -447,11 +486,34 @@ func flattenCertificateManagerDnsAuthorizationDnsResourceRecordData(v interface{ return v } +func flattenCertificateManagerDnsAuthorizationTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCertificateManagerDnsAuthorizationEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandCertificateManagerDnsAuthorizationDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCertificateManagerDnsAuthorizationLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandCertificateManagerDnsAuthorizationDomain(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandCertificateManagerDnsAuthorizationEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -461,7 +523,3 @@ func expandCertificateManagerDnsAuthorizationLabels(v interface{}, d tpgresource } return m, nil } - -func expandCertificateManagerDnsAuthorizationDomain(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/certificatemanager/resource_certificate_manager_dns_authorization_generated_test.go 
b/google/services/certificatemanager/resource_certificate_manager_dns_authorization_generated_test.go index b5c1afcb67c..e1b25407cd7 100644 --- a/google/services/certificatemanager/resource_certificate_manager_dns_authorization_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_dns_authorization_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerDnsAuthorization_certificateManagerDnsAuthorizatio ResourceName: "google_certificate_manager_dns_authorization.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_dns_authorization_test.go b/google/services/certificatemanager/resource_certificate_manager_dns_authorization_test.go index e6f985f2cce..1dfd8cdd317 100644 --- a/google/services/certificatemanager/resource_certificate_manager_dns_authorization_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_dns_authorization_test.go @@ -28,7 +28,7 @@ func TestAccCertificateManagerDnsAuthorization_update(t *testing.T) { ResourceName: "google_certificate_manager_dns_authorization.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, { Config: testAccCertificateManagerDnsAuthorization_update1(context), @@ -37,7 +37,7 @@ func TestAccCertificateManagerDnsAuthorization_update(t *testing.T) { ResourceName: "google_certificate_manager_dns_authorization.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_trust_config.go b/google/services/certificatemanager/resource_certificate_manager_trust_config.go index d1cf95be1f1..91ef2deaa0f 100644 --- a/google/services/certificatemanager/resource_certificate_manager_trust_config.go +++ b/google/services/certificatemanager/resource_certificate_manager_trust_config.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,11 @@ func ResourceCertificateManagerTrustConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -65,11 +71,14 @@ func ResourceCertificateManagerTrustConfig() *schema.Resource { Description: `One or more paragraphs of text description of a trust config.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `Set of label tags associated with the trust config.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Set of label tags associated with the trust config. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
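Each import-verification step in these tests now ignores `labels` and `terraform_labels`: after an import the provider can only rebuild `effective_labels` from the API response, since it cannot tell which keys came from configuration and which from provider default labels, so those two fields legitimately differ from the pre-import state. A sketch of the resulting test-step shape, assuming the plugin SDK's acceptance-test types (the helper name and package are illustrative):

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// importStep builds an import-verification step that tolerates the expected
// divergence on the split label fields.
func importStep(resourceName string) resource.TestStep {
	return resource.TestStep{
		ResourceName:      resourceName,
		ImportState:       true,
		ImportStateVerify: true,
		// labels and terraform_labels cannot be reconstructed from the API
		// alone, so they are excluded from the post-import state comparison.
		ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
	}
}
```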
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "trust_stores": { Type: schema.TypeList, @@ -122,6 +131,20 @@ Each certificate provided in PEM format may occupy up to 5kB.`, A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -149,12 +172,6 @@ func resourceCertificateManagerTrustConfigCreate(d *schema.ResourceData, meta in } obj := make(map[string]interface{}) - labelsProp, err := expandCertificateManagerTrustConfigLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandCertificateManagerTrustConfigDescription(d.Get("description"), d, config) if err != nil { return err @@ -167,6 +184,12 @@ func resourceCertificateManagerTrustConfigCreate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("trust_stores"); !tpgresource.IsEmptyValue(reflect.ValueOf(trustStoresProp)) && (ok || !reflect.DeepEqual(v, trustStoresProp)) { obj["trustStores"] = trustStoresProp } + labelsProp, err := expandCertificateManagerTrustConfigEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CertificateManagerBasePath}}projects/{{project}}/locations/{{location}}/trustConfigs?trustConfigId={{name}}") if err != nil { @@ -277,6 +300,12 @@ func resourceCertificateManagerTrustConfigRead(d *schema.ResourceData, meta inte if err := d.Set("trust_stores", flattenCertificateManagerTrustConfigTrustStores(res["trustStores"], d, config)); err != nil { return fmt.Errorf("Error reading TrustConfig: %s", err) } + if err := d.Set("terraform_labels", flattenCertificateManagerTrustConfigTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading TrustConfig: %s", err) + } + if err := d.Set("effective_labels", flattenCertificateManagerTrustConfigEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading TrustConfig: %s", err) + } return nil } @@ -409,9 +438,9 @@ func resourceCertificateManagerTrustConfigDelete(d *schema.ResourceData, meta in func resourceCertificateManagerTrustConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/trustConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - 
"(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/trustConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -435,7 +464,18 @@ func flattenCertificateManagerTrustConfigUpdateTime(v interface{}, d *schema.Res } func flattenCertificateManagerTrustConfigLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCertificateManagerTrustConfigDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -505,15 +545,23 @@ func flattenCertificateManagerTrustConfigTrustStoresIntermediateCasPemCertificat return v } -func expandCertificateManagerTrustConfigLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenCertificateManagerTrustConfigTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenCertificateManagerTrustConfigEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandCertificateManagerTrustConfigDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -600,3 +648,14 @@ func expandCertificateManagerTrustConfigTrustStoresIntermediateCas(v interface{} func expandCertificateManagerTrustConfigTrustStoresIntermediateCasPemCertificate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandCertificateManagerTrustConfigEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/certificatemanager/resource_certificate_manager_trust_config_generated_test.go b/google/services/certificatemanager/resource_certificate_manager_trust_config_generated_test.go index 9e338217d02..b4276418ef4 100644 --- a/google/services/certificatemanager/resource_certificate_manager_trust_config_generated_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_trust_config_generated_test.go @@ -49,7 +49,7 @@ func TestAccCertificateManagerTrustConfig_certificateManagerTrustConfigExample(t ResourceName: "google_certificate_manager_trust_config.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/certificatemanager/resource_certificate_manager_trust_config_test.go 
b/google/services/certificatemanager/resource_certificate_manager_trust_config_test.go index b0e7773e665..17b32795f90 100644 --- a/google/services/certificatemanager/resource_certificate_manager_trust_config_test.go +++ b/google/services/certificatemanager/resource_certificate_manager_trust_config_test.go @@ -25,17 +25,19 @@ func TestAccCertificateManagerTrustConfig_update(t *testing.T) { Config: testAccCertificateManagerTrustConfig_update0(context), }, { - ResourceName: "google_certificate_manager_trust_config.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_certificate_manager_trust_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccCertificateManagerTrustConfig_update1(context), }, { - ResourceName: "google_certificate_manager_trust_config.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_certificate_manager_trust_config.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/cloudasset/resource_cloud_asset_project_feed.go b/google/services/cloudasset/resource_cloud_asset_project_feed.go index cace9ba7875..823c063e6eb 100644 --- a/google/services/cloudasset/resource_cloud_asset_project_feed.go +++ b/google/services/cloudasset/resource_cloud_asset_project_feed.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceCloudAssetProjectFeed() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "feed_id": { Type: schema.TypeString, diff --git a/google/services/cloudbuild/data_source_google_cloudbuild_trigger.go b/google/services/cloudbuild/data_source_google_cloudbuild_trigger.go index 47bbe2e8363..76ca75cfbd7 100644 --- a/google/services/cloudbuild/data_source_google_cloudbuild_trigger.go +++ b/google/services/cloudbuild/data_source_google_cloudbuild_trigger.go @@ -34,7 +34,16 @@ func dataSourceGoogleCloudBuildTriggerRead(d *schema.ResourceData, meta interfac } id = strings.ReplaceAll(id, "/locations/global/", "/") - d.SetId(id) - return resourceCloudBuildTriggerRead(d, meta) + + err = resourceCloudBuildTriggerRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config.go b/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config.go index 1dd59d7b814..87b101563ae 100644 --- a/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config.go +++ b/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceCloudBuildBitbucketServerConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ 
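The Cloud Build trigger data source above no longer succeeds silently when the trigger does not exist: the underlying read clears the resource ID on a 404, and the data source now turns that cleared ID into an explicit "not found" error instead of returning empty state. A self-contained sketch of the pattern, with stand-ins for the SDK's `ResourceData` and for `resourceCloudBuildTriggerRead`:

```go
package main

import "fmt"

// fakeState mimics the piece of ResourceData behaviour the pattern relies on:
// the underlying read clears the ID to signal "not found".
type fakeState struct{ id string }

func (s *fakeState) Id() string      { return s.id }
func (s *fakeState) SetId(id string) { s.id = id }

// readTrigger stands in for resourceCloudBuildTriggerRead: on a 404 it clears
// the ID and returns nil instead of an error.
func readTrigger(s *fakeState, exists bool) error {
	if !exists {
		s.SetId("")
	}
	return nil
}

// dataSourceRead converts a silently cleared ID into an explicit error,
// matching the new data source behaviour.
func dataSourceRead(s *fakeState, id string, exists bool) error {
	s.SetId(id)
	if err := readTrigger(s, exists); err != nil {
		return err
	}
	if s.Id() == "" {
		return fmt.Errorf("%s not found", id)
	}
	return nil
}

func main() {
	s := &fakeState{}
	fmt.Println(dataSourceRead(s, "projects/p/locations/global/triggers/t", false))
}
```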
"api_key": { Type: schema.TypeString, @@ -695,9 +700,9 @@ func resourceCloudBuildBitbucketServerConfigDelete(d *schema.ResourceData, meta func resourceCloudBuildBitbucketServerConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/bitbucketServerConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/bitbucketServerConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config_generated_test.go b/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config_generated_test.go index cc42f391e4b..3070d2749d8 100644 --- a/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config_generated_test.go +++ b/google/services/cloudbuild/resource_cloudbuild_bitbucket_server_config_generated_test.go @@ -76,7 +76,6 @@ func TestAccCloudBuildBitbucketServerConfig_cloudbuildBitbucketServerConfigPeere t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "peered-network"), "random_suffix": acctest.RandString(t, 10), } @@ -106,9 +105,9 @@ resource "google_project_service" "servicenetworking" { service = "servicenetworking.googleapis.com" disable_on_destroy = false } - -data "google_compute_network" "vpc_network" { - name = "%{network_name}" + +resource "google_compute_network" "vpc_network" { + name = "tf-test-vpc-network%{random_suffix}" depends_on = [google_project_service.servicenetworking] } @@ -117,11 +116,11 @@ resource "google_compute_global_address" "private_ip_alloc" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.vpc_network.id + network = google_compute_network.vpc_network.id } resource "google_service_networking_connection" "default" { - network = data.google_compute_network.vpc_network.id + network = google_compute_network.vpc_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] depends_on = [google_project_service.servicenetworking] @@ -138,7 +137,7 @@ resource "google_cloudbuild_bitbucket_server_config" "bbs-config-with-peered-net } username = "test" api_key = "" - peered_network = replace(data.google_compute_network.vpc_network.id, data.google_project.project.name, data.google_project.project.number) + peered_network = replace(google_compute_network.vpc_network.id, data.google_project.project.name, data.google_project.project.number) ssl_ca = "-----BEGIN CERTIFICATE-----\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n-----END CERTIFICATE-----\n" depends_on = [google_service_networking_connection.default] } diff --git a/google/services/cloudbuild/resource_cloudbuild_trigger.go b/google/services/cloudbuild/resource_cloudbuild_trigger.go index 2a7f42fd7ab..3cd216d684b 100644 --- a/google/services/cloudbuild/resource_cloudbuild_trigger.go +++ b/google/services/cloudbuild/resource_cloudbuild_trigger.go @@ -103,6 +103,7 @@ func ResourceCloudBuildTrigger() *schema.Resource { }, CustomizeDiff: customdiff.All( stepTimeoutCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1869,10 +1870,10 @@ func resourceCloudBuildTriggerDelete(d *schema.ResourceData, meta 
interface{}) e func resourceCloudBuildTriggerImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/triggers/(?P[^/]+)", - "projects/(?P[^/]+)/triggers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/triggers/(?P[^/]+)$", + "^projects/(?P[^/]+)/triggers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/cloudbuild/resource_cloudbuild_worker_pool.go b/google/services/cloudbuild/resource_cloudbuild_worker_pool.go index c7d2c6c2d15..3ca16cab0a4 100644 --- a/google/services/cloudbuild/resource_cloudbuild_worker_pool.go +++ b/google/services/cloudbuild/resource_cloudbuild_worker_pool.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceCloudbuildWorkerPool() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -66,19 +71,18 @@ func ResourceCloudbuildWorkerPool() *schema.Resource { Description: "User-defined name of the `WorkerPool`.", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - Description: "User specified annotations. See https://google.aip.dev/128#annotations for more details such as format and size limitations.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "display_name": { Type: schema.TypeString, Optional: true, Description: "A user-specified, human-readable name for the `WorkerPool`. If provided, this value must be 1-63 characters.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + "network_config": { Type: schema.TypeList, Optional: true, @@ -106,6 +110,13 @@ func ResourceCloudbuildWorkerPool() *schema.Resource { Elem: CloudbuildWorkerPoolWorkerConfigSchema(), }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "User specified annotations. See https://google.aip.dev/128#annotations for more details such as format and size limitations.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -195,8 +206,8 @@ func resourceCloudbuildWorkerPoolCreate(d *schema.ResourceData, meta interface{} obj := &cloudbuild.WorkerPool{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), DisplayName: dcl.String(d.Get("display_name").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), NetworkConfig: expandCloudbuildWorkerPoolNetworkConfig(d.Get("network_config")), Project: dcl.String(project), WorkerConfig: expandCloudbuildWorkerPoolWorkerConfig(d.Get("worker_config")), @@ -249,8 +260,8 @@ func resourceCloudbuildWorkerPoolRead(d *schema.ResourceData, meta interface{}) obj := &cloudbuild.WorkerPool{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), DisplayName: dcl.String(d.Get("display_name").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), NetworkConfig: expandCloudbuildWorkerPoolNetworkConfig(d.Get("network_config")), Project: dcl.String(project), WorkerConfig: expandCloudbuildWorkerPoolWorkerConfig(d.Get("worker_config")), @@ -284,12 +295,12 @@ func resourceCloudbuildWorkerPoolRead(d *schema.ResourceData, meta interface{}) if err = d.Set("name", res.Name); err != nil { return fmt.Errorf("error setting name in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("display_name", res.DisplayName); err != nil { return fmt.Errorf("error setting display_name in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } if err = d.Set("network_config", flattenCloudbuildWorkerPoolNetworkConfig(res.NetworkConfig)); err != nil { return fmt.Errorf("error setting network_config in state: %s", err) } @@ -299,6 +310,9 @@ func resourceCloudbuildWorkerPoolRead(d *schema.ResourceData, meta interface{}) if err = d.Set("worker_config", flattenCloudbuildWorkerPoolWorkerConfig(res.WorkerConfig)); err != nil { return fmt.Errorf("error setting worker_config in state: %s", err) } + if err = d.Set("annotations", flattenCloudbuildWorkerPoolAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -327,8 +341,8 @@ func resourceCloudbuildWorkerPoolUpdate(d *schema.ResourceData, meta interface{} obj := &cloudbuild.WorkerPool{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), DisplayName: dcl.String(d.Get("display_name").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), NetworkConfig: expandCloudbuildWorkerPoolNetworkConfig(d.Get("network_config")), Project: dcl.String(project), WorkerConfig: expandCloudbuildWorkerPoolWorkerConfig(d.Get("worker_config")), @@ -376,8 +390,8 @@ func resourceCloudbuildWorkerPoolDelete(d *schema.ResourceData, meta interface{} obj := &cloudbuild.WorkerPool{ 
Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), DisplayName: dcl.String(d.Get("display_name").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), NetworkConfig: expandCloudbuildWorkerPoolNetworkConfig(d.Get("network_config")), Project: dcl.String(project), WorkerConfig: expandCloudbuildWorkerPoolWorkerConfig(d.Get("worker_config")), @@ -486,3 +500,18 @@ func flattenCloudbuildWorkerPoolWorkerConfig(obj *cloudbuild.WorkerPoolWorkerCon return []interface{}{transformed} } + +func flattenCloudbuildWorkerPoolAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/cloudbuild/resource_cloudbuild_worker_pool_test.go b/google/services/cloudbuild/resource_cloudbuild_worker_pool_test.go index a3a0a15c391..9ad95a41a42 100644 --- a/google/services/cloudbuild/resource_cloudbuild_worker_pool_test.go +++ b/google/services/cloudbuild/resource_cloudbuild_worker_pool_test.go @@ -40,9 +40,10 @@ func TestAccCloudbuildWorkerPool_basic(t *testing.T) { Config: testAccCloudbuildWorkerPool_updated(context), }, { - ImportState: true, - ImportStateVerify: true, - ResourceName: "google_cloudbuild_worker_pool.pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, + ResourceName: "google_cloudbuild_worker_pool.pool", }, { Config: testAccCloudbuildWorkerPool_noWorkerConfig(context), @@ -80,6 +81,11 @@ resource "google_cloudbuild_worker_pool" "pool" { machine_type = "e2-standard-4" no_external_ip = false } + + annotations = { + env = "foo" + default_expiration_ms = 3600000 + } } `, context) } @@ -99,7 +105,7 @@ func TestAccCloudbuildWorkerPool_withNetwork(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), "project": envvar.GetTestProjectFromEnv(), - "network_name": acctest.BootstrapSharedTestNetwork(t, "cloudbuild-workerpool"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "cloudbuild-workerpool-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -126,20 +132,6 @@ data "google_compute_network" "network" { name = "%{network_name}" } -resource "google_compute_global_address" "worker_range" { - name = "tf-test-worker-pool-range%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.network.id -} - -resource "google_service_networking_connection" "worker_pool_conn" { - network = data.google_compute_network.network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.worker_range.name] -} - resource "google_cloudbuild_worker_pool" "pool" { name = "pool%{random_suffix}" location = "europe-west1" @@ -152,7 +144,6 @@ resource "google_cloudbuild_worker_pool" "pool" { peered_network = data.google_compute_network.network.id peered_network_ip_range = "/29" } - depends_on = [google_service_networking_connection.worker_pool_conn] } `, context) } diff --git a/google/services/cloudbuildv2/resource_cloudbuildv2_connection.go b/google/services/cloudbuildv2/resource_cloudbuildv2_connection.go index d786561f9b0..549f27a97b3 100644 --- 
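// Illustrative sketch, not part of the generated change: this is the shape of
// the annotations split applied above and repeated for the cloudbuildv2 and
// clouddeploy resources below. The user-facing "annotations" field becomes
// non-authoritative, the Computed "effective_annotations" field records
// everything returned by the API, and on Read the user-facing field is rebuilt
// by keeping only the keys present in configuration (as the generated
// flattenCloudbuildWorkerPoolAnnotations helper above does). The helper and
// names below are stand-ins, not the provider's code.
package annotationssketch

// filterToConfiguredKeys keeps only the API-reported values whose keys appear
// in the configured map, so annotations added outside Terraform never surface
// as a diff on the user-facing field.
func filterToConfiguredKeys(api map[string]string, configured map[string]interface{}) map[string]interface{} {
	if api == nil {
		return nil
	}
	out := make(map[string]interface{}, len(configured))
	for k := range configured {
		out[k] = api[k]
	}
	return out
}

// Example: an annotation added outside Terraform stays visible through
// effective_annotations but does not perturb "annotations".
//
//	api        := map[string]string{"env": "foo", "managed-by": "some-controller"}
//	configured := map[string]interface{}{"env": "foo"}
//	filterToConfiguredKeys(api, configured) // map[env:foo]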
a/google/services/cloudbuildv2/resource_cloudbuildv2_connection.go +++ b/google/services/cloudbuildv2/resource_cloudbuildv2_connection.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceCloudbuildv2Connection() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -66,19 +71,18 @@ func ResourceCloudbuildv2Connection() *schema.Resource { Description: "Immutable. The resource name of the connection, in the format `projects/{project}/locations/{location}/connections/{connection_id}`.", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - Description: "Allows clients to store small amounts of arbitrary data.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "disabled": { Type: schema.TypeBool, Optional: true, Description: "If disabled is set to true, functionality is disabled for this connection. Repository based API methods and webhooks processing for repositories in this connection will be disabled.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + "github_config": { Type: schema.TypeList, Optional: true, @@ -115,6 +119,13 @@ func ResourceCloudbuildv2Connection() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "Allows clients to store small amounts of arbitrary data.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -400,8 +411,8 @@ func resourceCloudbuildv2ConnectionCreate(d *schema.ResourceData, meta interface obj := &cloudbuildv2.Connection{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Disabled: dcl.Bool(d.Get("disabled").(bool)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), GithubConfig: expandCloudbuildv2ConnectionGithubConfig(d.Get("github_config")), GithubEnterpriseConfig: expandCloudbuildv2ConnectionGithubEnterpriseConfig(d.Get("github_enterprise_config")), GitlabConfig: expandCloudbuildv2ConnectionGitlabConfig(d.Get("gitlab_config")), @@ -455,8 +466,8 @@ func resourceCloudbuildv2ConnectionRead(d *schema.ResourceData, meta interface{} obj := &cloudbuildv2.Connection{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Disabled: dcl.Bool(d.Get("disabled").(bool)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), GithubConfig: expandCloudbuildv2ConnectionGithubConfig(d.Get("github_config")), GithubEnterpriseConfig: expandCloudbuildv2ConnectionGithubEnterpriseConfig(d.Get("github_enterprise_config")), GitlabConfig: expandCloudbuildv2ConnectionGitlabConfig(d.Get("gitlab_config")), @@ -491,12 +502,12 @@ func resourceCloudbuildv2ConnectionRead(d *schema.ResourceData, meta interface{} if err = d.Set("name", res.Name); err != nil { return fmt.Errorf("error setting name in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("disabled", res.Disabled); err != nil { return fmt.Errorf("error setting disabled in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } if err = d.Set("github_config", flattenCloudbuildv2ConnectionGithubConfig(res.GithubConfig)); err != nil { return fmt.Errorf("error setting github_config in state: %s", err) } @@ -509,6 +520,9 @@ func resourceCloudbuildv2ConnectionRead(d *schema.ResourceData, meta interface{} if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenCloudbuildv2ConnectionAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -537,8 +551,8 @@ func resourceCloudbuildv2ConnectionUpdate(d *schema.ResourceData, meta interface obj := &cloudbuildv2.Connection{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Disabled: dcl.Bool(d.Get("disabled").(bool)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), GithubConfig: expandCloudbuildv2ConnectionGithubConfig(d.Get("github_config")), GithubEnterpriseConfig: expandCloudbuildv2ConnectionGithubEnterpriseConfig(d.Get("github_enterprise_config")), GitlabConfig: 
expandCloudbuildv2ConnectionGitlabConfig(d.Get("gitlab_config")), @@ -587,8 +601,8 @@ func resourceCloudbuildv2ConnectionDelete(d *schema.ResourceData, meta interface obj := &cloudbuildv2.Connection{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Disabled: dcl.Bool(d.Get("disabled").(bool)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), GithubConfig: expandCloudbuildv2ConnectionGithubConfig(d.Get("github_config")), GithubEnterpriseConfig: expandCloudbuildv2ConnectionGithubEnterpriseConfig(d.Get("github_enterprise_config")), GitlabConfig: expandCloudbuildv2ConnectionGitlabConfig(d.Get("gitlab_config")), @@ -892,3 +906,18 @@ func flattenCloudbuildv2ConnectionInstallationState(obj *cloudbuildv2.Connection return []interface{}{transformed} } + +func flattenCloudbuildv2ConnectionAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/cloudbuildv2/resource_cloudbuildv2_connection_generated_test.go b/google/services/cloudbuildv2/resource_cloudbuildv2_connection_generated_test.go index 3ea534780df..ef2a89f6984 100644 --- a/google/services/cloudbuildv2/resource_cloudbuildv2_connection_generated_test.go +++ b/google/services/cloudbuildv2/resource_cloudbuildv2_connection_generated_test.go @@ -51,9 +51,10 @@ func TestAccCloudbuildv2Connection_GheCompleteConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GheCompleteConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -76,17 +77,19 @@ func TestAccCloudbuildv2Connection_GheConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GheConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GheConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -109,9 +112,10 @@ func TestAccCloudbuildv2Connection_GhePrivConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GhePrivConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -134,17 +138,19 @@ func TestAccCloudbuildv2Connection_GhePrivUpdateConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GhePrivUpdateConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: 
"google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GhePrivUpdateConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -167,17 +173,19 @@ func TestAccCloudbuildv2Connection_GithubConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GithubConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GithubConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -199,9 +207,10 @@ func TestAccCloudbuildv2Connection_GitlabConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GitlabConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -223,17 +232,19 @@ func TestAccCloudbuildv2Connection_GleConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GleConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GleConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -255,17 +266,19 @@ func TestAccCloudbuildv2Connection_GleOldConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GleOldConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GleOldConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -287,9 +300,10 @@ func TestAccCloudbuildv2Connection_GlePrivConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GlePrivConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: 
"google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -311,17 +325,19 @@ func TestAccCloudbuildv2Connection_GlePrivUpdateConnection(t *testing.T) { Config: testAccCloudbuildv2Connection_GlePrivUpdateConnection(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, { Config: testAccCloudbuildv2Connection_GlePrivUpdateConnectionUpdate0(context), }, { - ResourceName: "google_cloudbuildv2_connection.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_connection.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -330,9 +346,8 @@ func TestAccCloudbuildv2Connection_GlePrivUpdateConnection(t *testing.T) { func testAccCloudbuildv2Connection_GheCompleteConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-staging-test.com" @@ -343,7 +358,8 @@ resource "google_cloudbuildv2_connection" "primary" { webhook_secret_secret_version = "projects/gcb-terraform-creds/secrets/ghe-webhook-secret/versions/latest" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -353,15 +369,15 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GheConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-staging-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -371,9 +387,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GheConnectionUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-staging-test.com" @@ -384,7 +399,8 @@ resource "google_cloudbuildv2_connection" "primary" { webhook_secret_secret_version = "projects/gcb-terraform-creds/secrets/ghe-webhook-secret/versions/latest" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -394,9 +410,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GhePrivConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-private-ca.com" @@ -408,7 +423,8 @@ resource 
"google_cloudbuildv2_connection" "primary" { ssl_ca = "-----BEGIN CERTIFICATE-----\nMIIEXTCCA0WgAwIBAgIUANaBCc9j/xdKJHU0sgmv6yE2WCIwDQYJKoZIhvcNAQEL\nBQAwLDEUMBIGA1UEChMLUHJvY3RvciBFbmcxFDASBgNVBAMTC1Byb2N0b3ItZW5n\nMB4XDTIxMDcxNTIwMDcwMloXDTIyMDcxNTIwMDcwMVowADCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAMVel7I88DkhwW445BNPBZvJNTV1AreHdz4um4U1\nop2+4L7JeNrUs5SRc0fzeOyOmA9ZzTDu9hBC7zj/sVNUy6cIQGCj32sr5SCAEIat\nnFZlzmVqJPT4J5NAaE37KO5347myTJEBrvpq8az4CtvX0yUzPK0gbUmaSaztVi4o\ndbJLKyv575xCLC/Hu6fIHBDH19eG1Ath9VpuAOkttRRoxu2VqijJZrGqaS+0o+OX\nrLi5HMtZbZjgQB4mc1g3ZDKX/gynxr+CDNaqNOqxuog33Tl5OcOk9DrR3MInaE7F\nyQFuH9mzF64AqOoTf7Tr/eAIz5XVt8K51nk+fSybEfKVwtMCAwEAAaOCAaEwggGd\nMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBQU/9dYyqMz\nv9rOMwPZcoIRMDAQCjAfBgNVHSMEGDAWgBTkQGTiCkLCmv/Awxdz5TAVRmyFfDCB\njQYIKwYBBQUHAQEEgYAwfjB8BggrBgEFBQcwAoZwaHR0cDovL3ByaXZhdGVjYS1j\nb250ZW50LTYxYWEyYzA5LTAwMDAtMjJjMi05ZjYyLWQ0ZjU0N2Y4MDIwMC5zdG9y\nYWdlLmdvb2dsZWFwaXMuY29tLzQxNGU4ZTJjZjU2ZWEyYzQxNmM0L2NhLmNydDAo\nBgNVHREBAf8EHjAcghpnaGUucHJvY3Rvci1wcml2YXRlLWNhLmNvbTCBggYDVR0f\nBHsweTB3oHWgc4ZxaHR0cDovL3ByaXZhdGVjYS1jb250ZW50LTYxYWEyYzA5LTAw\nMDAtMjJjMi05ZjYyLWQ0ZjU0N2Y4MDIwMC5zdG9yYWdlLmdvb2dsZWFwaXMuY29t\nLzQxNGU4ZTJjZjU2ZWEyYzQxNmM0L2NybC5jcmwwDQYJKoZIhvcNAQELBQADggEB\nABo6BQLEZZ+YNiDuv2sRvcxSopQQb7fZjqIA9XOA35pNSKay2SncODnNvfsdRnOp\ncoy25sQSIzWyJ9zWl8DZ6evoOu5csZ2PoFqx5LsIq37w+ZcwD6DM8Zm7JqASxmxx\nGqTF0nHC4Aw8q8aJBeRD3PsSkfN5Q3DP3nTDnLyd0l+yPIkHUbZMoiFHX3BkhCng\nG96mYy/y3t16ghfV9lZkXpD/JK5aiN0bTHCDRc69owgfYiAcAqzBJ9gfZ90MBgzv\ngTTQel5dHg49SYXfnUpTy0HdQLEcoggOF8Q8V+xKdKa6eVbrvjJrkEJmvIQI5iCR\nhNvKR25mx8JUopqEXmONmqU=\n-----END CERTIFICATE-----\n\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgITMwWN+62nLcgyLa7p+jD1K90g6TANBgkqhkiG9w0BAQsF\nADAsMRQwEgYDVQQKEwtQcm9jdG9yIEVuZzEUMBIGA1UEAxMLUHJvY3Rvci1lbmcw\nHhcNMjEwNzEyMTM1OTQ0WhcNMzEwNzEwMTM1OTQzWjAsMRQwEgYDVQQKEwtQcm9j\ndG9yIEVuZzEUMBIGA1UEAxMLUHJvY3Rvci1lbmcwggEiMA0GCSqGSIb3DQEBAQUA\nA4IBDwAwggEKAoIBAQCYqJP5Qt90jIbld2dtuUV/zIkBFsTe4fapJfhBji03xBpN\nO1Yxj/jPSZ67Kdeoy0lEwvc2hL5FQGhIjLMR0mzOyN4fk/DZiA/4tAVi7hJyqpUC\n71JSwp7MwXL1b26CSE1MhcoCqA/E4iZxfJfF/ef4lhmC24UEmu8FEbldoy+6OysB\nRu7dGDwicW5F9h7eSkpGAsCRdJHh65iUx/IH0C4Ux2UZRDZdj6wVbuVu9tb938xF\nyRuVClONoLSn/lwdzeV7hQmBSm8qmfgbNPbYRaNLz3hOpsT+27aDQp2/pxue8hFJ\nd7We3+Lr5O4IL45PBwhVEAiFZqde6d4qViNEB2qTAgMBAAGjYzBhMA4GA1UdDwEB\n/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTkQGTiCkLCmv/Awxdz\n5TAVRmyFfDAfBgNVHSMEGDAWgBTkQGTiCkLCmv/Awxdz5TAVRmyFfDANBgkqhkiG\n9w0BAQsFAAOCAQEAfy5BJsWdx0oWWi7SFg9MbryWjBVPJl93UqACgG0Cgh813O/x\nlDZQhGO/ZFVhHz/WgooE/HgVNoVJTubKLLzz+zCkOB0wa3GMqJDyFjhFmUtd/3VM\nZh0ZQ+JWYsAiZW4VITj5xEn/d/B3xCFWGC1vhvhptEJ8Fo2cE1yM2pzk08NqFWoY\n4FaH0sbxWgyCKwTmtcYDbnx4FYuddryGCIxbYizqUK1dr4DGKeHonhm/d234Ew3x\n3vIBPoHMOfBec/coP1xAf5o+F+MRMO/sQ3tTGgyOH18lwsHo9SmXCrmOwVQPKrEw\nm+A+5TjXLmenyaBhqXa0vkAZYJhWdROhWC0VTA==\n-----END CERTIFICATE-----\n" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -418,15 +434,15 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GhePrivUpdateConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-staging-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -436,9 +452,8 
@@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GhePrivUpdateConnectionUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-private-ca.com" @@ -450,7 +465,8 @@ resource "google_cloudbuildv2_connection" "primary" { ssl_ca = "-----BEGIN CERTIFICATE-----\nMIIEXTCCA0WgAwIBAgIUANaBCc9j/xdKJHU0sgmv6yE2WCIwDQYJKoZIhvcNAQEL\nBQAwLDEUMBIGA1UEChMLUHJvY3RvciBFbmcxFDASBgNVBAMTC1Byb2N0b3ItZW5n\nMB4XDTIxMDcxNTIwMDcwMloXDTIyMDcxNTIwMDcwMVowADCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAMVel7I88DkhwW445BNPBZvJNTV1AreHdz4um4U1\nop2+4L7JeNrUs5SRc0fzeOyOmA9ZzTDu9hBC7zj/sVNUy6cIQGCj32sr5SCAEIat\nnFZlzmVqJPT4J5NAaE37KO5347myTJEBrvpq8az4CtvX0yUzPK0gbUmaSaztVi4o\ndbJLKyv575xCLC/Hu6fIHBDH19eG1Ath9VpuAOkttRRoxu2VqijJZrGqaS+0o+OX\nrLi5HMtZbZjgQB4mc1g3ZDKX/gynxr+CDNaqNOqxuog33Tl5OcOk9DrR3MInaE7F\nyQFuH9mzF64AqOoTf7Tr/eAIz5XVt8K51nk+fSybEfKVwtMCAwEAAaOCAaEwggGd\nMA4GA1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBQU/9dYyqMz\nv9rOMwPZcoIRMDAQCjAfBgNVHSMEGDAWgBTkQGTiCkLCmv/Awxdz5TAVRmyFfDCB\njQYIKwYBBQUHAQEEgYAwfjB8BggrBgEFBQcwAoZwaHR0cDovL3ByaXZhdGVjYS1j\nb250ZW50LTYxYWEyYzA5LTAwMDAtMjJjMi05ZjYyLWQ0ZjU0N2Y4MDIwMC5zdG9y\nYWdlLmdvb2dsZWFwaXMuY29tLzQxNGU4ZTJjZjU2ZWEyYzQxNmM0L2NhLmNydDAo\nBgNVHREBAf8EHjAcghpnaGUucHJvY3Rvci1wcml2YXRlLWNhLmNvbTCBggYDVR0f\nBHsweTB3oHWgc4ZxaHR0cDovL3ByaXZhdGVjYS1jb250ZW50LTYxYWEyYzA5LTAw\nMDAtMjJjMi05ZjYyLWQ0ZjU0N2Y4MDIwMC5zdG9yYWdlLmdvb2dsZWFwaXMuY29t\nLzQxNGU4ZTJjZjU2ZWEyYzQxNmM0L2NybC5jcmwwDQYJKoZIhvcNAQELBQADggEB\nABo6BQLEZZ+YNiDuv2sRvcxSopQQb7fZjqIA9XOA35pNSKay2SncODnNvfsdRnOp\ncoy25sQSIzWyJ9zWl8DZ6evoOu5csZ2PoFqx5LsIq37w+ZcwD6DM8Zm7JqASxmxx\nGqTF0nHC4Aw8q8aJBeRD3PsSkfN5Q3DP3nTDnLyd0l+yPIkHUbZMoiFHX3BkhCng\nG96mYy/y3t16ghfV9lZkXpD/JK5aiN0bTHCDRc69owgfYiAcAqzBJ9gfZ90MBgzv\ngTTQel5dHg49SYXfnUpTy0HdQLEcoggOF8Q8V+xKdKa6eVbrvjJrkEJmvIQI5iCR\nhNvKR25mx8JUopqEXmONmqU=\n-----END CERTIFICATE-----\n\n-----BEGIN CERTIFICATE-----\nMIIDSDCCAjCgAwIBAgITMwWN+62nLcgyLa7p+jD1K90g6TANBgkqhkiG9w0BAQsF\nADAsMRQwEgYDVQQKEwtQcm9jdG9yIEVuZzEUMBIGA1UEAxMLUHJvY3Rvci1lbmcw\nHhcNMjEwNzEyMTM1OTQ0WhcNMzEwNzEwMTM1OTQzWjAsMRQwEgYDVQQKEwtQcm9j\ndG9yIEVuZzEUMBIGA1UEAxMLUHJvY3Rvci1lbmcwggEiMA0GCSqGSIb3DQEBAQUA\nA4IBDwAwggEKAoIBAQCYqJP5Qt90jIbld2dtuUV/zIkBFsTe4fapJfhBji03xBpN\nO1Yxj/jPSZ67Kdeoy0lEwvc2hL5FQGhIjLMR0mzOyN4fk/DZiA/4tAVi7hJyqpUC\n71JSwp7MwXL1b26CSE1MhcoCqA/E4iZxfJfF/ef4lhmC24UEmu8FEbldoy+6OysB\nRu7dGDwicW5F9h7eSkpGAsCRdJHh65iUx/IH0C4Ux2UZRDZdj6wVbuVu9tb938xF\nyRuVClONoLSn/lwdzeV7hQmBSm8qmfgbNPbYRaNLz3hOpsT+27aDQp2/pxue8hFJ\nd7We3+Lr5O4IL45PBwhVEAiFZqde6d4qViNEB2qTAgMBAAGjYzBhMA4GA1UdDwEB\n/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTkQGTiCkLCmv/Awxdz\n5TAVRmyFfDAfBgNVHSMEGDAWgBTkQGTiCkLCmv/Awxdz5TAVRmyFfDANBgkqhkiG\n9w0BAQsFAAOCAQEAfy5BJsWdx0oWWi7SFg9MbryWjBVPJl93UqACgG0Cgh813O/x\nlDZQhGO/ZFVhHz/WgooE/HgVNoVJTubKLLzz+zCkOB0wa3GMqJDyFjhFmUtd/3VM\nZh0ZQ+JWYsAiZW4VITj5xEn/d/B3xCFWGC1vhvhptEJ8Fo2cE1yM2pzk08NqFWoY\n4FaH0sbxWgyCKwTmtcYDbnx4FYuddryGCIxbYizqUK1dr4DGKeHonhm/d234Ew3x\n3vIBPoHMOfBec/coP1xAf5o+F+MRMO/sQ3tTGgyOH18lwsHo9SmXCrmOwVQPKrEw\nm+A+5TjXLmenyaBhqXa0vkAZYJhWdROhWC0VTA==\n-----END CERTIFICATE-----\n" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -462,11 +478,6 @@ func testAccCloudbuildv2Connection_GithubConnection(context 
map[string]interface resource "google_cloudbuildv2_connection" "primary" { location = "%{region}" name = "tf-test-connection%{random_suffix}" - - annotations = { - somekey = "somevalue" - } - disabled = true github_config { @@ -478,6 +489,10 @@ resource "google_cloudbuildv2_connection" "primary" { } project = "%{project_name}" + + annotations = { + somekey = "somevalue" + } } @@ -489,13 +504,6 @@ func testAccCloudbuildv2Connection_GithubConnectionUpdate0(context map[string]in resource "google_cloudbuildv2_connection" "primary" { location = "%{region}" name = "tf-test-connection%{random_suffix}" - - annotations = { - otherkey = "othervalue" - - somekey = "somevalue" - } - disabled = false github_config { @@ -507,6 +515,12 @@ resource "google_cloudbuildv2_connection" "primary" { } project = "%{project_name}" + + annotations = { + otherkey = "othervalue" + + somekey = "somevalue" + } } @@ -516,9 +530,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GitlabConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -532,7 +545,8 @@ resource "google_cloudbuildv2_connection" "primary" { webhook_secret_secret_version = "projects/407304063574/secrets/gle-webhook-secret/versions/latest" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -542,9 +556,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GleConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -559,7 +572,8 @@ resource "google_cloudbuildv2_connection" "primary" { host_uri = "https://gle-us-central1.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -569,9 +583,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GleConnectionUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -586,7 +599,8 @@ resource "google_cloudbuildv2_connection" "primary" { host_uri = "https://gle-old.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -596,9 +610,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GleOldConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -613,7 +626,8 @@ resource "google_cloudbuildv2_connection" "primary" { host_uri = "https://gle-old.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -623,9 +637,8 
@@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GleOldConnectionUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -640,7 +653,8 @@ resource "google_cloudbuildv2_connection" "primary" { host_uri = "https://gle-us-central1.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -650,9 +664,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GlePrivConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -673,7 +686,8 @@ resource "google_cloudbuildv2_connection" "primary" { ssl_ca = "-----BEGIN CERTIFICATE-----\nMIIDajCCAlKgAwIBAgIUedXFQAw0eUDTe6gmPKVyRvBlDi8wDQYJKoZIhvcNAQEL\nBQAwVjELMAkGA1UEBhMCVVMxGzAZBgNVBAoMEkdvb2dsZSBDbG91ZCBCdWlsZDEq\nMCgGA1UEAwwhZ2xlLXRlc3QucHJvY3Rvci1zdGFnaW5nLXRlc3QuY29tMB4XDTIy\nMDcyNTE3Mzg0MFoXDTIzMDcyNTE3Mzg0MFowVjELMAkGA1UEBhMCVVMxGzAZBgNV\nBAoMEkdvb2dsZSBDbG91ZCBCdWlsZDEqMCgGA1UEAwwhZ2xlLXRlc3QucHJvY3Rv\nci1zdGFnaW5nLXRlc3QuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC\nAQEAr7H0J4nZBL0ed3duVDbOdlnqJuLHZVBWIOp0DBVWPzdx+4eDCi86czxzXmVG\nuZXSpvg3az4QHGWs2HwlBCDk6tp2QT6F1gR6TE8S2yp+04BDhtg1DUopWY+f+Xi7\ni1tXQG7OTDByez3V6MR0t0bVv/LOJlvOngWbJ32qZqfbj5W8MACR/3u7KBjGs/bm\nrbDMga3YOOIa+DVLdLCwzc7kFlM9W7sezkUM/FhhellaxLu4i5O86sywJYMEo7VG\nj3FUS3XiDyKW68xOpE4svW7LiZEAnnLSsPdELO2bzhR/md84Jjvm99i6yP0StrMB\n+X2EwPYmTLMktdJyMUn/vhFYzQIDAQABozAwLjAsBgNVHREEJTAjgiFnbGUtdGVz\ndC5wcm9jdG9yLXN0YWdpbmctdGVzdC5jb20wDQYJKoZIhvcNAQELBQADggEBAJ+6\nH7WI9+hqrT4zpyc/CpH6VuviYezo1qd4/6M496dKlrHd11+xAXkBRZ4FFyoDFMgz\nO7YihNTBuONwiv21YN3OV9xoTExGx/IIkHNaueL2ZPkbVcJWQEWtEITp9Mo0qDIj\nkKjEQ5A+I4T4CiQ/OAhqtN8gR8ZUKGRJw+s2sE+yCIvRfoeJ4YU7NfUL1vSXxKfy\nHz3awR7t5qnCsvcShZtmiZ4xsc6o/tKqL5nAwNk1M6rPMY/+/PY70juLf1GNNDoZ\nA2Co+g6uI/FwAFAO5ZYKRLlstgNcPXerNdxXhpRZKMxGj8WfQ3z0Eu4cGtTUmDz5\npTam4bqToj22/MN2IhA=\n-----END CERTIFICATE-----\n" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -683,9 +697,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GlePrivUpdateConnection(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -700,7 +713,8 @@ resource "google_cloudbuildv2_connection" "primary" { host_uri = "https://gle-us-central1.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -710,9 +724,8 @@ resource "google_cloudbuildv2_connection" "primary" { func testAccCloudbuildv2Connection_GlePrivUpdateConnectionUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_cloudbuildv2_connection" "primary" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" 
+ name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -733,7 +746,8 @@ resource "google_cloudbuildv2_connection" "primary" { ssl_ca = "-----BEGIN CERTIFICATE-----\nMIIDajCCAlKgAwIBAgIUedXFQAw0eUDTe6gmPKVyRvBlDi8wDQYJKoZIhvcNAQEL\nBQAwVjELMAkGA1UEBhMCVVMxGzAZBgNVBAoMEkdvb2dsZSBDbG91ZCBCdWlsZDEq\nMCgGA1UEAwwhZ2xlLXRlc3QucHJvY3Rvci1zdGFnaW5nLXRlc3QuY29tMB4XDTIy\nMDcyNTE3Mzg0MFoXDTIzMDcyNTE3Mzg0MFowVjELMAkGA1UEBhMCVVMxGzAZBgNV\nBAoMEkdvb2dsZSBDbG91ZCBCdWlsZDEqMCgGA1UEAwwhZ2xlLXRlc3QucHJvY3Rv\nci1zdGFnaW5nLXRlc3QuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC\nAQEAr7H0J4nZBL0ed3duVDbOdlnqJuLHZVBWIOp0DBVWPzdx+4eDCi86czxzXmVG\nuZXSpvg3az4QHGWs2HwlBCDk6tp2QT6F1gR6TE8S2yp+04BDhtg1DUopWY+f+Xi7\ni1tXQG7OTDByez3V6MR0t0bVv/LOJlvOngWbJ32qZqfbj5W8MACR/3u7KBjGs/bm\nrbDMga3YOOIa+DVLdLCwzc7kFlM9W7sezkUM/FhhellaxLu4i5O86sywJYMEo7VG\nj3FUS3XiDyKW68xOpE4svW7LiZEAnnLSsPdELO2bzhR/md84Jjvm99i6yP0StrMB\n+X2EwPYmTLMktdJyMUn/vhFYzQIDAQABozAwLjAsBgNVHREEJTAjgiFnbGUtdGVz\ndC5wcm9jdG9yLXN0YWdpbmctdGVzdC5jb20wDQYJKoZIhvcNAQELBQADggEBAJ+6\nH7WI9+hqrT4zpyc/CpH6VuviYezo1qd4/6M496dKlrHd11+xAXkBRZ4FFyoDFMgz\nO7YihNTBuONwiv21YN3OV9xoTExGx/IIkHNaueL2ZPkbVcJWQEWtEITp9Mo0qDIj\nkKjEQ5A+I4T4CiQ/OAhqtN8gR8ZUKGRJw+s2sE+yCIvRfoeJ4YU7NfUL1vSXxKfy\nHz3awR7t5qnCsvcShZtmiZ4xsc6o/tKqL5nAwNk1M6rPMY/+/PY70juLf1GNNDoZ\nA2Co+g6uI/FwAFAO5ZYKRLlstgNcPXerNdxXhpRZKMxGj8WfQ3z0Eu4cGtTUmDz5\npTam4bqToj22/MN2IhA=\n-----END CERTIFICATE-----\n" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } diff --git a/google/services/cloudbuildv2/resource_cloudbuildv2_repository.go b/google/services/cloudbuildv2/resource_cloudbuildv2_repository.go index 667dbda9985..02435821ac3 100644 --- a/google/services/cloudbuildv2/resource_cloudbuildv2_repository.go +++ b/google/services/cloudbuildv2/resource_cloudbuildv2_repository.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,10 @@ func ResourceCloudbuildv2Repository() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "name": { @@ -72,12 +77,11 @@ func ResourceCloudbuildv2Repository() *schema.Resource { Description: "Required. Git Clone HTTPS URI.", }, - "annotations": { + "effective_annotations": { Type: schema.TypeMap, - Optional: true, + Computed: true, ForceNew: true, - Description: "Allows clients to store small amounts of arbitrary data.", - Elem: &schema.Schema{Type: schema.TypeString}, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", }, "location": { @@ -97,6 +101,14 @@ func ResourceCloudbuildv2Repository() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: "Allows clients to store small amounts of arbitrary data.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -129,7 +141,7 @@ func resourceCloudbuildv2RepositoryCreate(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), Connection: dcl.String(d.Get("parent_connection").(string)), RemoteUri: dcl.String(d.Get("remote_uri").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Location: dcl.StringOrNil(d.Get("location").(string)), Project: dcl.String(project), } @@ -182,7 +194,7 @@ func resourceCloudbuildv2RepositoryRead(d *schema.ResourceData, meta interface{} Name: dcl.String(d.Get("name").(string)), Connection: dcl.String(d.Get("parent_connection").(string)), RemoteUri: dcl.String(d.Get("remote_uri").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Location: dcl.StringOrNil(d.Get("location").(string)), Project: dcl.String(project), } @@ -218,8 +230,8 @@ func resourceCloudbuildv2RepositoryRead(d *schema.ResourceData, meta interface{} if err = d.Set("remote_uri", res.RemoteUri); err != nil { return fmt.Errorf("error setting remote_uri in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) } if err = d.Set("location", res.Location); err != nil { return fmt.Errorf("error setting location in state: %s", err) @@ -227,6 +239,9 @@ func resourceCloudbuildv2RepositoryRead(d *schema.ResourceData, meta interface{} if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenCloudbuildv2RepositoryAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -251,7 +266,7 @@ func resourceCloudbuildv2RepositoryDelete(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), Connection: dcl.String(d.Get("parent_connection").(string)), RemoteUri: dcl.String(d.Get("remote_uri").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Location: dcl.StringOrNil(d.Get("location").(string)), Project: dcl.String(project), } @@ -301,3 +316,18 @@ func resourceCloudbuildv2RepositoryImport(d *schema.ResourceData, meta interface return []*schema.ResourceData{d}, nil } + +func flattenCloudbuildv2RepositoryAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/cloudbuildv2/resource_cloudbuildv2_repository_generated_test.go b/google/services/cloudbuildv2/resource_cloudbuildv2_repository_generated_test.go index a736951d30d..c669301b099 100644 --- 
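// Illustrative sketch, not part of the generated change: customdiff.All chains
// several schema.CustomizeDiffFunc values and runs them in order at plan time,
// which is how tpgresource.DefaultProviderProject (fill the provider-level
// default project into the plan) and tpgresource.SetAnnotationsDiff (derive
// effective_annotations) are both attached to the resources above. The two
// funcs here are empty stand-ins for those helpers.
package customdiffsketch

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func defaultProjectStandIn(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	// A real implementation would set "project" from provider configuration
	// when the resource block leaves it unset.
	return nil
}

func annotationsDiffStandIn(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	// A real implementation would compute "effective_annotations" here.
	return nil
}

func exampleResource() *schema.Resource {
	return &schema.Resource{
		CustomizeDiff: customdiff.All(
			defaultProjectStandIn,
			annotationsDiffStandIn,
		),
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}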
a/google/services/cloudbuildv2/resource_cloudbuildv2_repository_generated_test.go +++ b/google/services/cloudbuildv2/resource_cloudbuildv2_repository_generated_test.go @@ -51,9 +51,10 @@ func TestAccCloudbuildv2Repository_GheRepository(t *testing.T) { Config: testAccCloudbuildv2Repository_GheRepository(context), }, { - ResourceName: "google_cloudbuildv2_repository.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_repository.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -76,9 +77,10 @@ func TestAccCloudbuildv2Repository_GithubRepository(t *testing.T) { Config: testAccCloudbuildv2Repository_GithubRepository(context), }, { - ResourceName: "google_cloudbuildv2_repository.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_repository.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -100,9 +102,10 @@ func TestAccCloudbuildv2Repository_GitlabRepository(t *testing.T) { Config: testAccCloudbuildv2Repository_GitlabRepository(context), }, { - ResourceName: "google_cloudbuildv2_repository.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_repository.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -124,9 +127,10 @@ func TestAccCloudbuildv2Repository_GleRepository(t *testing.T) { Config: testAccCloudbuildv2Repository_GleRepository(context), }, { - ResourceName: "google_cloudbuildv2_repository.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_cloudbuildv2_repository.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"annotations"}, }, }, }) @@ -138,19 +142,17 @@ resource "google_cloudbuildv2_repository" "primary" { name = "tf-test-repository%{random_suffix}" parent_connection = google_cloudbuildv2_connection.ghe_complete.name remote_uri = "https://ghe.proctor-staging-test.com/proctorteam/regional_test.git" + location = "%{region}" + project = "%{project_name}" annotations = { some-key = "some-value" } - - location = "%{region}" - project = "%{project_name}" } resource "google_cloudbuildv2_connection" "ghe_complete" { - location = "%{region}" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "%{region}" + name = "tf-test-connection%{random_suffix}" github_enterprise_config { host_uri = "https://ghe.proctor-staging-test.com" @@ -161,7 +163,8 @@ resource "google_cloudbuildv2_connection" "ghe_complete" { webhook_secret_secret_version = "projects/gcb-terraform-creds/secrets/ghe-webhook-secret/versions/latest" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -174,21 +177,14 @@ resource "google_cloudbuildv2_repository" "primary" { name = "tf-test-repository%{random_suffix}" parent_connection = google_cloudbuildv2_connection.github_update.name remote_uri = "https://github.com/gcb-repos-robot/tf-demo.git" - annotations = {} location = "%{region}" project = "%{project_name}" + annotations = {} } resource "google_cloudbuildv2_connection" "github_update" { location = "%{region}" name = "tf-test-connection%{random_suffix}" - - annotations = { - otherkey = "othervalue" - - somekey = "somevalue" - } - disabled = false github_config { @@ -200,6 +196,12 @@ resource "google_cloudbuildv2_connection" "github_update" { } project = 
"%{project_name}" + + annotations = { + otherkey = "othervalue" + + somekey = "somevalue" + } } @@ -212,19 +214,17 @@ resource "google_cloudbuildv2_repository" "primary" { name = "tf-test-repository%{random_suffix}" parent_connection = google_cloudbuildv2_connection.gitlab.name remote_uri = "https://gitlab.com/proctor-eng-team/terraform-testing.git" + location = "us-west1" + project = "%{project_name}" annotations = { some-key = "some-value" } - - location = "us-west1" - project = "%{project_name}" } resource "google_cloudbuildv2_connection" "gitlab" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -238,7 +238,8 @@ resource "google_cloudbuildv2_connection" "gitlab" { webhook_secret_secret_version = "projects/407304063574/secrets/gle-webhook-secret/versions/latest" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } @@ -251,19 +252,17 @@ resource "google_cloudbuildv2_repository" "primary" { name = "tf-test-repository%{random_suffix}" parent_connection = google_cloudbuildv2_connection.gle.name remote_uri = "https://gle-us-central1.gcb-test.com/proctor-test/smoketest.git" + location = "us-west1" + project = "%{project_name}" annotations = { some-key = "some-value" } - - location = "us-west1" - project = "%{project_name}" } resource "google_cloudbuildv2_connection" "gle" { - location = "us-west1" - name = "tf-test-connection%{random_suffix}" - annotations = {} + location = "us-west1" + name = "tf-test-connection%{random_suffix}" gitlab_config { authorizer_credential { @@ -278,7 +277,8 @@ resource "google_cloudbuildv2_connection" "gle" { host_uri = "https://gle-us-central1.gcb-test.com" } - project = "%{project_name}" + project = "%{project_name}" + annotations = {} } diff --git a/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline.go b/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline.go index 667277ffa41..add486b19b3 100644 --- a/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline.go +++ b/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,11 @@ func ResourceClouddeployDeliveryPipeline() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -66,24 +72,22 @@ func ResourceClouddeployDeliveryPipeline() *schema.Resource { Description: "Name of the `DeliveryPipeline`. Format is [a-z][a-z0-9\\-]{0,62}.", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - Description: "User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "description": { Type: schema.TypeString, Optional: true, Description: "Description of the `DeliveryPipeline`. 
Max length is 255 characters.", }, - "labels": { + "effective_annotations": { Type: schema.TypeMap, - Optional: true, - Description: "Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "project": { @@ -109,6 +113,13 @@ func ResourceClouddeployDeliveryPipeline() *schema.Resource { Description: "When suspended, no new releases or rollouts can be created, but in-progress ones will complete.", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "condition": { Type: schema.TypeList, Computed: true, @@ -128,6 +139,19 @@ func ResourceClouddeployDeliveryPipeline() *schema.Resource { Description: "This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -545,9 +569,9 @@ func resourceClouddeployDeliveryPipelineCreate(d *schema.ResourceData, meta inte obj := &clouddeploy.DeliveryPipeline{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), SerialPipeline: expandClouddeployDeliveryPipelineSerialPipeline(d.Get("serial_pipeline")), Suspended: dcl.Bool(d.Get("suspended").(bool)), @@ -600,9 +624,9 @@ func resourceClouddeployDeliveryPipelineRead(d *schema.ResourceData, meta interf obj := &clouddeploy.DeliveryPipeline{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), SerialPipeline: expandClouddeployDeliveryPipelineSerialPipeline(d.Get("serial_pipeline")), Suspended: dcl.Bool(d.Get("suspended").(bool)), @@ -636,14 +660,14 @@ func resourceClouddeployDeliveryPipelineRead(d *schema.ResourceData, meta interf if err = d.Set("name", res.Name); err != nil { return fmt.Errorf("error setting name in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) @@ -654,6 +678,9 @@ func resourceClouddeployDeliveryPipelineRead(d *schema.ResourceData, meta interf if err = d.Set("suspended", res.Suspended); err != nil { return fmt.Errorf("error setting suspended in state: %s", err) } + if err = d.Set("annotations", flattenClouddeployDeliveryPipelineAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("condition", flattenClouddeployDeliveryPipelineCondition(res.Condition)); err != nil { return fmt.Errorf("error setting condition in state: %s", err) } @@ -663,6 +690,12 @@ func resourceClouddeployDeliveryPipelineRead(d *schema.ResourceData, meta interf if err = d.Set("etag", res.Etag); err != nil { return 
fmt.Errorf("error setting etag in state: %s", err) } + if err = d.Set("labels", flattenClouddeployDeliveryPipelineLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } + if err = d.Set("terraform_labels", flattenClouddeployDeliveryPipelineTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -682,9 +715,9 @@ func resourceClouddeployDeliveryPipelineUpdate(d *schema.ResourceData, meta inte obj := &clouddeploy.DeliveryPipeline{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), SerialPipeline: expandClouddeployDeliveryPipelineSerialPipeline(d.Get("serial_pipeline")), Suspended: dcl.Bool(d.Get("suspended").(bool)), @@ -732,9 +765,9 @@ func resourceClouddeployDeliveryPipelineDelete(d *schema.ResourceData, meta inte obj := &clouddeploy.DeliveryPipeline{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), SerialPipeline: expandClouddeployDeliveryPipelineSerialPipeline(d.Get("serial_pipeline")), Suspended: dcl.Bool(d.Get("suspended").(bool)), @@ -1326,3 +1359,48 @@ func flattenClouddeployDeliveryPipelineConditionTargetsTypeCondition(obj *cloudd return []interface{}{transformed} } + +func flattenClouddeployDeliveryPipelineLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenClouddeployDeliveryPipelineTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenClouddeployDeliveryPipelineAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline_generated_test.go b/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline_generated_test.go index db2096f5aa7..68bf6cb492e 100644 --- a/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline_generated_test.go +++ b/google/services/clouddeploy/resource_clouddeploy_delivery_pipeline_generated_test.go @@ -51,17 +51,19 @@ func 
TestAccClouddeployDeliveryPipeline_DeliveryPipeline(t *testing.T) { Config: testAccClouddeployDeliveryPipeline_DeliveryPipeline(context), }, { - ResourceName: "google_clouddeploy_delivery_pipeline.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_delivery_pipeline.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, { Config: testAccClouddeployDeliveryPipeline_DeliveryPipelineUpdate0(context), }, { - ResourceName: "google_clouddeploy_delivery_pipeline.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_delivery_pipeline.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, }, }) @@ -70,24 +72,10 @@ func TestAccClouddeployDeliveryPipeline_DeliveryPipeline(t *testing.T) { func testAccClouddeployDeliveryPipeline_DeliveryPipeline(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "%{region}" - name = "tf-test-pipeline%{random_suffix}" - - annotations = { - my_first_annotation = "example-annotation-1" - - my_second_annotation = "example-annotation-2" - } - + location = "%{region}" + name = "tf-test-pipeline%{random_suffix}" description = "basic description" - - labels = { - my_first_label = "example-label-1" - - my_second_label = "example-label-2" - } - - project = "%{project_name}" + project = "%{project_name}" serial_pipeline { stages { @@ -108,6 +96,18 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + + my_second_label = "example-label-2" + } } @@ -117,24 +117,10 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { func testAccClouddeployDeliveryPipeline_DeliveryPipelineUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "%{region}" - name = "tf-test-pipeline%{random_suffix}" - - annotations = { - my_second_annotation = "updated-example-annotation-2" - - my_third_annotation = "example-annotation-3" - } - + location = "%{region}" + name = "tf-test-pipeline%{random_suffix}" description = "updated description" - - labels = { - my_second_label = "updated-example-label-2" - - my_third_label = "example-label-3" - } - - project = "%{project_name}" + project = "%{project_name}" serial_pipeline { stages { @@ -149,6 +135,18 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { } suspended = true + + annotations = { + my_second_annotation = "updated-example-annotation-2" + + my_third_annotation = "example-annotation-3" + } + + labels = { + my_second_label = "updated-example-label-2" + + my_third_label = "example-label-3" + } } diff --git a/google/services/clouddeploy/resource_clouddeploy_target.go b/google/services/clouddeploy/resource_clouddeploy_target.go index d9fdf599a13..2c215d35418 100644 --- a/google/services/clouddeploy/resource_clouddeploy_target.go +++ b/google/services/clouddeploy/resource_clouddeploy_target.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl 
"github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,11 @@ func ResourceClouddeployTarget() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -66,13 +72,6 @@ func ResourceClouddeployTarget() *schema.Resource { Description: "Name of the `Target`. Format is [a-z][a-z0-9\\-]{0,62}.", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "anthos_cluster": { Type: schema.TypeList, Optional: true, @@ -95,6 +94,18 @@ func ResourceClouddeployTarget() *schema.Resource { Description: "Optional. Description of the `Target`. Max length is 255 characters.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", + }, + "execution_configs": { Type: schema.TypeList, Computed: true, @@ -112,13 +123,6 @@ func ResourceClouddeployTarget() *schema.Resource { ConflictsWith: []string{"anthos_cluster", "run", "multi_target"}, }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "multi_target": { Type: schema.TypeList, Optional: true, @@ -152,6 +156,13 @@ func ResourceClouddeployTarget() *schema.Resource { ConflictsWith: []string{"gke", "anthos_cluster", "multi_target"}, }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -164,12 +175,25 @@ func ResourceClouddeployTarget() *schema.Resource { Description: "Optional. 
This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "target_id": { Type: schema.TypeString, Computed: true, Description: "Output only. Resource id of the `Target`.", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -293,13 +317,13 @@ func resourceClouddeployTargetCreate(d *schema.ResourceData, meta interface{}) e obj := &clouddeploy.Target{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AnthosCluster: expandClouddeployTargetAnthosCluster(d.Get("anthos_cluster")), DeployParameters: tpgresource.CheckStringMap(d.Get("deploy_parameters")), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), ExecutionConfigs: expandClouddeployTargetExecutionConfigsArray(d.Get("execution_configs")), Gke: expandClouddeployTargetGke(d.Get("gke")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), MultiTarget: expandClouddeployTargetMultiTarget(d.Get("multi_target")), Project: dcl.String(project), RequireApproval: dcl.Bool(d.Get("require_approval").(bool)), @@ -353,13 +377,13 @@ func resourceClouddeployTargetRead(d *schema.ResourceData, meta interface{}) err obj := &clouddeploy.Target{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AnthosCluster: expandClouddeployTargetAnthosCluster(d.Get("anthos_cluster")), DeployParameters: tpgresource.CheckStringMap(d.Get("deploy_parameters")), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), ExecutionConfigs: expandClouddeployTargetExecutionConfigsArray(d.Get("execution_configs")), Gke: expandClouddeployTargetGke(d.Get("gke")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), MultiTarget: expandClouddeployTargetMultiTarget(d.Get("multi_target")), Project: dcl.String(project), RequireApproval: dcl.Bool(d.Get("require_approval").(bool)), @@ -394,9 +418,6 @@ func resourceClouddeployTargetRead(d *schema.ResourceData, meta interface{}) err if err = d.Set("name", res.Name); err != nil { return 
fmt.Errorf("error setting name in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("anthos_cluster", flattenClouddeployTargetAnthosCluster(res.AnthosCluster)); err != nil { return fmt.Errorf("error setting anthos_cluster in state: %s", err) } @@ -406,15 +427,18 @@ func resourceClouddeployTargetRead(d *schema.ResourceData, meta interface{}) err if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) + } if err = d.Set("execution_configs", flattenClouddeployTargetExecutionConfigsArray(res.ExecutionConfigs)); err != nil { return fmt.Errorf("error setting execution_configs in state: %s", err) } if err = d.Set("gke", flattenClouddeployTargetGke(res.Gke)); err != nil { return fmt.Errorf("error setting gke in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) - } if err = d.Set("multi_target", flattenClouddeployTargetMultiTarget(res.MultiTarget)); err != nil { return fmt.Errorf("error setting multi_target in state: %s", err) } @@ -427,15 +451,24 @@ func resourceClouddeployTargetRead(d *schema.ResourceData, meta interface{}) err if err = d.Set("run", flattenClouddeployTargetRun(res.Run)); err != nil { return fmt.Errorf("error setting run in state: %s", err) } + if err = d.Set("annotations", flattenClouddeployTargetAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } if err = d.Set("etag", res.Etag); err != nil { return fmt.Errorf("error setting etag in state: %s", err) } + if err = d.Set("labels", flattenClouddeployTargetLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("target_id", res.TargetId); err != nil { return fmt.Errorf("error setting target_id in state: %s", err) } + if err = d.Set("terraform_labels", flattenClouddeployTargetTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -455,13 +488,13 @@ func resourceClouddeployTargetUpdate(d *schema.ResourceData, meta interface{}) e obj := &clouddeploy.Target{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AnthosCluster: expandClouddeployTargetAnthosCluster(d.Get("anthos_cluster")), DeployParameters: tpgresource.CheckStringMap(d.Get("deploy_parameters")), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), ExecutionConfigs: expandClouddeployTargetExecutionConfigsArray(d.Get("execution_configs")), Gke: expandClouddeployTargetGke(d.Get("gke")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), MultiTarget: 
expandClouddeployTargetMultiTarget(d.Get("multi_target")), Project: dcl.String(project), RequireApproval: dcl.Bool(d.Get("require_approval").(bool)), @@ -510,13 +543,13 @@ func resourceClouddeployTargetDelete(d *schema.ResourceData, meta interface{}) e obj := &clouddeploy.Target{ Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AnthosCluster: expandClouddeployTargetAnthosCluster(d.Get("anthos_cluster")), DeployParameters: tpgresource.CheckStringMap(d.Get("deploy_parameters")), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), ExecutionConfigs: expandClouddeployTargetExecutionConfigsArray(d.Get("execution_configs")), Gke: expandClouddeployTargetGke(d.Get("gke")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), MultiTarget: expandClouddeployTargetMultiTarget(d.Get("multi_target")), Project: dcl.String(project), RequireApproval: dcl.Bool(d.Get("require_approval").(bool)), @@ -737,6 +770,52 @@ func flattenClouddeployTargetRun(obj *clouddeploy.TargetRun) interface{} { return []interface{}{transformed} } + +func flattenClouddeployTargetLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenClouddeployTargetTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenClouddeployTargetAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + func flattenClouddeployTargetExecutionConfigsUsagesArray(obj []clouddeploy.TargetExecutionConfigsUsagesEnum) interface{} { if obj == nil { return nil diff --git a/google/services/clouddeploy/resource_clouddeploy_target_generated_test.go b/google/services/clouddeploy/resource_clouddeploy_target_generated_test.go index 795efda5226..d40a08a14c6 100644 --- a/google/services/clouddeploy/resource_clouddeploy_target_generated_test.go +++ b/google/services/clouddeploy/resource_clouddeploy_target_generated_test.go @@ -51,41 +51,46 @@ func TestAccClouddeployTarget_Target(t *testing.T) { Config: testAccClouddeployTarget_Target(context), }, { - ResourceName: "google_clouddeploy_target.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, { Config: testAccClouddeployTarget_TargetUpdate0(context), }, { - ResourceName: "google_clouddeploy_target.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, { 
Config: testAccClouddeployTarget_TargetUpdate1(context), }, { - ResourceName: "google_clouddeploy_target.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, { Config: testAccClouddeployTarget_TargetUpdate2(context), }, { - ResourceName: "google_clouddeploy_target.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, { Config: testAccClouddeployTarget_TargetUpdate3(context), }, { - ResourceName: "google_clouddeploy_target.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, }, }, }) @@ -97,12 +102,6 @@ resource "google_clouddeploy_target" "primary" { location = "%{region}" name = "tf-test-target%{random_suffix}" - annotations = { - my_first_annotation = "example-annotation-1" - - my_second_annotation = "example-annotation-2" - } - deploy_parameters = { deployParameterKey = "deployParameterValue" } @@ -113,14 +112,20 @@ resource "google_clouddeploy_target" "primary" { cluster = "projects/%{project_name}/locations/%{region}/clusters/example-cluster-name" } + project = "%{project_name}" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } - - project = "%{project_name}" - require_approval = false } @@ -130,15 +135,8 @@ resource "google_clouddeploy_target" "primary" { func testAccClouddeployTarget_TargetUpdate0(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_target" "primary" { - location = "%{region}" - name = "tf-test-target%{random_suffix}" - - annotations = { - my_second_annotation = "updated-example-annotation-2" - - my_third_annotation = "example-annotation-3" - } - + location = "%{region}" + name = "tf-test-target%{random_suffix}" deploy_parameters = {} description = "updated description" @@ -147,14 +145,20 @@ resource "google_clouddeploy_target" "primary" { internal_ip = true } + project = "%{project_name}" + require_approval = true + + annotations = { + my_second_annotation = "updated-example-annotation-2" + + my_third_annotation = "example-annotation-3" + } + labels = { my_second_label = "updated-example-label-2" my_third_label = "example-label-3" } - - project = "%{project_name}" - require_approval = true } @@ -164,15 +168,8 @@ resource "google_clouddeploy_target" "primary" { func testAccClouddeployTarget_TargetUpdate1(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_target" "primary" { - location = "%{region}" - name = "tf-test-target%{random_suffix}" - - annotations = { - my_second_annotation = "updated-example-annotation-2" - - my_third_annotation = "example-annotation-3" - } - + location = "%{region}" + name = "tf-test-target%{random_suffix}" deploy_parameters = {} description = "updated description" @@ -187,14 +184,20 @@ resource "google_clouddeploy_target" "primary" { internal_ip = true } + project = "%{project_name}" + require_approval = true + + 
annotations = { + my_second_annotation = "updated-example-annotation-2" + + my_third_annotation = "example-annotation-3" + } + labels = { my_second_label = "updated-example-label-2" my_third_label = "example-label-3" } - - project = "%{project_name}" - require_approval = true } @@ -204,15 +207,8 @@ resource "google_clouddeploy_target" "primary" { func testAccClouddeployTarget_TargetUpdate2(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_target" "primary" { - location = "%{region}" - name = "tf-test-target%{random_suffix}" - - annotations = { - my_second_annotation = "updated-example-annotation-2" - - my_third_annotation = "example-annotation-3" - } - + location = "%{region}" + name = "tf-test-target%{random_suffix}" deploy_parameters = {} description = "updated description" @@ -234,14 +230,20 @@ resource "google_clouddeploy_target" "primary" { internal_ip = true } + project = "%{project_name}" + require_approval = true + + annotations = { + my_second_annotation = "updated-example-annotation-2" + + my_third_annotation = "example-annotation-3" + } + labels = { my_second_label = "updated-example-label-2" my_third_label = "example-label-3" } - - project = "%{project_name}" - require_approval = true } @@ -251,15 +253,8 @@ resource "google_clouddeploy_target" "primary" { func testAccClouddeployTarget_TargetUpdate3(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_clouddeploy_target" "primary" { - location = "%{region}" - name = "tf-test-target%{random_suffix}" - - annotations = { - my_second_annotation = "updated-example-annotation-2" - - my_third_annotation = "example-annotation-3" - } - + location = "%{region}" + name = "tf-test-target%{random_suffix}" deploy_parameters = {} description = "updated description" @@ -281,14 +276,20 @@ resource "google_clouddeploy_target" "primary" { internal_ip = true } + project = "%{project_name}" + require_approval = true + + annotations = { + my_second_annotation = "updated-example-annotation-2" + + my_third_annotation = "example-annotation-3" + } + labels = { my_second_label = "updated-example-label-2" my_third_label = "example-label-3" } - - project = "%{project_name}" - require_approval = true } diff --git a/google/services/clouddeploy/resource_clouddeploy_target_test.go b/google/services/clouddeploy/resource_clouddeploy_target_test.go new file mode 100644 index 00000000000..5ff0ab66379 --- /dev/null +++ b/google/services/clouddeploy/resource_clouddeploy_target_test.go @@ -0,0 +1,280 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package clouddeploy_test + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + "github.com/hashicorp/terraform-provider-google/google/acctest" + "github.com/hashicorp/terraform-provider-google/google/envvar" +) + +func TestAccClouddeployTarget_withProviderDefaultLabels(t *testing.T) { + // The test failed if VCR testing is enabled, because the cached provider config is used. + // Any changes in the provider default labels will not be applied. 
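The new test introduced here, `TestAccClouddeployTarget_withProviderDefaultLabels`, exercises how resource-level `labels` layer over provider `default_labels` into `terraform_labels` and `effective_labels`. As a quick illustration of the precedence the `TestCheckResourceAttr` assertions in the following steps encode, here is a minimal, hedged sketch using a hypothetical `mergeLabels` helper; it is not the provider's `tpgresource.SetLabelsDiff` implementation, only a stand-in for the merge semantics the test expects.

```go
// Illustrative sketch only: mergeLabels is a hypothetical helper, not the
// provider's SetLabelsDiff logic. It shows the precedence asserted below:
// resource labels win over provider default_labels with the same key, and
// the merged result is what terraform_labels reflects.
package main

import "fmt"

func mergeLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range resourceLabels {
		merged[k] = v // a resource-level value overrides a provider default with the same key
	}
	return merged
}

func main() {
	defaults := map[string]string{"default_key1": "default_value1"}
	resourceLabels := map[string]string{
		"my_first_label":  "example-label-1",
		"my_second_label": "example-label-2",
	}
	// Three entries: both resource labels plus default_key1, mirroring the
	// terraform_labels.% == "3" assertions in the test steps that follow.
	fmt.Println(mergeLabels(defaults, resourceLabels))
}
```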
+ acctest.SkipIfVcr(t) + t.Parallel() + + context := map[string]interface{}{ + "project_name": envvar.GetTestProjectFromEnv(), + "region": envvar.GetTestRegionFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckClouddeployTargetDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccClouddeployTarget_withProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.%", "2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_second_label", "example-label-2"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.default_key1", "default_value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, + }, + { + Config: testAccClouddeployTarget_resourceLabelsOverridesProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.%", "3"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, + }, + { + Config: testAccClouddeployTarget_moveResourceLabelToProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.%", "2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.%", "3"), + 
resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, + }, + { + Config: testAccClouddeployTarget_resourceLabelsOverridesProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.%", "3"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_first_label", "example-label-1"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.my_second_label", "example-label-2"), + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "terraform_labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_clouddeploy_target.primary", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, + }, + { + Config: testAccClouddeployTarget_withoutLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_clouddeploy_target.primary", "labels.%"), + resource.TestCheckNoResourceAttr("google_clouddeploy_target.primary", "terraform_labels.%"), + resource.TestCheckNoResourceAttr("google_clouddeploy_target.primary", "effective_labels.%"), + ), + }, + { + ResourceName: "google_clouddeploy_target.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "annotations"}, + }, + }, + }) +} + +func testAccClouddeployTarget_withProviderDefaultLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_clouddeploy_target" "primary" { + location = "%{region}" + name = "tf-test-target%{random_suffix}" + + deploy_parameters = { + deployParameterKey = "deployParameterValue" + } + + description = "basic description" + + gke { + cluster = "projects/%{project_name}/locations/%{region}/clusters/example-cluster-name" + } + + project = "%{project_name}" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + my_second_label = "example-label-2" + } +} +`, context) +} + +func testAccClouddeployTarget_resourceLabelsOverridesProviderDefaultLabels(context map[string]interface{}) string { + return 
acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_clouddeploy_target" "primary" { + location = "%{region}" + name = "tf-test-target%{random_suffix}" + + deploy_parameters = { + deployParameterKey = "deployParameterValue" + } + + description = "basic description" + + gke { + cluster = "projects/%{project_name}/locations/%{region}/clusters/example-cluster-name" + } + + project = "%{project_name}" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + my_second_label = "example-label-2" + default_key1 = "value1" + } +} +`, context) +} + +func testAccClouddeployTarget_moveResourceLabelToProviderDefaultLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + my_second_label = "example-label-2" + } +} + +resource "google_clouddeploy_target" "primary" { + location = "%{region}" + name = "tf-test-target%{random_suffix}" + + deploy_parameters = { + deployParameterKey = "deployParameterValue" + } + + description = "basic description" + + gke { + cluster = "projects/%{project_name}/locations/%{region}/clusters/example-cluster-name" + } + + project = "%{project_name}" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + default_key1 = "value1" + } +} +`, context) +} + +func testAccClouddeployTarget_withoutLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_clouddeploy_target" "primary" { + location = "%{region}" + name = "tf-test-target%{random_suffix}" + + deploy_parameters = { + deployParameterKey = "deployParameterValue" + } + + description = "basic description" + + gke { + cluster = "projects/%{project_name}/locations/%{region}/clusters/example-cluster-name" + } + + project = "%{project_name}" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } +} +`, context) +} diff --git a/google/services/cloudfunctions/data_source_google_cloudfunctions_function.go b/google/services/cloudfunctions/data_source_google_cloudfunctions_function.go index 9414e533fb0..36299741a8d 100644 --- a/google/services/cloudfunctions/data_source_google_cloudfunctions_function.go +++ b/google/services/cloudfunctions/data_source_google_cloudfunctions_function.go @@ -3,6 +3,8 @@ package cloudfunctions import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -50,5 +52,13 @@ func dataSourceGoogleCloudFunctionsFunctionRead(d *schema.ResourceData, meta int return err } + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", cloudFuncId.CloudFunctionId()) + } + return nil } diff --git a/google/services/cloudfunctions/data_source_google_cloudfunctions_function_test.go b/google/services/cloudfunctions/data_source_google_cloudfunctions_function_test.go index 7bac05c42a7..64c22d85369 100644 --- a/google/services/cloudfunctions/data_source_google_cloudfunctions_function_test.go +++ 
b/google/services/cloudfunctions/data_source_google_cloudfunctions_function_test.go @@ -63,6 +63,9 @@ resource "google_cloudfunctions_function" "function_http" { trigger_http = true timeout = 61 entry_point = "helloGET" + labels = { + my-label = "my-label-value" + } } data "google_cloudfunctions_function" "function_http" { diff --git a/google/services/cloudfunctions/resource_cloudfunctions_function.go b/google/services/cloudfunctions/resource_cloudfunctions_function.go index 4c5baa63e81..bdc85298a3c 100644 --- a/google/services/cloudfunctions/resource_cloudfunctions_function.go +++ b/google/services/cloudfunctions/resource_cloudfunctions_function.go @@ -5,6 +5,7 @@ package cloudfunctions import ( "regexp" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -143,6 +144,12 @@ func ResourceCloudFunctionsFunction() *schema.Resource { Delete: schema.DefaultTimeout(5 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -259,7 +266,24 @@ func ResourceCloudFunctionsFunction() *schema.Resource { Type: schema.TypeMap, ValidateFunc: labelKeyValidator, Optional: true, - Description: `A set of key/value label pairs to assign to the function. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements.`, + Description: `A set of key/value label pairs to assign to the function. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "runtime": { @@ -560,8 +584,8 @@ func resourceCloudFunctionsCreate(d *schema.ResourceData, meta interface{}) erro function.IngressSettings = v.(string) } - if _, ok := d.GetOk("labels"); ok { - function.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + function.Labels = tpgresource.ExpandEffectiveLabels(d) } if _, ok := d.GetOk("environment_variables"); ok { @@ -672,9 +696,15 @@ func resourceCloudFunctionsRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("ingress_settings", function.IngressSettings); err != nil { return fmt.Errorf("Error setting ingress_settings: %s", err) } - if err := d.Set("labels", function.Labels); err != nil { + if err := tpgresource.SetLabels(function.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(function.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", function.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err := d.Set("runtime", function.Runtime); err != nil { return fmt.Errorf("Error setting runtime: %s", err) } @@ -841,8 +871,8 @@ func resourceCloudFunctionsUpdate(d *schema.ResourceData, meta interface{}) erro updateMaskArr = append(updateMaskArr, "ingressSettings") } - if d.HasChange("labels") { - function.Labels = tpgresource.ExpandLabels(d) + if d.HasChange("effective_labels") { + function.Labels = tpgresource.ExpandEffectiveLabels(d) updateMaskArr = append(updateMaskArr, "labels") } diff --git a/google/services/cloudfunctions/resource_cloudfunctions_function_test.go b/google/services/cloudfunctions/resource_cloudfunctions_function_test.go index 45bb3467c26..5949bf0bc88 100644 --- a/google/services/cloudfunctions/resource_cloudfunctions_function_test.go +++ b/google/services/cloudfunctions/resource_cloudfunctions_function_test.go @@ -81,7 +81,7 @@ func TestAccCloudFunctionsFunction_basic(t *testing.T) { ResourceName: funcResourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"build_environment_variables"}, + ImportStateVerifyIgnore: []string{"build_environment_variables", "labels", "terraform_labels"}, }, }, }) @@ -118,7 +118,7 @@ func TestAccCloudFunctionsFunction_update(t *testing.T) { ResourceName: funcResourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"build_environment_variables"}, + ImportStateVerifyIgnore: []string{"build_environment_variables", "labels", "terraform_labels"}, }, { Config: testAccCloudFunctionsFunction_updated(functionName, bucketName, zipFileUpdatePath, random_suffix), @@ -151,7 +151,7 @@ func TestAccCloudFunctionsFunction_update(t *testing.T) { ResourceName: funcResourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"build_environment_variables"}, + ImportStateVerifyIgnore: []string{"build_environment_variables", "labels", "terraform_labels"}, }, }, }) @@ -367,7 +367,7 @@ func TestAccCloudFunctionsFunction_vpcConnector(t *testing.T) { ResourceName: funcResourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"build_environment_variables"}, + ImportStateVerifyIgnore: []string{"build_environment_variables", "labels", "terraform_labels"}, }, { Config: testAccCloudFunctionsFunction_vpcConnector(projectNumber, networkName, functionName, bucketName, zipFilePath, "10.20.0.0/28", vpcConnectorName+"-update"), @@ -376,7 +376,7 @@ func TestAccCloudFunctionsFunction_vpcConnector(t *testing.T) { ResourceName: funcResourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"build_environment_variables"}, + ImportStateVerifyIgnore: []string{"build_environment_variables", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function.go b/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function.go index c604aa179ce..8ceb215661b 100644 --- a/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function.go +++ b/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function.go @@ -34,12 +34,21 @@ func dataSourceGoogleCloudFunctions2FunctionRead(d *schema.ResourceData, meta in return err } - d.SetId(fmt.Sprintf("projects/%s/locations/%s/functions/%s", project, d.Get("location").(string), d.Get("name").(string))) + id := fmt.Sprintf("projects/%s/locations/%s/functions/%s", project, d.Get("location").(string), d.Get("name").(string)) + d.SetId(id) err = resourceCloudfunctions2functionRead(d, meta) if err != nil { return err } + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function_test.go b/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function_test.go index b851c22d40b..fbf009d0c86 100644 --- a/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function_test.go +++ b/google/services/cloudfunctions2/data_source_google_cloudfunctions2_function_test.go @@ -26,9 +26,11 @@ func TestAccDataSourceGoogleCloudFunctions2Function_basic(t *testing.T) { { Config: testAccDataSourceGoogleCloudFunctions2FunctionConfig(functionName, bucketName, zipFilePath), + // As the value of "labels" and "terraform_labels" in the state of the data source are all labels, + // but the "labels" field in resource are user defined labels, which is the reason for the mismatch. 
Check: resource.ComposeTestCheckFunc( acctest.CheckDataSourceStateMatchesResourceStateWithIgnores(funcDataNameHttp, - "google_cloudfunctions2_function.function_http_v2", map[string]struct{}{"build_config.0.source.0.storage_source.0.bucket": {}, "build_config.0.source.0.storage_source.0.object": {}}), + "google_cloudfunctions2_function.function_http_v2", map[string]struct{}{"build_config.0.source.0.storage_source.0.bucket": {}, "build_config.0.source.0.storage_source.0.object": {}, "labels.%": {}, "terraform_labels.%": {}}), ), }, }, @@ -37,6 +39,12 @@ func TestAccDataSourceGoogleCloudFunctions2Function_basic(t *testing.T) { func testAccDataSourceGoogleCloudFunctions2FunctionConfig(functionName, bucketName, zipFilePath string) string { return fmt.Sprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + resource "google_storage_bucket" "bucket" { name = "%s" location = "US" @@ -52,7 +60,9 @@ resource "google_cloudfunctions2_function" "function_http_v2" { name = "%s" location = "us-central1" description = "a new function" - + labels = { + env = "test" + } build_config { runtime = "nodejs12" entry_point = "helloHttp" diff --git a/google/services/cloudfunctions2/resource_cloudfunctions2_function.go b/google/services/cloudfunctions2/resource_cloudfunctions2_function.go index 064f081e87a..567fcbc6607 100644 --- a/google/services/cloudfunctions2/resource_cloudfunctions2_function.go +++ b/google/services/cloudfunctions2/resource_cloudfunctions2_function.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,7 +49,18 @@ func ResourceCloudfunctions2function() *schema.Resource { Delete: schema.DefaultTimeout(60 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The location of this cloud function.`, + }, "name": { Type: schema.TypeString, Required: true, @@ -263,16 +275,14 @@ region. If not provided, defaults to the same region as the function.`, It must match the pattern projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs associated with this Cloud Function.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "location": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The location of this cloud function.`, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs associated with this Cloud Function. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "service_config": { Type: schema.TypeList, @@ -450,6 +460,12 @@ timeout period. 
Defaults to 60 seconds.`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "environment": { Type: schema.TypeString, Computed: true, @@ -460,6 +476,13 @@ timeout period. Defaults to 60 seconds.`, Computed: true, Description: `Describes the current state of the function.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -548,18 +571,18 @@ func resourceCloudfunctions2functionCreate(d *schema.ResourceData, meta interfac } else if v, ok := d.GetOkExists("event_trigger"); !tpgresource.IsEmptyValue(reflect.ValueOf(eventTriggerProp)) && (ok || !reflect.DeepEqual(v, eventTriggerProp)) { obj["eventTrigger"] = eventTriggerProp } - labelsProp, err := expandCloudfunctions2functionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } kmsKeyNameProp, err := expandCloudfunctions2functionKmsKeyName(d.Get("kms_key_name"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("kms_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(kmsKeyNameProp)) && (ok || !reflect.DeepEqual(v, kmsKeyNameProp)) { obj["kmsKeyName"] = kmsKeyNameProp } + labelsProp, err := expandCloudfunctions2functionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{Cloudfunctions2BasePath}}projects/{{project}}/locations/{{location}}/functions?functionId={{name}}") if err != nil { @@ -699,6 +722,12 @@ func resourceCloudfunctions2functionRead(d *schema.ResourceData, meta interface{ if err := d.Set("kms_key_name", flattenCloudfunctions2functionKmsKeyName(res["kmsKeyName"], d, config)); err != nil { return fmt.Errorf("Error reading function: %s", err) } + if err := d.Set("terraform_labels", flattenCloudfunctions2functionTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading function: %s", err) + } + if err := d.Set("effective_labels", flattenCloudfunctions2functionEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading function: %s", err) + } return nil } @@ -743,18 +772,18 @@ func resourceCloudfunctions2functionUpdate(d *schema.ResourceData, meta interfac } else if v, ok := d.GetOkExists("event_trigger"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, eventTriggerProp)) { obj["eventTrigger"] = eventTriggerProp } - labelsProp, err := expandCloudfunctions2functionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } kmsKeyNameProp, err := 
expandCloudfunctions2functionKmsKeyName(d.Get("kms_key_name"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("kms_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, kmsKeyNameProp)) { obj["kmsKeyName"] = kmsKeyNameProp } + labelsProp, err := expandCloudfunctions2functionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{Cloudfunctions2BasePath}}projects/{{project}}/locations/{{location}}/functions/{{name}}") if err != nil { @@ -780,13 +809,13 @@ func resourceCloudfunctions2functionUpdate(d *schema.ResourceData, meta interfac updateMask = append(updateMask, "eventTrigger") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("kms_key_name") { updateMask = append(updateMask, "kmsKeyName") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -882,9 +911,9 @@ func resourceCloudfunctions2functionDelete(d *schema.ResourceData, meta interfac func resourceCloudfunctions2functionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/functions/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/functions/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1448,13 +1477,43 @@ func flattenCloudfunctions2functionUpdateTime(v interface{}, d *schema.ResourceD } func flattenCloudfunctions2functionLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudfunctions2functionKmsKeyName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } +func flattenCloudfunctions2functionTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCloudfunctions2functionEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandCloudfunctions2functionName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/functions/{{name}}") } @@ -2198,7 +2257,11 @@ func expandCloudfunctions2functionEventTriggerRetryPolicy(v interface{}, d tpgre return v, nil } -func 
expandCloudfunctions2functionLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandCloudfunctions2functionKmsKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandCloudfunctions2functionEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -2208,7 +2271,3 @@ func expandCloudfunctions2functionLabels(v interface{}, d tpgresource.TerraformR } return m, nil } - -func expandCloudfunctions2functionKmsKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/cloudfunctions2/resource_cloudfunctions2_function_generated_test.go b/google/services/cloudfunctions2/resource_cloudfunctions2_function_generated_test.go index d61ed091b0c..c0a426ff3d0 100644 --- a/google/services/cloudfunctions2/resource_cloudfunctions2_function_generated_test.go +++ b/google/services/cloudfunctions2/resource_cloudfunctions2_function_generated_test.go @@ -53,7 +53,7 @@ func TestAccCloudfunctions2function_cloudfunctions2BasicExample(t *testing.T) { ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -129,7 +129,7 @@ func TestAccCloudfunctions2function_cloudfunctions2FullExample(t *testing.T) { ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -229,7 +229,7 @@ func TestAccCloudfunctions2function_cloudfunctions2BasicGcsExample(t *testing.T) ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -366,7 +366,7 @@ func TestAccCloudfunctions2function_cloudfunctions2BasicAuditlogsExample(t *test ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -508,7 +508,7 @@ func TestAccCloudfunctions2function_cloudfunctions2SecretEnvExample(t *testing.T ResourceName: "google_cloudfunctions2_function.function", 
ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -607,7 +607,7 @@ func TestAccCloudfunctions2function_cloudfunctions2SecretVolumeExample(t *testin ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -704,7 +704,7 @@ func TestAccCloudfunctions2function_cloudfunctions2PrivateWorkerpoolExample(t *t ResourceName: "google_cloudfunctions2_function.function", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/cloudfunctions2/resource_cloudfunctions2_function_test.go b/google/services/cloudfunctions2/resource_cloudfunctions2_function_test.go index e85e419cf5f..eb5e31432cf 100644 --- a/google/services/cloudfunctions2/resource_cloudfunctions2_function_test.go +++ b/google/services/cloudfunctions2/resource_cloudfunctions2_function_test.go @@ -30,7 +30,7 @@ func TestAccCloudFunctions2Function_update(t *testing.T) { ResourceName: "google_cloudfunctions2_function.terraform-test2", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, { Config: testAccCloudFunctions2Function_test_update(context), @@ -39,7 +39,7 @@ func TestAccCloudFunctions2Function_update(t *testing.T) { ResourceName: "google_cloudfunctions2_function.terraform-test2", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, { Config: testAccCloudFunctions2Function_test_redeploy(context), @@ -48,7 +48,7 @@ func TestAccCloudFunctions2Function_update(t *testing.T) { ResourceName: "google_cloudfunctions2_function.terraform-test2", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket"}, + ImportStateVerifyIgnore: []string{"location", "build_config.0.source.0.storage_source.0.object", "build_config.0.source.0.storage_source.0.bucket", "labels", "terraform_labels"}, }, }, }) @@ -72,6 +72,9 @@ 
resource "google_cloudfunctions2_function" "terraform-test2" { name = "tf-test-test-function%{random_suffix}" location = "us-central1" description = "a new function" + labels = { + env = "test" + } build_config { runtime = "nodejs12" @@ -111,7 +114,10 @@ resource "google_cloudfunctions2_function" "terraform-test2" { name = "tf-test-test-function%{random_suffix}" location = "us-central1" description = "an updated function" - + labels = { + env = "test-update" + } + build_config { runtime = "nodejs12" entry_point = "helloHttp" diff --git a/google/services/cloudidentity/data_source_cloud_identity_group_memberships.go b/google/services/cloudidentity/data_source_cloud_identity_group_memberships.go index f48367d3c58..4a3c152ecee 100644 --- a/google/services/cloudidentity/data_source_cloud_identity_group_memberships.go +++ b/google/services/cloudidentity/data_source_cloud_identity_group_memberships.go @@ -79,7 +79,7 @@ func dataSourceGoogleCloudIdentityGroupMembershipsRead(d *schema.ResourceData, m return nil }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroupMemberships %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroupMemberships %q", d.Id()), "") } if err := d.Set("memberships", result); err != nil { diff --git a/google/services/cloudidentity/data_source_cloud_identity_groups.go b/google/services/cloudidentity/data_source_cloud_identity_groups.go index 29e6b7029ca..026bc69541d 100644 --- a/google/services/cloudidentity/data_source_cloud_identity_groups.go +++ b/google/services/cloudidentity/data_source_cloud_identity_groups.go @@ -81,7 +81,7 @@ func dataSourceGoogleCloudIdentityGroupsRead(d *schema.ResourceData, meta interf return nil }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroups %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("CloudIdentityGroups %q", d.Id()), "Groups") } if err := d.Set("groups", result); err != nil { diff --git a/google/services/cloudidentity/resource_cloud_identity_group_membership.go b/google/services/cloudidentity/resource_cloud_identity_group_membership.go index 1e7c042b221..1e4046f1fae 100644 --- a/google/services/cloudidentity/resource_cloud_identity_group_membership.go +++ b/google/services/cloudidentity/resource_cloud_identity_group_membership.go @@ -374,7 +374,7 @@ func resourceCloudIdentityGroupMembershipDelete(d *schema.ResourceData, meta int func resourceCloudIdentityGroupMembershipImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)", + "^(?P.+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/cloudids/resource_cloud_ids_endpoint.go b/google/services/cloudids/resource_cloud_ids_endpoint.go index 6b4b3a9e4be..d9435932857 100644 --- a/google/services/cloudids/resource_cloud_ids_endpoint.go +++ b/google/services/cloudids/resource_cloud_ids_endpoint.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceCloudIdsEndpoint() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: 
map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -426,9 +431,9 @@ func resourceCloudIdsEndpointDelete(d *schema.ResourceData, meta interface{}) er func resourceCloudIdsEndpointImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/endpoints/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/endpoints/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/cloudids/resource_cloudids_endpoint_test.go b/google/services/cloudids/resource_cloudids_endpoint_test.go index 83b0e36f28b..914eba7eb5e 100644 --- a/google/services/cloudids/resource_cloudids_endpoint_test.go +++ b/google/services/cloudids/resource_cloudids_endpoint_test.go @@ -20,7 +20,7 @@ func TestAccCloudIdsEndpoint_basic(t *testing.T) { context := map[string]interface{}{ "random_suffix": acctest.RandString(t, 10), - "network_name": acctest.BootstrapSharedTestNetwork(t, "cloud-ids-endpoint"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "cloud-ids-endpoint-1"), } acctest.VcrTest(t, resource.TestCase{ @@ -53,18 +53,6 @@ func testCloudIds_basic(context map[string]interface{}) string { data "google_compute_network" "default" { name = "%{network_name}" } -resource "google_compute_global_address" "service_range" { - name = "tf-test-address%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.default.id -} -resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.service_range.name] -} resource "google_cloud_ids_endpoint" "endpoint" { name = "cloud-ids-test-%{random_suffix}" @@ -72,7 +60,6 @@ resource "google_cloud_ids_endpoint" "endpoint" { network = data.google_compute_network.default.id severity = "INFORMATIONAL" threat_exceptions = ["12", "67"] - depends_on = [google_service_networking_connection.private_service_connection] } `, context) } @@ -82,18 +69,6 @@ func testCloudIds_basicUpdate(context map[string]interface{}) string { data "google_compute_network" "default" { name = "%{network_name}" } -resource "google_compute_global_address" "service_range" { - name = "tf-test-address%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.default.id -} -resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.service_range.name] -} resource "google_cloud_ids_endpoint" "endpoint" { name = "cloud-ids-test-%{random_suffix}" @@ -101,7 +76,6 @@ resource "google_cloud_ids_endpoint" "endpoint" { network = data.google_compute_network.default.id severity = "INFORMATIONAL" threat_exceptions = ["33"] - depends_on = [google_service_networking_connection.private_service_connection] } `, context) } diff --git a/google/services/cloudiot/iam_cloudiot_registry.go b/google/services/cloudiot/iam_cloudiot_registry.go deleted file mode 100644 index 84b3210e7b3..00000000000 --- 
a/google/services/cloudiot/iam_cloudiot_registry.go +++ /dev/null @@ -1,245 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot - -import ( - "fmt" - - "github.com/hashicorp/errwrap" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "google.golang.org/api/cloudresourcemanager/v1" - - "github.com/hashicorp/terraform-provider-google/google/tpgiamresource" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -var CloudIotDeviceRegistryIamSchema = map[string]*schema.Schema{ - "project": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ForceNew: true, - }, - "region": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ForceNew: true, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, - }, -} - -type CloudIotDeviceRegistryIamUpdater struct { - project string - region string - name string - d tpgresource.TerraformResourceData - Config *transport_tpg.Config -} - -func CloudIotDeviceRegistryIamUpdaterProducer(d tpgresource.TerraformResourceData, config *transport_tpg.Config) (tpgiamresource.ResourceIamUpdater, error) { - values := make(map[string]string) - - project, _ := tpgresource.GetProject(d, config) - if project != "" { - if err := d.Set("project", project); err != nil { - return nil, fmt.Errorf("Error setting project: %s", err) - } - } - values["project"] = project - region, _ := tpgresource.GetRegion(d, config) - if region != "" { - if err := d.Set("region", region); err != nil { - return nil, fmt.Errorf("Error setting region: %s", err) - } - } - values["region"] = region - if v, ok := d.GetOk("name"); ok { - values["name"] = v.(string) - } - - // We may have gotten either a long or short name, so attempt to parse long name if possible - m, err := tpgresource.GetImportIdQualifiers([]string{"projects/(?P[^/]+)/locations/(?P[^/]+)/registries/(?P[^/]+)", "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", "(?P[^/]+)/(?P[^/]+)", "(?P[^/]+)"}, d, config, d.Get("name").(string)) - if err != nil { - return nil, err - } - - for k, v := range m { - values[k] = v - } - - u := &CloudIotDeviceRegistryIamUpdater{ - project: values["project"], - region: values["region"], - name: values["name"], - d: d, - Config: config, - } - - if err := d.Set("project", u.project); err != nil { - return nil, fmt.Errorf("Error setting project: %s", err) - } - if err := d.Set("region", u.region); err != nil { - return nil, fmt.Errorf("Error setting region: %s", err) - } - if err := d.Set("name", u.GetResourceId()); err != nil { - return nil, fmt.Errorf("Error setting name: %s", err) - } - - return u, nil -} - -func CloudIotDeviceRegistryIdParseFunc(d *schema.ResourceData, config *transport_tpg.Config) error { - values := make(map[string]string) - - project, _ := tpgresource.GetProject(d, config) - if project != "" { - values["project"] = 
project - } - - region, _ := tpgresource.GetRegion(d, config) - if region != "" { - values["region"] = region - } - - m, err := tpgresource.GetImportIdQualifiers([]string{"projects/(?P[^/]+)/locations/(?P[^/]+)/registries/(?P[^/]+)", "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", "(?P[^/]+)/(?P[^/]+)", "(?P[^/]+)"}, d, config, d.Id()) - if err != nil { - return err - } - - for k, v := range m { - values[k] = v - } - - u := &CloudIotDeviceRegistryIamUpdater{ - project: values["project"], - region: values["region"], - name: values["name"], - d: d, - Config: config, - } - if err := d.Set("name", u.GetResourceId()); err != nil { - return fmt.Errorf("Error setting name: %s", err) - } - d.SetId(u.GetResourceId()) - return nil -} - -func (u *CloudIotDeviceRegistryIamUpdater) GetResourceIamPolicy() (*cloudresourcemanager.Policy, error) { - url, err := u.qualifyDeviceRegistryUrl("getIamPolicy") - if err != nil { - return nil, err - } - - project, err := tpgresource.GetProject(u.d, u.Config) - if err != nil { - return nil, err - } - var obj map[string]interface{} - - userAgent, err := tpgresource.GenerateUserAgentString(u.d, u.Config.UserAgent) - if err != nil { - return nil, err - } - - policy, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: u.Config, - Method: "POST", - Project: project, - RawURL: url, - UserAgent: userAgent, - Body: obj, - }) - if err != nil { - return nil, errwrap.Wrapf(fmt.Sprintf("Error retrieving IAM policy for %s: {{err}}", u.DescribeResource()), err) - } - - out := &cloudresourcemanager.Policy{} - err = tpgresource.Convert(policy, out) - if err != nil { - return nil, errwrap.Wrapf("Cannot convert a policy to a resource manager policy: {{err}}", err) - } - - return out, nil -} - -func (u *CloudIotDeviceRegistryIamUpdater) SetResourceIamPolicy(policy *cloudresourcemanager.Policy) error { - json, err := tpgresource.ConvertToMap(policy) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - obj["policy"] = json - - url, err := u.qualifyDeviceRegistryUrl("setIamPolicy") - if err != nil { - return err - } - project, err := tpgresource.GetProject(u.d, u.Config) - if err != nil { - return err - } - - userAgent, err := tpgresource.GenerateUserAgentString(u.d, u.Config.UserAgent) - if err != nil { - return err - } - - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: u.Config, - Method: "POST", - Project: project, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: u.d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return errwrap.Wrapf(fmt.Sprintf("Error setting IAM policy for %s: {{err}}", u.DescribeResource()), err) - } - - return nil -} - -func (u *CloudIotDeviceRegistryIamUpdater) qualifyDeviceRegistryUrl(methodIdentifier string) (string, error) { - urlTemplate := fmt.Sprintf("{{CloudIotBasePath}}%s:%s", fmt.Sprintf("projects/%s/locations/%s/registries/%s", u.project, u.region, u.name), methodIdentifier) - url, err := tpgresource.ReplaceVars(u.d, u.Config, urlTemplate) - if err != nil { - return "", err - } - return url, nil -} - -func (u *CloudIotDeviceRegistryIamUpdater) GetResourceId() string { - return fmt.Sprintf("projects/%s/locations/%s/registries/%s", u.project, u.region, u.name) -} - -func (u *CloudIotDeviceRegistryIamUpdater) GetMutexKey() string { - return fmt.Sprintf("iam-cloudiot-deviceregistry-%s", u.GetResourceId()) -} - -func (u *CloudIotDeviceRegistryIamUpdater) DescribeResource() string { - return fmt.Sprintf("cloudiot deviceregistry %q", u.GetResourceId()) -} diff --git 
a/google/services/cloudiot/iam_cloudiot_registry_generated_test.go b/google/services/cloudiot/iam_cloudiot_registry_generated_test.go deleted file mode 100644 index ce7a173c0b6..00000000000 --- a/google/services/cloudiot/iam_cloudiot_registry_generated_test.go +++ /dev/null @@ -1,227 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot_test - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - - "github.com/hashicorp/terraform-provider-google/google/acctest" - "github.com/hashicorp/terraform-provider-google/google/envvar" -) - -func TestAccCloudIotDeviceRegistryIamBindingGenerated(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDeviceRegistryIamBinding_basicGenerated(context), - }, - { - ResourceName: "google_cloudiot_registry_iam_binding.foo", - ImportStateId: fmt.Sprintf("projects/%s/locations/%s/registries/%s roles/viewer", envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv(), fmt.Sprintf("tf-test-cloudiot-registry%s", context["random_suffix"])), - ImportState: true, - ImportStateVerify: true, - }, - { - // Test Iam Binding update - Config: testAccCloudIotDeviceRegistryIamBinding_updateGenerated(context), - }, - { - ResourceName: "google_cloudiot_registry_iam_binding.foo", - ImportStateId: fmt.Sprintf("projects/%s/locations/%s/registries/%s roles/viewer", envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv(), fmt.Sprintf("tf-test-cloudiot-registry%s", context["random_suffix"])), - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func TestAccCloudIotDeviceRegistryIamMemberGenerated(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Steps: []resource.TestStep{ - { - // Test Iam Member creation (no update for member, no need to test) - Config: testAccCloudIotDeviceRegistryIamMember_basicGenerated(context), - }, - { - ResourceName: "google_cloudiot_registry_iam_member.foo", - ImportStateId: fmt.Sprintf("projects/%s/locations/%s/registries/%s roles/viewer user:admin@hashicorptest.com", envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv(), fmt.Sprintf("tf-test-cloudiot-registry%s", context["random_suffix"])), - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func 
TestAccCloudIotDeviceRegistryIamPolicyGenerated(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDeviceRegistryIamPolicy_basicGenerated(context), - Check: resource.TestCheckResourceAttrSet("data.google_cloudiot_registry_iam_policy.foo", "policy_data"), - }, - { - ResourceName: "google_cloudiot_registry_iam_policy.foo", - ImportStateId: fmt.Sprintf("projects/%s/locations/%s/registries/%s", envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv(), fmt.Sprintf("tf-test-cloudiot-registry%s", context["random_suffix"])), - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIotDeviceRegistryIamPolicy_emptyBinding(context), - }, - { - ResourceName: "google_cloudiot_registry_iam_policy.foo", - ImportStateId: fmt.Sprintf("projects/%s/locations/%s/registries/%s", envvar.GetTestProjectFromEnv(), envvar.GetTestRegionFromEnv(), fmt.Sprintf("tf-test-cloudiot-registry%s", context["random_suffix"])), - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccCloudIotDeviceRegistryIamMember_basicGenerated(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} - -resource "google_cloudiot_registry_iam_member" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - role = "%{role}" - member = "user:admin@hashicorptest.com" -} -`, context) -} - -func testAccCloudIotDeviceRegistryIamPolicy_basicGenerated(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} - -data "google_iam_policy" "foo" { - binding { - role = "%{role}" - members = ["user:admin@hashicorptest.com"] - } -} - -resource "google_cloudiot_registry_iam_policy" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - policy_data = data.google_iam_policy.foo.policy_data -} - -data "google_cloudiot_registry_iam_policy" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - depends_on = [ - google_cloudiot_registry_iam_policy.foo - ] -} -`, context) -} - -func testAccCloudIotDeviceRegistryIamPolicy_emptyBinding(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} - -data "google_iam_policy" "foo" { -} - -resource "google_cloudiot_registry_iam_policy" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - policy_data = data.google_iam_policy.foo.policy_data -} -`, context) -} - -func testAccCloudIotDeviceRegistryIamBinding_basicGenerated(context 
map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} - -resource "google_cloudiot_registry_iam_binding" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - role = "%{role}" - members = ["user:admin@hashicorptest.com"] -} -`, context) -} - -func testAccCloudIotDeviceRegistryIamBinding_updateGenerated(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} - -resource "google_cloudiot_registry_iam_binding" "foo" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - role = "%{role}" - members = ["user:admin@hashicorptest.com", "user:gterraformtest1@gmail.com"] -} -`, context) -} diff --git a/google/services/cloudiot/resource_cloudiot_device.go b/google/services/cloudiot/resource_cloudiot_device.go deleted file mode 100644 index e9f5036689a..00000000000 --- a/google/services/cloudiot/resource_cloudiot_device.go +++ /dev/null @@ -1,961 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot - -import ( - "fmt" - "log" - "reflect" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" - "github.com/hashicorp/terraform-provider-google/google/verify" -) - -func ResourceCloudIotDevice() *schema.Resource { - return &schema.Resource{ - Create: resourceCloudIotDeviceCreate, - Read: resourceCloudIotDeviceRead, - Update: resourceCloudIotDeviceUpdate, - Delete: resourceCloudIotDeviceDelete, - - Importer: &schema.ResourceImporter{ - State: resourceCloudIotDeviceImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - DeprecationMessage: "`google_cloudiot_device` is deprecated in the API. 
This resource will be removed in the next major release of the provider.", - - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `A unique name for the resource.`, - }, - "registry": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `The name of the device registry where this device should be created.`, - }, - "blocked": { - Type: schema.TypeBool, - Optional: true, - Description: `If a device is blocked, connections or requests from this device will fail.`, - }, - "credentials": { - Type: schema.TypeList, - Optional: true, - Description: `The credentials used to authenticate this device.`, - MaxItems: 3, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "public_key": { - Type: schema.TypeList, - Required: true, - Description: `A public key used to verify the signature of JSON Web Tokens (JWTs).`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "format": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidateEnum([]string{"RSA_PEM", "RSA_X509_PEM", "ES256_PEM", "ES256_X509_PEM"}), - Description: `The format of the key. Possible values: ["RSA_PEM", "RSA_X509_PEM", "ES256_PEM", "ES256_X509_PEM"]`, - }, - "key": { - Type: schema.TypeString, - Required: true, - Description: `The key data.`, - }, - }, - }, - }, - "expiration_time": { - Type: schema.TypeString, - Computed: true, - Optional: true, - Description: `The time at which this credential becomes invalid.`, - }, - }, - }, - }, - "gateway_config": { - Type: schema.TypeList, - Optional: true, - Description: `Gateway-related configuration and state.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "gateway_auth_method": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"ASSOCIATION_ONLY", "DEVICE_AUTH_TOKEN_ONLY", "ASSOCIATION_AND_DEVICE_AUTH_TOKEN", ""}), - Description: `Indicates whether the device is a gateway. Possible values: ["ASSOCIATION_ONLY", "DEVICE_AUTH_TOKEN_ONLY", "ASSOCIATION_AND_DEVICE_AUTH_TOKEN"]`, - }, - "gateway_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"GATEWAY", "NON_GATEWAY", ""}), - Description: `Indicates whether the device is a gateway. Default value: "NON_GATEWAY" Possible values: ["GATEWAY", "NON_GATEWAY"]`, - Default: "NON_GATEWAY", - }, - "last_accessed_gateway_id": { - Type: schema.TypeString, - Computed: true, - Description: `The ID of the gateway the device accessed most recently.`, - }, - "last_accessed_gateway_time": { - Type: schema.TypeString, - Computed: true, - Description: `The most recent time at which the device accessed the gateway specified in last_accessed_gateway.`, - }, - }, - }, - }, - "log_level": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"NONE", "ERROR", "INFO", "DEBUG", ""}), - Description: `The logging verbosity for device activity. 
Possible values: ["NONE", "ERROR", "INFO", "DEBUG"]`, - }, - "metadata": { - Type: schema.TypeMap, - Optional: true, - Description: `The metadata key-value pairs assigned to the device.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "config": { - Type: schema.TypeList, - Computed: true, - Description: `The most recent device configuration, which is eventually sent from Cloud IoT Core to the device.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "binary_data": { - Type: schema.TypeString, - Optional: true, - Description: `The device configuration data.`, - }, - "cloud_update_time": { - Type: schema.TypeString, - Computed: true, - Description: `The time at which this configuration version was updated in Cloud IoT Core.`, - }, - "device_ack_time": { - Type: schema.TypeString, - Computed: true, - Description: `The time at which Cloud IoT Core received the acknowledgment from the device, -indicating that the device has received this configuration version.`, - }, - "version": { - Type: schema.TypeString, - Computed: true, - Description: `The version of this update.`, - }, - }, - }, - }, - "last_config_ack_time": { - Type: schema.TypeString, - Computed: true, - Description: `The last time a cloud-to-device config version acknowledgment was received from the device.`, - }, - "last_config_send_time": { - Type: schema.TypeString, - Computed: true, - Description: `The last time a cloud-to-device config version was sent to the device.`, - }, - "last_error_status": { - Type: schema.TypeList, - Computed: true, - Description: `The error message of the most recent error, such as a failure to publish to Cloud Pub/Sub.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "details": { - Type: schema.TypeList, - Optional: true, - Description: `A list of messages that carry the error details.`, - Elem: &schema.Schema{ - Type: schema.TypeMap, - }, - }, - "message": { - Type: schema.TypeString, - Optional: true, - Description: `A developer-facing error message, which should be in English.`, - }, - "number": { - Type: schema.TypeInt, - Optional: true, - Description: `The status code, which should be an enum value of google.rpc.Code.`, - }, - }, - }, - }, - "last_error_time": { - Type: schema.TypeString, - Computed: true, - Description: `The time the most recent error occurred, such as a failure to publish to Cloud Pub/Sub.`, - }, - "last_event_time": { - Type: schema.TypeString, - Computed: true, - Description: `The last time a telemetry event was received.`, - }, - "last_heartbeat_time": { - Type: schema.TypeString, - Computed: true, - Description: `The last time an MQTT PINGREQ was received.`, - }, - "last_state_time": { - Type: schema.TypeString, - Computed: true, - Description: `The last time a state event was received.`, - }, - "num_id": { - Type: schema.TypeString, - Computed: true, - Description: `A server-defined unique numeric ID for the device. 
-This is a more compact way to identify devices, and it is globally unique.`, - }, - "state": { - Type: schema.TypeList, - Computed: true, - Description: `The state most recently received from the device.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "binary_data": { - Type: schema.TypeString, - Optional: true, - Description: `The device state data.`, - }, - "update_time": { - Type: schema.TypeString, - Optional: true, - Description: `The time at which this state version was updated in Cloud IoT Core.`, - }, - }, - }, - }, - }, - UseJSONNumber: true, - } -} - -func resourceCloudIotDeviceCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - idProp, err := expandCloudIotDeviceName(d.Get("name"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(idProp)) && (ok || !reflect.DeepEqual(v, idProp)) { - obj["id"] = idProp - } - credentialsProp, err := expandCloudIotDeviceCredentials(d.Get("credentials"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("credentials"); !tpgresource.IsEmptyValue(reflect.ValueOf(credentialsProp)) && (ok || !reflect.DeepEqual(v, credentialsProp)) { - obj["credentials"] = credentialsProp - } - blockedProp, err := expandCloudIotDeviceBlocked(d.Get("blocked"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("blocked"); !tpgresource.IsEmptyValue(reflect.ValueOf(blockedProp)) && (ok || !reflect.DeepEqual(v, blockedProp)) { - obj["blocked"] = blockedProp - } - logLevelProp, err := expandCloudIotDeviceLogLevel(d.Get("log_level"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("log_level"); !tpgresource.IsEmptyValue(reflect.ValueOf(logLevelProp)) && (ok || !reflect.DeepEqual(v, logLevelProp)) { - obj["logLevel"] = logLevelProp - } - metadataProp, err := expandCloudIotDeviceMetadata(d.Get("metadata"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(metadataProp)) && (ok || !reflect.DeepEqual(v, metadataProp)) { - obj["metadata"] = metadataProp - } - gatewayConfigProp, err := expandCloudIotDeviceGatewayConfig(d.Get("gateway_config"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("gateway_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(gatewayConfigProp)) && (ok || !reflect.DeepEqual(v, gatewayConfigProp)) { - obj["gatewayConfig"] = gatewayConfigProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}{{registry}}/devices") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new Device: %#v", obj) - billingProject := "" - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating Device: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "{{registry}}/devices/{{name}}") - if err != nil { - return fmt.Errorf("Error 
constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating Device %q: %#v", d.Id(), res) - - return resourceCloudIotDeviceRead(d, meta) -} - -func resourceCloudIotDeviceRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}{{registry}}/devices/{{name}}") - if err != nil { - return err - } - - billingProject := "" - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("CloudIotDevice %q", d.Id())) - } - - if err := d.Set("name", flattenCloudIotDeviceName(res["id"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("num_id", flattenCloudIotDeviceNumId(res["numId"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("credentials", flattenCloudIotDeviceCredentials(res["credentials"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_heartbeat_time", flattenCloudIotDeviceLastHeartbeatTime(res["lastHeartbeatTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_event_time", flattenCloudIotDeviceLastEventTime(res["lastEventTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_state_time", flattenCloudIotDeviceLastStateTime(res["lastStateTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_config_ack_time", flattenCloudIotDeviceLastConfigAckTime(res["lastConfigAckTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_config_send_time", flattenCloudIotDeviceLastConfigSendTime(res["lastConfigSendTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("blocked", flattenCloudIotDeviceBlocked(res["blocked"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_error_time", flattenCloudIotDeviceLastErrorTime(res["lastErrorTime"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("last_error_status", flattenCloudIotDeviceLastErrorStatus(res["lastErrorStatus"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("config", flattenCloudIotDeviceConfig(res["config"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("state", flattenCloudIotDeviceState(res["state"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("log_level", flattenCloudIotDeviceLogLevel(res["logLevel"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - if err := d.Set("metadata", flattenCloudIotDeviceMetadata(res["metadata"], d, config)); err != nil { - return fmt.Errorf("Error reading 
Device: %s", err) - } - if err := d.Set("gateway_config", flattenCloudIotDeviceGatewayConfig(res["gatewayConfig"], d, config)); err != nil { - return fmt.Errorf("Error reading Device: %s", err) - } - - return nil -} - -func resourceCloudIotDeviceUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - obj := make(map[string]interface{}) - credentialsProp, err := expandCloudIotDeviceCredentials(d.Get("credentials"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("credentials"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, credentialsProp)) { - obj["credentials"] = credentialsProp - } - blockedProp, err := expandCloudIotDeviceBlocked(d.Get("blocked"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("blocked"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, blockedProp)) { - obj["blocked"] = blockedProp - } - logLevelProp, err := expandCloudIotDeviceLogLevel(d.Get("log_level"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("log_level"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, logLevelProp)) { - obj["logLevel"] = logLevelProp - } - metadataProp, err := expandCloudIotDeviceMetadata(d.Get("metadata"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, metadataProp)) { - obj["metadata"] = metadataProp - } - gatewayConfigProp, err := expandCloudIotDeviceGatewayConfig(d.Get("gateway_config"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("gateway_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, gatewayConfigProp)) { - obj["gatewayConfig"] = gatewayConfigProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}{{registry}}/devices/{{name}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Updating Device %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("credentials") { - updateMask = append(updateMask, "credentials") - } - - if d.HasChange("blocked") { - updateMask = append(updateMask, "blocked") - } - - if d.HasChange("log_level") { - updateMask = append(updateMask, "logLevel") - } - - if d.HasChange("metadata") { - updateMask = append(updateMask, "metadata") - } - - if d.HasChange("gateway_config") { - updateMask = append(updateMask, "gateway_config.gateway_auth_method") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutUpdate), - }) - - if err != nil { - return fmt.Errorf("Error updating Device %q: %s", d.Id(), err) - } else { - log.Printf("[DEBUG] Finished updating Device %q: %#v", d.Id(), res) - } - - return 
resourceCloudIotDeviceRead(d, meta) -} - -func resourceCloudIotDeviceDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}{{registry}}/devices/{{name}}") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting Device %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "Device") - } - - log.Printf("[DEBUG] Finished deleting Device %q: %#v", d.Id(), res) - return nil -} - -func resourceCloudIotDeviceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "(?P.+)/devices/(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "{{registry}}/devices/{{name}}") - if err != nil { - return nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenCloudIotDeviceName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceNumId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceCredentials(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "expiration_time": flattenCloudIotDeviceCredentialsExpirationTime(original["expirationTime"], d, config), - "public_key": flattenCloudIotDeviceCredentialsPublicKey(original["publicKey"], d, config), - }) - } - return transformed -} -func flattenCloudIotDeviceCredentialsExpirationTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceCredentialsPublicKey(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["format"] = - flattenCloudIotDeviceCredentialsPublicKeyFormat(original["format"], d, config) - transformed["key"] = - flattenCloudIotDeviceCredentialsPublicKeyKey(original["key"], d, config) - return []interface{}{transformed} -} -func flattenCloudIotDeviceCredentialsPublicKeyFormat(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceCredentialsPublicKeyKey(v interface{}, d *schema.ResourceData, 
config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastHeartbeatTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastEventTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastStateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastConfigAckTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastConfigSendTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceBlocked(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastErrorTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastErrorStatus(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["number"] = - flattenCloudIotDeviceLastErrorStatusNumber(original["number"], d, config) - transformed["message"] = - flattenCloudIotDeviceLastErrorStatusMessage(original["message"], d, config) - transformed["details"] = - flattenCloudIotDeviceLastErrorStatusDetails(original["details"], d, config) - return []interface{}{transformed} -} -func flattenCloudIotDeviceLastErrorStatusNumber(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudIotDeviceLastErrorStatusMessage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLastErrorStatusDetails(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["version"] = - flattenCloudIotDeviceConfigVersion(original["version"], d, config) - transformed["cloud_update_time"] = - flattenCloudIotDeviceConfigCloudUpdateTime(original["cloudUpdateTime"], d, config) - transformed["device_ack_time"] = - flattenCloudIotDeviceConfigDeviceAckTime(original["deviceAckTime"], d, config) - transformed["binary_data"] = - flattenCloudIotDeviceConfigBinaryData(original["binaryData"], d, config) - return []interface{}{transformed} -} -func flattenCloudIotDeviceConfigVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceConfigCloudUpdateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenCloudIotDeviceConfigDeviceAckTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceConfigBinaryData(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["update_time"] = - flattenCloudIotDeviceStateUpdateTime(original["updateTime"], d, config) - transformed["binary_data"] = - flattenCloudIotDeviceStateBinaryData(original["binaryData"], d, config) - return []interface{}{transformed} -} -func flattenCloudIotDeviceStateUpdateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceStateBinaryData(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceLogLevel(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceMetadata(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceGatewayConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["gateway_type"] = - flattenCloudIotDeviceGatewayConfigGatewayType(original["gatewayType"], d, config) - transformed["gateway_auth_method"] = - flattenCloudIotDeviceGatewayConfigGatewayAuthMethod(original["gatewayAuthMethod"], d, config) - transformed["last_accessed_gateway_id"] = - flattenCloudIotDeviceGatewayConfigLastAccessedGatewayId(original["lastAccessedGatewayId"], d, config) - transformed["last_accessed_gateway_time"] = - flattenCloudIotDeviceGatewayConfigLastAccessedGatewayTime(original["lastAccessedGatewayTime"], d, config) - return []interface{}{transformed} -} -func flattenCloudIotDeviceGatewayConfigGatewayType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceGatewayConfigGatewayAuthMethod(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceGatewayConfigLastAccessedGatewayId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceGatewayConfigLastAccessedGatewayTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandCloudIotDeviceName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceCredentials(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedExpirationTime, err := expandCloudIotDeviceCredentialsExpirationTime(original["expiration_time"], d, config) - if err != nil { - return nil, err - } 
else if val := reflect.ValueOf(transformedExpirationTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["expirationTime"] = transformedExpirationTime - } - - transformedPublicKey, err := expandCloudIotDeviceCredentialsPublicKey(original["public_key"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPublicKey); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["publicKey"] = transformedPublicKey - } - - req = append(req, transformed) - } - return req, nil -} - -func expandCloudIotDeviceCredentialsExpirationTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceCredentialsPublicKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedFormat, err := expandCloudIotDeviceCredentialsPublicKeyFormat(original["format"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFormat); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["format"] = transformedFormat - } - - transformedKey, err := expandCloudIotDeviceCredentialsPublicKeyKey(original["key"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedKey); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["key"] = transformedKey - } - - return transformed, nil -} - -func expandCloudIotDeviceCredentialsPublicKeyFormat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceCredentialsPublicKeyKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceBlocked(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceLogLevel(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceMetadata(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandCloudIotDeviceGatewayConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedGatewayType, err := expandCloudIotDeviceGatewayConfigGatewayType(original["gateway_type"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedGatewayType); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["gatewayType"] = transformedGatewayType - } - - transformedGatewayAuthMethod, err := expandCloudIotDeviceGatewayConfigGatewayAuthMethod(original["gateway_auth_method"], d, config) - if err != nil { - return nil, err - } else if val := 
reflect.ValueOf(transformedGatewayAuthMethod); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["gatewayAuthMethod"] = transformedGatewayAuthMethod - } - - transformedLastAccessedGatewayId, err := expandCloudIotDeviceGatewayConfigLastAccessedGatewayId(original["last_accessed_gateway_id"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLastAccessedGatewayId); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["lastAccessedGatewayId"] = transformedLastAccessedGatewayId - } - - transformedLastAccessedGatewayTime, err := expandCloudIotDeviceGatewayConfigLastAccessedGatewayTime(original["last_accessed_gateway_time"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLastAccessedGatewayTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["lastAccessedGatewayTime"] = transformedLastAccessedGatewayTime - } - - return transformed, nil -} - -func expandCloudIotDeviceGatewayConfigGatewayType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceGatewayConfigGatewayAuthMethod(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceGatewayConfigLastAccessedGatewayId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceGatewayConfigLastAccessedGatewayTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/cloudiot/resource_cloudiot_device_generated_test.go b/google/services/cloudiot/resource_cloudiot_device_generated_test.go deleted file mode 100644 index ebc026c3c18..00000000000 --- a/google/services/cloudiot/resource_cloudiot_device_generated_test.go +++ /dev/null @@ -1,170 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
-// -// ---------------------------------------------------------------------------- - -package cloudiot_test - -import ( - "fmt" - "strings" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" - - "github.com/hashicorp/terraform-provider-google/google/acctest" - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func TestAccCloudIotDevice_cloudiotDeviceBasicExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDevice_cloudiotDeviceBasicExample(context), - }, - { - ResourceName: "google_cloudiot_device.test-device", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"registry"}, - }, - }, - }) -} - -func testAccCloudIotDevice_cloudiotDeviceBasicExample(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "registry" { - name = "tf-test-cloudiot-device-registry%{random_suffix}" -} - -resource "google_cloudiot_device" "test-device" { - name = "tf-test-cloudiot-device%{random_suffix}" - registry = google_cloudiot_registry.registry.id -} -`, context) -} - -func TestAccCloudIotDevice_cloudiotDeviceFullExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDevice_cloudiotDeviceFullExample(context), - }, - { - ResourceName: "google_cloudiot_device.test-device", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"registry"}, - }, - }, - }) -} - -func testAccCloudIotDevice_cloudiotDeviceFullExample(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "registry" { - name = "tf-test-cloudiot-device-registry%{random_suffix}" -} - -resource "google_cloudiot_device" "test-device" { - name = "tf-test-cloudiot-device%{random_suffix}" - registry = google_cloudiot_registry.registry.id - - credentials { - public_key { - format = "RSA_PEM" - key = file("test-fixtures/rsa_public.pem") - } - } - - blocked = false - - log_level = "INFO" - - metadata = { - test_key_1 = "test_value_1" - } - - gateway_config { - gateway_type = "NON_GATEWAY" - } -} -`, context) -} - -func testAccCheckCloudIotDeviceDestroyProducer(t *testing.T) func(s *terraform.State) error { - return func(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_cloudiot_device" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := acctest.GoogleProviderConfig(t) - - url, err := 
tpgresource.ReplaceVarsForTest(config, rs, "{{CloudIotBasePath}}{{registry}}/devices/{{name}}")
-      if err != nil {
-        return err
-      }
-
-      billingProject := ""
-
-      if config.BillingProject != "" {
-        billingProject = config.BillingProject
-      }
-
-      _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
-        Config: config,
-        Method: "GET",
-        Project: billingProject,
-        RawURL: url,
-        UserAgent: config.UserAgent,
-      })
-      if err == nil {
-        return fmt.Errorf("CloudIotDevice still exists at %s", url)
-      }
-    }
-
-    return nil
-  }
-}
diff --git a/google/services/cloudiot/resource_cloudiot_device_registry_id_test.go b/google/services/cloudiot/resource_cloudiot_device_registry_id_test.go
deleted file mode 100644
index 710a65f3930..00000000000
--- a/google/services/cloudiot/resource_cloudiot_device_registry_id_test.go
+++ /dev/null
@@ -1,35 +0,0 @@
-// Copyright (c) HashiCorp, Inc.
-// SPDX-License-Identifier: MPL-2.0
-package cloudiot_test
-
-import (
-  "strings"
-  "testing"
-
-  "github.com/hashicorp/terraform-provider-google/google/services/cloudiot"
-  "github.com/hashicorp/terraform-provider-google/google/verify"
-)
-
-func TestValidateCloudIoTDeviceRegistryId(t *testing.T) {
-  x := []verify.StringValidationTestCase{
-    // No errors
-    {TestName: "basic", Value: "foobar"},
-    {TestName: "with numbers", Value: "foobar123"},
-    {TestName: "short", Value: "foo"},
-    {TestName: "long", Value: "foobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoobarfoo"},
-    {TestName: "has a hyphen", Value: "foo-bar"},
-
-    // With errors
-    {TestName: "empty", Value: "", ExpectError: true},
-    {TestName: "starts with a goog", Value: "googfoobar", ExpectError: true},
-    {TestName: "starts with a number", Value: "1foobar", ExpectError: true},
-    {TestName: "has an slash", Value: "foo/bar", ExpectError: true},
-    {TestName: "has an backslash", Value: "foo\bar", ExpectError: true},
-    {TestName: "too long", Value: strings.Repeat("f", 260), ExpectError: true},
-  }
-
-  es := verify.TestStringValidationCases(x, cloudiot.ValidateCloudIotDeviceRegistryID)
-  if len(es) > 0 {
-    t.Errorf("Failed to validate CloudIoT ID names: %v", es)
-  }
-}
diff --git a/google/services/cloudiot/resource_cloudiot_device_sweeper.go b/google/services/cloudiot/resource_cloudiot_device_sweeper.go
deleted file mode 100644
index 21c95cd94f6..00000000000
--- a/google/services/cloudiot/resource_cloudiot_device_sweeper.go
+++ /dev/null
@@ -1,139 +0,0 @@
-// Copyright (c) HashiCorp, Inc.
-// SPDX-License-Identifier: MPL-2.0
-
-// ----------------------------------------------------------------------------
-//
-// *** AUTO GENERATED CODE *** Type: MMv1 ***
-//
-// ----------------------------------------------------------------------------
-//
-// This file is automatically generated by Magic Modules and manual
-// changes will be clobbered when the file is regenerated.
-//
-// Please read more about how to change this file in
-// .github/CONTRIBUTING.md.
-// -// ---------------------------------------------------------------------------- - -package cloudiot - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("CloudIotDevice", testSweepCloudIotDevice) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepCloudIotDevice(region string) error { - resourceName := "CloudIotDevice" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://cloudiot.googleapis.com/v1/{{registry}}/devices", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["devices"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. 
- nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - if obj["name"] == nil { - log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) - return nil - } - - name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://cloudiot.googleapis.com/v1/{{registry}}/devices/{{name}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/cloudiot/resource_cloudiot_device_update_test.go b/google/services/cloudiot/resource_cloudiot_device_update_test.go deleted file mode 100644 index 94126203393..00000000000 --- a/google/services/cloudiot/resource_cloudiot_device_update_test.go +++ /dev/null @@ -1,106 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 -package cloudiot_test - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-provider-google/google/acctest" -) - -func TestAccCloudIoTDevice_update(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("psregistry-test-%s", acctest.RandString(t, 10)) - deviceName := fmt.Sprintf("psdevice-test-%s", acctest.RandString(t, 10)) - resourceName := fmt.Sprintf("google_cloudiot_device.%s", deviceName) - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTDeviceBasic(deviceName, registryName), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTDeviceExtended(deviceName, registryName), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTDeviceBasic(deviceName, registryName), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccCloudIoTDeviceBasic(deviceName string, registryName string) string { - return fmt.Sprintf(` - -resource "google_cloudiot_registry" "%s" { - name = "%s" -} - -resource "google_cloudiot_device" "%s" { - name = "%s" - registry = google_cloudiot_registry.%s.id - - gateway_config { - gateway_auth_method = "DEVICE_AUTH_TOKEN_ONLY" - gateway_type = "GATEWAY" - } -} - - -`, registryName, registryName, deviceName, deviceName, registryName) -} - -func testAccCloudIoTDeviceExtended(deviceName string, registryName string) string { - return fmt.Sprintf(` - -resource "google_cloudiot_registry" "%s" { - name = "%s" -} - 
-resource "google_cloudiot_device" "%s" { - name = "%s" - registry = google_cloudiot_registry.%s.id - - credentials { - public_key { - format = "RSA_PEM" - key = file("test-fixtures/rsa_public.pem") - } - } - - blocked = false - - log_level = "INFO" - - metadata = { - test_key_1 = "test_value_1" - } - - gateway_config { - gateway_auth_method = "ASSOCIATION_AND_DEVICE_AUTH_TOKEN" - gateway_type = "GATEWAY" - } -} -`, registryName, registryName, deviceName, deviceName, registryName) -} diff --git a/google/services/cloudiot/resource_cloudiot_registry.go b/google/services/cloudiot/resource_cloudiot_registry.go deleted file mode 100644 index 1e1af24b365..00000000000 --- a/google/services/cloudiot/resource_cloudiot_registry.go +++ /dev/null @@ -1,882 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot - -import ( - "fmt" - "log" - "reflect" - "regexp" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" - "github.com/hashicorp/terraform-provider-google/google/verify" -) - -func expandCloudIotDeviceRegistryHTTPConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedHTTPEnabledState, err := expandCloudIotDeviceRegistryHTTPEnabledState(original["http_enabled_state"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedHTTPEnabledState); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["httpEnabledState"] = transformedHTTPEnabledState - } - - return transformed, nil -} - -func expandCloudIotDeviceRegistryHTTPEnabledState(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryMqttConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedMqttEnabledState, err := expandCloudIotDeviceRegistryMqttEnabledState(original["mqtt_enabled_state"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedMqttEnabledState); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["mqttEnabledState"] = transformedMqttEnabledState - } - - return transformed, nil -} - -func expandCloudIotDeviceRegistryMqttEnabledState(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryStateNotificationConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - original := v.(map[string]interface{}) - transformed := 
make(map[string]interface{}) - - transformedPubsubTopicName, err := expandCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(original["pubsub_topic_name"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPubsubTopicName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["pubsubTopicName"] = transformedPubsubTopicName - } - - return transformed, nil -} - -func expandCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryCredentials(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPublicKeyCertificate, err := expandCloudIotDeviceRegistryCredentialsPublicKeyCertificate(original["public_key_certificate"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPublicKeyCertificate); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["publicKeyCertificate"] = transformedPublicKeyCertificate - } - - req = append(req, transformed) - } - - return req, nil -} - -func expandCloudIotDeviceRegistryCredentialsPublicKeyCertificate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedFormat, err := expandCloudIotDeviceRegistryPublicKeyCertificateFormat(original["format"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFormat); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["format"] = transformedFormat - } - - transformedCertificate, err := expandCloudIotDeviceRegistryPublicKeyCertificateCertificate(original["certificate"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedCertificate); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["certificate"] = transformedCertificate - } - - return transformed, nil -} - -func expandCloudIotDeviceRegistryPublicKeyCertificateFormat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryPublicKeyCertificateCertificate(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func flattenCloudIotDeviceRegistryCredentials(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - log.Printf("[DEBUG] Flattening device resitry credentials: %q", d.Id()) - if v == nil { - log.Printf("[DEBUG] The credentials array is nil: %q", d.Id()) - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - log.Printf("[DEBUG] Original credential: %+v", original) - if len(original) < 1 { - log.Printf("[DEBUG] Excluding empty credential that the API returned. 
%q", d.Id()) - continue - } - log.Printf("[DEBUG] Credentials array before appending a new credential: %+v", transformed) - transformed = append(transformed, map[string]interface{}{ - "public_key_certificate": flattenCloudIotDeviceRegistryCredentialsPublicKeyCertificate(original["publicKeyCertificate"], d, config), - }) - log.Printf("[DEBUG] Credentials array after appending a new credential: %+v", transformed) - } - return transformed -} - -func flattenCloudIotDeviceRegistryCredentialsPublicKeyCertificate(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - log.Printf("[DEBUG] Flattening device resitry credentials public key certificate: %q", d.Id()) - if v == nil { - log.Printf("[DEBUG] The public key certificate is nil: %q", d.Id()) - return v - } - - original := v.(map[string]interface{}) - log.Printf("[DEBUG] Original public key certificate: %+v", original) - transformed := make(map[string]interface{}) - - transformedPublicKeyCertificateFormat := flattenCloudIotDeviceRegistryPublicKeyCertificateFormat(original["format"], d, config) - transformed["format"] = transformedPublicKeyCertificateFormat - - transformedPublicKeyCertificateCertificate := flattenCloudIotDeviceRegistryPublicKeyCertificateCertificate(original["certificate"], d, config) - transformed["certificate"] = transformedPublicKeyCertificateCertificate - - log.Printf("[DEBUG] Transformed public key certificate: %+v", transformed) - - return transformed -} - -func flattenCloudIotDeviceRegistryPublicKeyCertificateFormat(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryPublicKeyCertificateCertificate(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryHTTPConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedHTTPEnabledState := flattenCloudIotDeviceRegistryHTTPConfigHTTPEnabledState(original["httpEnabledState"], d, config) - transformed["http_enabled_state"] = transformedHTTPEnabledState - - return transformed -} - -func flattenCloudIotDeviceRegistryHTTPConfigHTTPEnabledState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryMqttConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedMqttEnabledState := flattenCloudIotDeviceRegistryMqttConfigMqttEnabledState(original["mqttEnabledState"], d, config) - transformed["mqtt_enabled_state"] = transformedMqttEnabledState - - return transformed -} - -func flattenCloudIotDeviceRegistryMqttConfigMqttEnabledState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryStateNotificationConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - log.Printf("[DEBUG] Flattening state notification config: %+v", v) - if v == nil { - return v - } - - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPubsubTopicName := flattenCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(original["pubsubTopicName"], d, config) - if 
val := reflect.ValueOf(transformedPubsubTopicName); val.IsValid() && !tpgresource.IsEmptyValue(val) {
-    log.Printf("[DEBUG] pubsub topic name is not null: %v", d.Get("pubsub_topic_name"))
-    transformed["pubsub_topic_name"] = transformedPubsubTopicName
-  }
-
-  return transformed
-}
-
-func flattenCloudIotDeviceRegistryStateNotificationConfigPubsubTopicName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
-  return v
-}
-
-func ValidateCloudIotDeviceRegistryID(v interface{}, k string) (warnings []string, errors []error) {
-  value := v.(string)
-  if strings.HasPrefix(value, "goog") {
-    errors = append(errors, fmt.Errorf(
-      "%q (%q) can not start with \"goog\"", k, value))
-  }
-  if !regexp.MustCompile(verify.CloudIoTIdRegex).MatchString(value) {
-    errors = append(errors, fmt.Errorf(
-      "%q (%q) doesn't match regexp %q", k, value, verify.CloudIoTIdRegex))
-  }
-  return
-}
-
-func validateCloudIotDeviceRegistrySubfolderMatch(v interface{}, k string) (warnings []string, errors []error) {
-  value := v.(string)
-  if strings.HasPrefix(value, "/") {
-    errors = append(errors, fmt.Errorf(
-      "%q (%q) can not start with '/'", k, value))
-  }
-  return
-}
-
-func ResourceCloudIotDeviceRegistry() *schema.Resource {
-  return &schema.Resource{
-    Create: resourceCloudIotDeviceRegistryCreate,
-    Read: resourceCloudIotDeviceRegistryRead,
-    Update: resourceCloudIotDeviceRegistryUpdate,
-    Delete: resourceCloudIotDeviceRegistryDelete,
-
-    Importer: &schema.ResourceImporter{
-      State: resourceCloudIotDeviceRegistryImport,
-    },
-
-    Timeouts: &schema.ResourceTimeout{
-      Create: schema.DefaultTimeout(20 * time.Minute),
-      Update: schema.DefaultTimeout(20 * time.Minute),
-      Delete: schema.DefaultTimeout(20 * time.Minute),
-    },
-
-    DeprecationMessage: "`google_cloudiot_registry` is deprecated in the API. This resource will be removed in the next major release of the provider.",
-
-    Schema: map[string]*schema.Schema{
-      "name": {
-        Type: schema.TypeString,
-        Required: true,
-        ForceNew: true,
-        ValidateFunc: ValidateCloudIotDeviceRegistryID,
-        Description: `A unique name for the resource, required by device registry.`,
-      },
-      "event_notification_configs": {
-        Type: schema.TypeList,
-        Computed: true,
-        Optional: true,
-        Description: `List of configurations for event notifications, such as PubSub topics
-to publish device events to.`,
-        MaxItems: 10,
-        Elem: &schema.Resource{
-          Schema: map[string]*schema.Schema{
-            "pubsub_topic_name": {
-              Type: schema.TypeString,
-              Required: true,
-              DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName,
-              Description: `PubSub topic name to publish device events.`,
-            },
-            "subfolder_matches": {
-              Type: schema.TypeString,
-              Optional: true,
-              ValidateFunc: validateCloudIotDeviceRegistrySubfolderMatch,
-              Description: `If the subfolder name matches this string exactly, this
-configuration will be used. The string must not include the
-leading '/' character. If empty, all strings are matched. Empty
-value can only be used for the last 'event_notification_configs'
-item.`,
-            },
-          },
-        },
-      },
-      "log_level": {
-        Type: schema.TypeString,
-        Optional: true,
-        ValidateFunc: verify.ValidateEnum([]string{"NONE", "ERROR", "INFO", "DEBUG", ""}),
-        DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("NONE"),
-        Description: `The default logging verbosity for activity from devices in this
-registry. Specifies which events should be written to logs. For
-example, if the LogLevel is ERROR, only events that terminate in
-errors will be logged.
LogLevel is inclusive; enabling INFO logging -will also enable ERROR logging. Default value: "NONE" Possible values: ["NONE", "ERROR", "INFO", "DEBUG"]`, - Default: "NONE", - }, - "region": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ForceNew: true, - Description: `The region in which the created registry should reside. -If it is not provided, the provider region is used.`, - }, - "state_notification_config": { - Type: schema.TypeMap, - Description: `A PubSub topic to publish device state updates.`, - Optional: true, - }, - "mqtt_config": { - Type: schema.TypeMap, - Description: `Activate or deactivate MQTT.`, - Computed: true, - Optional: true, - }, - "http_config": { - Type: schema.TypeMap, - Description: `Activate or deactivate HTTP.`, - Computed: true, - Optional: true, - }, - "credentials": { - Type: schema.TypeList, - Description: `List of public key certificates to authenticate devices.`, - Optional: true, - MaxItems: 10, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "public_key_certificate": { - Type: schema.TypeMap, - Description: `A public key certificate format and data.`, - Required: true, - }, - }, - }, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceCloudIotDeviceRegistryCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - idProp, err := expandCloudIotDeviceRegistryName(d.Get("name"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(idProp)) && (ok || !reflect.DeepEqual(v, idProp)) { - obj["id"] = idProp - } - eventNotificationConfigsProp, err := expandCloudIotDeviceRegistryEventNotificationConfigs(d.Get("event_notification_configs"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("event_notification_configs"); !tpgresource.IsEmptyValue(reflect.ValueOf(eventNotificationConfigsProp)) && (ok || !reflect.DeepEqual(v, eventNotificationConfigsProp)) { - obj["eventNotificationConfigs"] = eventNotificationConfigsProp - } - logLevelProp, err := expandCloudIotDeviceRegistryLogLevel(d.Get("log_level"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("log_level"); !tpgresource.IsEmptyValue(reflect.ValueOf(logLevelProp)) && (ok || !reflect.DeepEqual(v, logLevelProp)) { - obj["logLevel"] = logLevelProp - } - - obj, err = resourceCloudIotDeviceRegistryEncoder(d, meta, obj) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}projects/{{project}}/locations/{{region}}/registries") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new DeviceRegistry: %#v", obj) - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for DeviceRegistry: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: 
d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating DeviceRegistry: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{region}}/registries/{{name}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating DeviceRegistry %q: %#v", d.Id(), res) - - return resourceCloudIotDeviceRegistryRead(d, meta) -} - -func resourceCloudIotDeviceRegistryRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}projects/{{project}}/locations/{{region}}/registries/{{name}}") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for DeviceRegistry: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("CloudIotDeviceRegistry %q", d.Id())) - } - - res, err = resourceCloudIotDeviceRegistryDecoder(d, meta, res) - if err != nil { - return err - } - - if res == nil { - // Decoding the object has resulted in it being gone. It may be marked deleted - log.Printf("[DEBUG] Removing CloudIotDeviceRegistry because it no longer exists.") - d.SetId("") - return nil - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - - region, err := tpgresource.GetRegion(d, config) - if err != nil { - return err - } - if err := d.Set("region", region); err != nil { - return fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - - if err := d.Set("name", flattenCloudIotDeviceRegistryName(res["id"], d, config)); err != nil { - return fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - if err := d.Set("event_notification_configs", flattenCloudIotDeviceRegistryEventNotificationConfigs(res["eventNotificationConfigs"], d, config)); err != nil { - return fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - if err := d.Set("log_level", flattenCloudIotDeviceRegistryLogLevel(res["logLevel"], d, config)); err != nil { - return fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - - return nil -} - -func resourceCloudIotDeviceRegistryUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for DeviceRegistry: %s", err) - } - billingProject = project - - obj := make(map[string]interface{}) - eventNotificationConfigsProp, err := expandCloudIotDeviceRegistryEventNotificationConfigs(d.Get("event_notification_configs"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("event_notification_configs"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, eventNotificationConfigsProp)) { - obj["eventNotificationConfigs"] = eventNotificationConfigsProp - } - logLevelProp, err := expandCloudIotDeviceRegistryLogLevel(d.Get("log_level"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("log_level"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, logLevelProp)) { - obj["logLevel"] = logLevelProp - } - - obj, err = resourceCloudIotDeviceRegistryEncoder(d, meta, obj) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}projects/{{project}}/locations/{{region}}/registries/{{name}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Updating DeviceRegistry %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("event_notification_configs") { - updateMask = append(updateMask, "eventNotificationConfigs") - } - - if d.HasChange("log_level") { - updateMask = append(updateMask, "logLevel") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - log.Printf("[DEBUG] updateMask before adding extra schema entries %q: %v", d.Id(), updateMask) - - log.Printf("[DEBUG] Pre-update on state notification config: %q", d.Id()) - if d.HasChange("state_notification_config") { - log.Printf("[DEBUG] %q stateNotificationConfig.pubsubTopicName has a change. Adding it to the update mask", d.Id()) - updateMask = append(updateMask, "stateNotificationConfig.pubsubTopicName") - } - - log.Printf("[DEBUG] Pre-update on MQTT config: %q", d.Id()) - if d.HasChange("mqtt_config") { - log.Printf("[DEBUG] %q mqttConfig.mqttEnabledState has a change. Adding it to the update mask", d.Id()) - updateMask = append(updateMask, "mqttConfig.mqttEnabledState") - } - - log.Printf("[DEBUG] Pre-update on HTTP config: %q", d.Id()) - if d.HasChange("http_config") { - log.Printf("[DEBUG] %q httpConfig.httpEnabledState has a change. Adding it to the update mask", d.Id()) - updateMask = append(updateMask, "httpConfig.httpEnabledState") - } - - log.Printf("[DEBUG] Pre-update on credentials: %q", d.Id()) - if d.HasChange("credentials") { - log.Printf("[DEBUG] %q credentials has a change. 
Adding it to the update mask", d.Id())
-    updateMask = append(updateMask, "credentials")
-  }
-
-  log.Printf("[DEBUG] updateMask after adding extra schema entries %q: %v", d.Id(), updateMask)
-
-  // Refreshing updateMask after adding extra schema entries
-  url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")})
-  if err != nil {
-    return err
-  }
-
-  log.Printf("[DEBUG] Update URL %q: %v", d.Id(), url)
-
-  // err == nil indicates that the billing_project value was found
-  if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
-    billingProject = bp
-  }
-
-  res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
-    Config: config,
-    Method: "PATCH",
-    Project: billingProject,
-    RawURL: url,
-    UserAgent: userAgent,
-    Body: obj,
-    Timeout: d.Timeout(schema.TimeoutUpdate),
-  })
-
-  if err != nil {
-    return fmt.Errorf("Error updating DeviceRegistry %q: %s", d.Id(), err)
-  } else {
-    log.Printf("[DEBUG] Finished updating DeviceRegistry %q: %#v", d.Id(), res)
-  }
-
-  return resourceCloudIotDeviceRegistryRead(d, meta)
-}
-
-func resourceCloudIotDeviceRegistryDelete(d *schema.ResourceData, meta interface{}) error {
-  config := meta.(*transport_tpg.Config)
-  userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent)
-  if err != nil {
-    return err
-  }
-
-  billingProject := ""
-
-  project, err := tpgresource.GetProject(d, config)
-  if err != nil {
-    return fmt.Errorf("Error fetching project for DeviceRegistry: %s", err)
-  }
-  billingProject = project
-
-  url, err := tpgresource.ReplaceVars(d, config, "{{CloudIotBasePath}}projects/{{project}}/locations/{{region}}/registries/{{name}}")
-  if err != nil {
-    return err
-  }
-
-  var obj map[string]interface{}
-  log.Printf("[DEBUG] Deleting DeviceRegistry %q", d.Id())
-
-  // err == nil indicates that the billing_project value was found
-  if bp, err := tpgresource.GetBillingProject(d, config); err == nil {
-    billingProject = bp
-  }
-
-  res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{
-    Config: config,
-    Method: "DELETE",
-    Project: billingProject,
-    RawURL: url,
-    UserAgent: userAgent,
-    Body: obj,
-    Timeout: d.Timeout(schema.TimeoutDelete),
-  })
-  if err != nil {
-    return transport_tpg.HandleNotFoundError(err, d, "DeviceRegistry")
-  }
-
-  log.Printf("[DEBUG] Finished deleting DeviceRegistry %q: %#v", d.Id(), res)
-  return nil
-}
-
-func resourceCloudIotDeviceRegistryImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
-  config := meta.(*transport_tpg.Config)
-  if err := tpgresource.ParseImportId([]string{
-    "(?P<project>[^/]+)/locations/(?P<region>[^/]+)/registries/(?P<name>[^/]+)",
-    "(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<name>[^/]+)",
-    "(?P<region>[^/]+)/(?P<name>[^/]+)",
-    "(?P<name>[^/]+)",
-  }, d, config); err != nil {
-    return nil, err
-  }
-
-  // Replace import id for the resource id
-  id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{region}}/registries/{{name}}")
-  if err != nil {
-    return nil, fmt.Errorf("Error constructing id: %s", err)
-  }
-  d.SetId(id)
-
-  return []*schema.ResourceData{d}, nil
-}
-
-func flattenCloudIotDeviceRegistryName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
-  return v
-}
-
-func flattenCloudIotDeviceRegistryEventNotificationConfigs(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} {
-  if v == nil {
-    return v
-  }
-  l := v.([]interface{})
-  transformed := make([]interface{}, 0, len(l))
-  for _, raw := range l {
-    original :=
raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "subfolder_matches": flattenCloudIotDeviceRegistryEventNotificationConfigsSubfolderMatches(original["subfolderMatches"], d, config), - "pubsub_topic_name": flattenCloudIotDeviceRegistryEventNotificationConfigsPubsubTopicName(original["pubsubTopicName"], d, config), - }) - } - return transformed -} -func flattenCloudIotDeviceRegistryEventNotificationConfigsSubfolderMatches(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryEventNotificationConfigsPubsubTopicName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudIotDeviceRegistryLogLevel(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandCloudIotDeviceRegistryName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryEventNotificationConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedSubfolderMatches, err := expandCloudIotDeviceRegistryEventNotificationConfigsSubfolderMatches(original["subfolder_matches"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedSubfolderMatches); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["subfolderMatches"] = transformedSubfolderMatches - } - - transformedPubsubTopicName, err := expandCloudIotDeviceRegistryEventNotificationConfigsPubsubTopicName(original["pubsub_topic_name"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPubsubTopicName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["pubsubTopicName"] = transformedPubsubTopicName - } - - req = append(req, transformed) - } - return req, nil -} - -func expandCloudIotDeviceRegistryEventNotificationConfigsSubfolderMatches(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryEventNotificationConfigsPubsubTopicName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudIotDeviceRegistryLogLevel(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func resourceCloudIotDeviceRegistryEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { - config := meta.(*transport_tpg.Config) - - log.Printf("[DEBUG] Resource data before encoding extra schema entries %q: %#v", d.Id(), obj) - - log.Printf("[DEBUG] Encoding state notification config: %q", d.Id()) - stateNotificationConfigProp, err := expandCloudIotDeviceRegistryStateNotificationConfig(d.Get("state_notification_config"), d, config) - if err != nil { - return nil, err - } else if v, ok := d.GetOkExists("state_notification_config"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(stateNotificationConfigProp)) && (ok || !reflect.DeepEqual(v, stateNotificationConfigProp)) { - log.Printf("[DEBUG] Encoding %q. Setting stateNotificationConfig: %#v", d.Id(), stateNotificationConfigProp) - obj["stateNotificationConfig"] = stateNotificationConfigProp - } - - log.Printf("[DEBUG] Encoding HTTP config: %q", d.Id()) - httpConfigProp, err := expandCloudIotDeviceRegistryHTTPConfig(d.Get("http_config"), d, config) - if err != nil { - return nil, err - } else if v, ok := d.GetOkExists("http_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(httpConfigProp)) && (ok || !reflect.DeepEqual(v, httpConfigProp)) { - log.Printf("[DEBUG] Encoding %q. Setting httpConfig: %#v", d.Id(), httpConfigProp) - obj["httpConfig"] = httpConfigProp - } - - log.Printf("[DEBUG] Encoding MQTT config: %q", d.Id()) - mqttConfigProp, err := expandCloudIotDeviceRegistryMqttConfig(d.Get("mqtt_config"), d, config) - if err != nil { - return nil, err - } else if v, ok := d.GetOkExists("mqtt_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(mqttConfigProp)) && (ok || !reflect.DeepEqual(v, mqttConfigProp)) { - log.Printf("[DEBUG] Encoding %q. Setting mqttConfig: %#v", d.Id(), mqttConfigProp) - obj["mqttConfig"] = mqttConfigProp - } - - log.Printf("[DEBUG] Encoding credentials: %q", d.Id()) - credentialsProp, err := expandCloudIotDeviceRegistryCredentials(d.Get("credentials"), d, config) - if err != nil { - return nil, err - } else if v, ok := d.GetOkExists("credentials"); !tpgresource.IsEmptyValue(reflect.ValueOf(credentialsProp)) && (ok || !reflect.DeepEqual(v, credentialsProp)) { - log.Printf("[DEBUG] Encoding %q. Setting credentials: %#v", d.Id(), credentialsProp) - obj["credentials"] = credentialsProp - } - - log.Printf("[DEBUG] Resource data after encoding extra schema entries %q: %#v", d.Id(), obj) - - return obj, nil -} - -func resourceCloudIotDeviceRegistryDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { - config := meta.(*transport_tpg.Config) - - log.Printf("[DEBUG] Decoding state notification config: %q", d.Id()) - log.Printf("[DEBUG] State notification config before decoding: %v", d.Get("state_notification_config")) - if err := d.Set("state_notification_config", flattenCloudIotDeviceRegistryStateNotificationConfig(res["stateNotificationConfig"], d, config)); err != nil { - return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - log.Printf("[DEBUG] State notification config after decoding: %v", d.Get("state_notification_config")) - - log.Printf("[DEBUG] Decoding HTTP config: %q", d.Id()) - log.Printf("[DEBUG] HTTP config before decoding: %v", d.Get("http_config")) - if err := d.Set("http_config", flattenCloudIotDeviceRegistryHTTPConfig(res["httpConfig"], d, config)); err != nil { - return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - log.Printf("[DEBUG] HTTP config after decoding: %v", d.Get("http_config")) - - log.Printf("[DEBUG] Decoding MQTT config: %q", d.Id()) - log.Printf("[DEBUG] MQTT config before decoding: %v", d.Get("mqtt_config")) - if err := d.Set("mqtt_config", flattenCloudIotDeviceRegistryMqttConfig(res["mqttConfig"], d, config)); err != nil { - return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - log.Printf("[DEBUG] MQTT config after decoding: %v", d.Get("mqtt_config")) - - log.Printf("[DEBUG] Decoding credentials: %q", d.Id()) - log.Printf("[DEBUG] credentials before decoding: %v", d.Get("credentials")) - if err := 
d.Set("credentials", flattenCloudIotDeviceRegistryCredentials(res["credentials"], d, config)); err != nil { - return nil, fmt.Errorf("Error reading DeviceRegistry: %s", err) - } - log.Printf("[DEBUG] credentials after decoding: %v", d.Get("credentials")) - - return res, nil -} diff --git a/google/services/cloudiot/resource_cloudiot_registry_generated_test.go b/google/services/cloudiot/resource_cloudiot_registry_generated_test.go deleted file mode 100644 index 2ae7a530a68..00000000000 --- a/google/services/cloudiot/resource_cloudiot_registry_generated_test.go +++ /dev/null @@ -1,229 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot_test - -import ( - "fmt" - "strings" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" - - "github.com/hashicorp/terraform-provider-google/google/acctest" - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func TestAccCloudIotDeviceRegistry_cloudiotDeviceRegistryBasicExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceRegistryDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDeviceRegistry_cloudiotDeviceRegistryBasicExample(context), - }, - { - ResourceName: "google_cloudiot_registry.test-registry", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, - }, - }, - }) -} - -func testAccCloudIotDeviceRegistry_cloudiotDeviceRegistryBasicExample(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" -} -`, context) -} - -func TestAccCloudIotDeviceRegistry_cloudiotDeviceRegistrySingleEventNotificationConfigsExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceRegistryDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDeviceRegistry_cloudiotDeviceRegistrySingleEventNotificationConfigsExample(context), - }, - { - ResourceName: "google_cloudiot_registry.test-registry", - ImportState: true, - ImportStateVerify: true, - 
ImportStateVerifyIgnore: []string{"region"}, - }, - }, - }) -} - -func testAccCloudIotDeviceRegistry_cloudiotDeviceRegistrySingleEventNotificationConfigsExample(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_pubsub_topic" "default-telemetry" { - name = "tf-test-default-telemetry%{random_suffix}" -} - -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - subfolder_matches = "" - } - -} -`, context) -} - -func TestAccCloudIotDeviceRegistry_cloudiotDeviceRegistryFullExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "region": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceRegistryDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIotDeviceRegistry_cloudiotDeviceRegistryFullExample(context), - }, - { - ResourceName: "google_cloudiot_registry.test-registry", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, - }, - }, - }) -} - -func testAccCloudIotDeviceRegistry_cloudiotDeviceRegistryFullExample(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_pubsub_topic" "default-devicestatus" { - name = "tf-test-default-devicestatus%{random_suffix}" -} - -resource "google_pubsub_topic" "default-telemetry" { - name = "tf-test-default-telemetry%{random_suffix}" -} - -resource "google_pubsub_topic" "additional-telemetry" { - name = "tf-test-additional-telemetry%{random_suffix}" -} - -resource "google_cloudiot_registry" "test-registry" { - name = "tf-test-cloudiot-registry%{random_suffix}" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.additional-telemetry.id - subfolder_matches = "test/path%{random_suffix}" - } - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - subfolder_matches = "" - } - - state_notification_config = { - pubsub_topic_name = google_pubsub_topic.default-devicestatus.id - } - - mqtt_config = { - mqtt_enabled_state = "MQTT_ENABLED" - } - - http_config = { - http_enabled_state = "HTTP_ENABLED" - } - - log_level = "INFO" - - credentials { - public_key_certificate = { - format = "X509_CERTIFICATE_PEM" - certificate = file("test-fixtures/rsa_cert.pem") - } - } -} -`, context) -} - -func testAccCheckCloudIotDeviceRegistryDestroyProducer(t *testing.T) func(s *terraform.State) error { - return func(s *terraform.State) error { - for name, rs := range s.RootModule().Resources { - if rs.Type != "google_cloudiot_registry" { - continue - } - if strings.HasPrefix(name, "data.") { - continue - } - - config := acctest.GoogleProviderConfig(t) - - url, err := tpgresource.ReplaceVarsForTest(config, rs, "{{CloudIotBasePath}}projects/{{project}}/locations/{{region}}/registries/{{name}}") - if err != nil { - return err - } - - billingProject := "" - - if config.BillingProject != "" { - billingProject = config.BillingProject - } - - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: config.UserAgent, - }) - if err == nil { - return 
fmt.Errorf("CloudIotDeviceRegistry still exists at %s", url) - } - } - - return nil - } -} diff --git a/google/services/cloudiot/resource_cloudiot_registry_sweeper.go b/google/services/cloudiot/resource_cloudiot_registry_sweeper.go deleted file mode 100644 index 161314e0e6d..00000000000 --- a/google/services/cloudiot/resource_cloudiot_registry_sweeper.go +++ /dev/null @@ -1,139 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package cloudiot - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("CloudIotDeviceRegistry", testSweepCloudIotDeviceRegistry) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepCloudIotDeviceRegistry(region string) error { - resourceName := "CloudIotDeviceRegistry" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://cloudiot.googleapis.com/v1/projects/{{project}}/locations/{{region}}/registries", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["deviceRegistries"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. 
- nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - if obj["name"] == nil { - log.Printf("[INFO][SWEEPER_LOG] %s resource name was nil", resourceName) - return nil - } - - name := tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://cloudiot.googleapis.com/v1/projects/{{project}}/locations/{{region}}/registries/{{name}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/cloudiot/resource_cloudiot_registry_update_test.go b/google/services/cloudiot/resource_cloudiot_registry_update_test.go deleted file mode 100644 index baf391318e5..00000000000 --- a/google/services/cloudiot/resource_cloudiot_registry_update_test.go +++ /dev/null @@ -1,112 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 -package cloudiot_test - -import ( - "fmt" - "testing" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-provider-google/google/acctest" -) - -func TestAccCloudIoTRegistry_update(t *testing.T) { - t.Parallel() - - registryName := fmt.Sprintf("psregistry-test-%s", acctest.RandString(t, 10)) - resourceName := fmt.Sprintf("google_cloudiot_registry.%s", registryName) - deviceStatus := fmt.Sprintf("psregistry-test-devicestatus-%s", acctest.RandString(t, 10)) - defaultTelemetry := fmt.Sprintf("psregistry-test-telemetry-%s", acctest.RandString(t, 10)) - additionalTelemetry := fmt.Sprintf("psregistry-additional-test-telemetry-%s", acctest.RandString(t, 10)) - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckCloudIotDeviceRegistryDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccCloudIoTRegistryBasic(registryName), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTRegistryExtended(registryName, deviceStatus, defaultTelemetry, additionalTelemetry), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - { - Config: testAccCloudIoTRegistryBasic(registryName), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - }, - }, - }) -} - -func testAccCloudIoTRegistryBasic(registryName string) string { - return fmt.Sprintf(` - -resource "google_cloudiot_registry" "%s" { - name = "%s" -} -`, registryName, registryName) -} - -func testAccCloudIoTRegistryExtended(registryName string, deviceStatus string, defaultTelemetry string, additionalTelemetry string) string { - 
return fmt.Sprintf(` - -resource "google_pubsub_topic" "default-devicestatus" { - name = "psregistry-test-devicestatus-%s" -} - -resource "google_pubsub_topic" "default-telemetry" { - name = "psregistry-test-telemetry-%s" -} - -resource "google_pubsub_topic" "additional-telemetry" { - name = "psregistry-additional-test-telemetry-%s" -} - -resource "google_cloudiot_registry" "%s" { - name = "%s" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.additional-telemetry.id - subfolder_matches = "test/directory" - } - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - subfolder_matches = "" - } - - state_notification_config = { - pubsub_topic_name = google_pubsub_topic.default-devicestatus.id - } - - mqtt_config = { - mqtt_enabled_state = "MQTT_DISABLED" - } - - http_config = { - http_enabled_state = "HTTP_DISABLED" - } - - credentials { - public_key_certificate = { - format = "X509_CERTIFICATE_PEM" - certificate = file("test-fixtures/rsa_cert.pem") - } - } -} -`, deviceStatus, defaultTelemetry, additionalTelemetry, registryName, registryName) -} diff --git a/google/services/cloudiot/test-fixtures/rsa_cert.pem b/google/services/cloudiot/test-fixtures/rsa_cert.pem deleted file mode 100644 index d8a834633c9..00000000000 --- a/google/services/cloudiot/test-fixtures/rsa_cert.pem +++ /dev/null @@ -1,17 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICoDCCAYgCCQDzZ6R7RYs0sTANBgkqhkiG9w0BAQsFADARMQ8wDQYDVQQDDAZ1 -bnVzZWQwIBcNMTgwMTIwMTA0OTIzWhgPNDc1NTEyMTgxMDQ5MjNaMBExDzANBgNV -BAMMBnVudXNlZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMXX/5jI -tvxpst1mFVKVXfyu5S5AOQF+i/ny6Ef+h8py8y42XfsE2AAPSTE3JCIgWemw7NQ/ -xnTQ3f6b7/6+ZsdM4/hoiedwYV8X3LVPB9NRnKe82OHUhzo1psVMJVvHtE3GsD/V -i40ki/L4Xs64E2GJqQfrkgeNfIyCeKev64fR5aMazqOw1cNrVe34mY3L1hgXpn7e -SnO0oqnV86pTh+jTT8EKgo9AI7/QuJbPWpJhnj1/Fm8i3DdCdpQqloX9Fc4f6whA -XlZ2tkma0PsBraxMua5GPglJ7m3RabQIoyAW+4hEYAcu7U0wIhCK+C8WTNgEYZaK -zvp8vK6vOgBIjE0CAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAvVXus7dLikEAM6+I -6xeN7aHEMJRR0h2rigLiYjfl8R9zG/zxUPUunWPAYaKvWOFviXcX/KqpjDqIIeWx -Gm0yNfyalHq476nRCf/0t9AH5X4Qy0KJSW5KfhQLG9X2z/UiJxwHKCwaWZtEEzPu -mGqvwhNXUOL/GuAZCJWPdWrUGM4kHHz3kw5v3UPNS2xA7yMtN9N1b8/pkTQ77XNk -DA4wngA5zc7Ae72SJDrY8XXqLfL4Nagkrn6AOhGK3/Ewvca6hkThMcHI0WF2AqFo -mo3iGUJzR5lOUx+4RiEBC5NNEZsE9GMNEiu8kYvCAS0FMKYmxFPGx1U/kiOeeuIw -W3sOEA== ------END CERTIFICATE----- diff --git a/google/services/cloudiot/test-fixtures/rsa_public.pem b/google/services/cloudiot/test-fixtures/rsa_public.pem deleted file mode 100644 index 2b2acadf676..00000000000 --- a/google/services/cloudiot/test-fixtures/rsa_public.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN PUBLIC KEY----- -MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAv6weC1aT16l2qS6qdYcy -7BOjzP7TwT9zUAiFhWpL256GRqC8yQRdqMsi68Q//762IUyu/qaHbEgQ8WRmQdVV -GDlxkBfrA/iXB2dgujq8jh0HWIV2ev3TerV3aUwvYUlrowhq027SX9U1hbufdGCM -uKsHiF05ErgNvEuR8XAkeJ/YV2DV2+sRq+Wg9y4RwUYbdchdFty1d5SX/s0Yqswg -yOG9VoCdR7baF22ughVR44aRm+83mgtqAZ4M+Rpe7JGRsUGY/pR391Toi0s8En15 -JGiAhqX2W0Uo/FZZry3yuqRfdHYENB+ADuyTMTrUaKZv7eua0lTBz5oom3jSF3gv -I7SQoLdK/jhEVOOq41IjB8D60Sgd69bD7yTI516yvZ/s3AyKzW6f6KnjdbCcZKKT -0GAePNLNhDYfSlA9bwJ8HQS2FenSpSTArKvGiVrsinJuNjbQdPuQHcpWf9x1m3GR -TMvF+TNYM/lp7IL2VMbJRfWPy1iWxm9F1Yr6dkHVoLP7ocYkNRHoPLut5E6IFJtK -lVI2NneUYJGnYSO+1xPV9TqlJeMNwr3uFMAN8N/oB3f4WWwuRYgR0L5g2A+Lvx+g -bbdl+Tb/0CNfslfSuDrFV8Z4n6gVwb9ZPGlNHCvnqRfLUpRFJwmR7UYvzi/E7rXJ -EDkK+tcnPkz2JtjdLKR7qVcCAwEAAQ== ------END PUBLIC KEY----- diff --git a/google/services/cloudrun/data_source_cloud_run_service.go 
b/google/services/cloudrun/data_source_cloud_run_service.go index 9f89acf3b4e..ac5a2f2e328 100644 --- a/google/services/cloudrun/data_source_cloud_run_service.go +++ b/google/services/cloudrun/data_source_cloud_run_service.go @@ -30,5 +30,14 @@ func dataSourceGoogleCloudRunServiceRead(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceCloudRunServiceRead(d, meta) + err = resourceCloudRunServiceRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/cloudrun/resource_cloud_run_domain_mapping.go b/google/services/cloudrun/resource_cloud_run_domain_mapping.go index 2ee55348a38..dca141d207a 100644 --- a/google/services/cloudrun/resource_cloud_run_domain_mapping.go +++ b/google/services/cloudrun/resource_cloud_run_domain_mapping.go @@ -18,12 +18,14 @@ package cloudrun import ( + "context" "fmt" "log" "reflect" "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -31,26 +33,14 @@ import ( "github.com/hashicorp/terraform-provider-google/google/verify" ) -var domainMappingGoogleProvidedLabels = []string{ - "cloud.googleapis.com/location", - "run.googleapis.com/overrideAt", -} - -func DomainMappingLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { - // Suppress diffs for the labels provided by Google - for _, label := range domainMappingGoogleProvidedLabels { - if strings.Contains(k, label) && new == "" { - return true - } - } +func hasMetadata(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + newCount := diff.Get("metadata.#") - // Let diff be determined by labels (above) - if strings.Contains(k, "labels.%") { - return true + if newCount.(int) < 1 { + return fmt.Errorf("Insufficient \"metadata\" blocks. 1 \"metadata\" block is required.") } - // For other keys, don't suppress diff. - return false + return nil } func ResourceCloudRunDomainMapping() *schema.Resource { @@ -68,6 +58,22 @@ func ResourceCloudRunDomainMapping() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + SchemaVersion: 1, + + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceCloudRunDomainMappingResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceCloudRunDomainMappingUpgradeV0, + Version: 0, + }, + }, + CustomizeDiff: customdiff.All( + hasMetadata, + tpgresource.SetMetadataLabelsDiff, + tpgresource.SetMetadataAnnotationsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -75,10 +81,53 @@ func ResourceCloudRunDomainMapping() *schema.Resource { ForceNew: true, Description: `The location of the cloud run instance. 
eg us-central1`, }, - "metadata": { + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Name should be a [verified](https://support.google.com/webmasters/answer/9008080) domain`, + }, + "spec": { Type: schema.TypeList, Required: true, ForceNew: true, + Description: `The spec for this DomainMapping.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "route_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name of the Cloud Run Service that this DomainMapping applies to. +The route must exist.`, + }, + "certificate_mode": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"NONE", "AUTOMATIC", ""}), + Description: `The mode of the certificate. Default value: "AUTOMATIC" Possible values: ["NONE", "AUTOMATIC"]`, + Default: "AUTOMATIC", + }, + "force_override": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `If set, the mapping will override any mapping set before this spec was set. +It is recommended that the user leaves this empty to receive an error +warning about a potential conflict and only set it once the respective UI +has given such a warning.`, + }, + }, + }, + }, + "metadata": { + Type: schema.TypeList, + Computed: true, + Optional: true, + ForceNew: true, Description: `Metadata associated with this DomainMapping.`, MaxItems: 1, Elem: &schema.Resource{ @@ -91,11 +140,9 @@ func ResourceCloudRunDomainMapping() *schema.Resource { project ID or project number.`, }, "annotations": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - ForceNew: true, - DiffSuppressFunc: cloudrunAnnotationDiffSuppress, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, Description: `Annotations is a key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations @@ -106,17 +153,32 @@ or apply the lifecycle.ignore_changes rule to the metadata.0.annotations field.` Elem: &schema.Schema{Type: schema.TypeString}, }, "labels": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - ForceNew: true, - DiffSuppressFunc: DomainMappingLabelDiffSuppress, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, Description: `Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and routes. -More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels`, +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "generation": { Type: schema.TypeInt, Computed: true, @@ -139,6 +201,13 @@ https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-c Computed: true, Description: `SelfLink is a URL representing this object.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -150,48 +219,6 @@ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/nam }, }, }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `Name should be a [verified](https://support.google.com/webmasters/answer/9008080) domain`, - }, - "spec": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - Description: `The spec for this DomainMapping.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "route_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, - Description: `The name of the Cloud Run Service that this DomainMapping applies to. -The route must exist.`, - }, - "certificate_mode": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"NONE", "AUTOMATIC", ""}), - Description: `The mode of the certificate. Default value: "AUTOMATIC" Possible values: ["NONE", "AUTOMATIC"]`, - Default: "AUTOMATIC", - }, - "force_override": { - Type: schema.TypeBool, - Optional: true, - ForceNew: true, - Description: `If set, the mapping will override any mapping set before this spec was set. 
-It is recommended that the user leaves this empty to receive an error -warning about a potential conflict and only set it once the respective UI -has given such a warning.`, - }, - }, - }, - }, "status": { Type: schema.TypeList, Computed: true, @@ -523,9 +550,9 @@ func resourceCloudRunDomainMappingDelete(d *schema.ResourceData, meta interface{ func resourceCloudRunDomainMappingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "locations/(?P[^/]+)/namespaces/(?P[^/]+)/domainmappings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^locations/(?P[^/]+)/namespaces/(?P[^/]+)/domainmappings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -702,10 +729,27 @@ func flattenCloudRunDomainMappingMetadata(v interface{}, d *schema.ResourceData, flattenCloudRunDomainMappingMetadataNamespace(original["namespace"], d, config) transformed["annotations"] = flattenCloudRunDomainMappingMetadataAnnotations(original["annotations"], d, config) + transformed["terraform_labels"] = + flattenCloudRunDomainMappingMetadataTerraformLabels(original["labels"], d, config) + transformed["effective_labels"] = + flattenCloudRunDomainMappingMetadataEffectiveLabels(original["labels"], d, config) + transformed["effective_annotations"] = + flattenCloudRunDomainMappingMetadataEffectiveAnnotations(original["annotations"], d, config) return []interface{}{transformed} } func flattenCloudRunDomainMappingMetadataLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunDomainMappingMetadataGeneration(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -742,6 +786,40 @@ func flattenCloudRunDomainMappingMetadataNamespace(v interface{}, d *schema.Reso } func flattenCloudRunDomainMappingMetadataAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCloudRunDomainMappingMetadataTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCloudRunDomainMappingMetadataEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenCloudRunDomainMappingMetadataEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -799,13 +877,6 @@ func expandCloudRunDomainMappingMetadata(v interface{}, d tpgresource.TerraformR original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - 
transformedLabels, err := expandCloudRunDomainMappingMetadataLabels(original["labels"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["labels"] = transformedLabels - } - transformedGeneration, err := expandCloudRunDomainMappingMetadataGeneration(original["generation"], d, config) if err != nil { return nil, err @@ -841,25 +912,21 @@ func expandCloudRunDomainMappingMetadata(v interface{}, d tpgresource.TerraformR transformed["namespace"] = transformedNamespace } - transformedAnnotations, err := expandCloudRunDomainMappingMetadataAnnotations(original["annotations"], d, config) + transformedEffectiveLabels, err := expandCloudRunDomainMappingMetadataEffectiveLabels(original["effective_labels"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedAnnotations); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["annotations"] = transformedAnnotations + } else if val := reflect.ValueOf(transformedEffectiveLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["labels"] = transformedEffectiveLabels } - return transformed, nil -} - -func expandCloudRunDomainMappingMetadataLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + transformedEffectiveAnnotations, err := expandCloudRunDomainMappingMetadataEffectiveAnnotations(original["effective_annotations"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEffectiveAnnotations); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["annotations"] = transformedEffectiveAnnotations } - return m, nil + + return transformed, nil } func expandCloudRunDomainMappingMetadataGeneration(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -882,7 +949,18 @@ func expandCloudRunDomainMappingMetadataNamespace(v interface{}, d tpgresource.T return v, nil } -func expandCloudRunDomainMappingMetadataAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandCloudRunDomainMappingMetadataEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandCloudRunDomainMappingMetadataEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -915,3 +993,280 @@ func resourceCloudRunDomainMappingDecoder(d *schema.ResourceData, meta interface } return res, nil } + +var domainMappingGoogleProvidedLocationLabel = "cloud.googleapis.com/location" +var domainMappingGoogleProvidedOverrideLabel = "run.googleapis.com/overrideAt" + +var domainMappingGoogleProvidedLabels = []string{ + domainMappingGoogleProvidedLocationLabel, + domainMappingGoogleProvidedOverrideLabel, +} + +func DomainMappingLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + // Suppress diffs for the labels provided by Google + for _, label := 
range domainMappingGoogleProvidedLabels { + if strings.Contains(k, label) && new == "" { + return true + } + } + + // Let diff be determined by labels (above) + if strings.Contains(k, "labels.%") { + return true + } + + // For other keys, don't suppress diff. + return false +} + +func resourceCloudRunDomainMappingResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The location of the cloud run instance. eg us-central1`, + }, + "metadata": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + Description: `Metadata associated with this DomainMapping.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "namespace": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `In Cloud Run the namespace must be equal to either the +project ID or project number.`, + }, + "annotations": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + ForceNew: true, + DiffSuppressFunc: cloudrunAnnotationDiffSuppress, + Description: `Annotations is a key value map stored with a resource that +may be set by external tools to store and retrieve arbitrary metadata. More +info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations + +**Note**: The Cloud Run API may add additional annotations that were not provided in your config. +If terraform plan shows a diff where a server-side annotation is added, you can add it to your config +or apply the lifecycle.ignore_changes rule to the metadata.0.annotations field.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "labels": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + ForceNew: true, + DiffSuppressFunc: DomainMappingLabelDiffSuppress, + Description: `Map of string keys and values that can be used to organize and categorize +(scope and select) objects. May match selectors of replication controllers +and routes. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "generation": { + Type: schema.TypeInt, + Computed: true, + Description: `A sequence number representing a specific generation of the desired state.`, + }, + "resource_version": { + Type: schema.TypeString, + Computed: true, + Description: `An opaque value that represents the internal version of this object that +can be used by clients to determine when objects have changed. May be used +for optimistic concurrency, change detection, and the watch operation on a +resource or set of resources. They may only be valid for a +particular resource or set of resources. + +More info: +https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency`, + }, + "self_link": { + Type: schema.TypeString, + Computed: true, + Description: `SelfLink is a URL representing this object.`, + }, + "uid": { + Type: schema.TypeString, + Computed: true, + Description: `UID is a unique id generated by the server on successful creation of a resource and is not +allowed to change on PUT operations. 
+ +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids`, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Name should be a [verified](https://support.google.com/webmasters/answer/9008080) domain`, + }, + "spec": { + Type: schema.TypeList, + Required: true, + ForceNew: true, + Description: `The spec for this DomainMapping.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "route_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name of the Cloud Run Service that this DomainMapping applies to. +The route must exist.`, + }, + "certificate_mode": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"NONE", "AUTOMATIC", ""}), + Description: `The mode of the certificate. Default value: "AUTOMATIC" Possible values: ["NONE", "AUTOMATIC"]`, + Default: "AUTOMATIC", + }, + "force_override": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `If set, the mapping will override any mapping set before this spec was set. +It is recommended that the user leaves this empty to receive an error +warning about a potential conflict and only set it once the respective UI +has given such a warning.`, + }, + }, + }, + }, + "status": { + Type: schema.TypeList, + Computed: true, + Description: `The current status of the DomainMapping.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resource_records": { + Type: schema.TypeList, + Optional: true, + Description: `The resource records required to configure this domain mapping. These +records must be added to the domain's DNS configuration in order to +serve the application via this domain mapping.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidateEnum([]string{"A", "AAAA", "CNAME", ""}), + Description: `Resource record type. Example: 'AAAA'. Possible values: ["A", "AAAA", "CNAME"]`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Description: `Relative name of the object affected by this record. Only applicable for +'CNAME' records. Example: 'www'.`, + }, + "rrdata": { + Type: schema.TypeString, + Computed: true, + Description: `Data for this record. 
Values vary by record type, as defined in RFC 1035 +(section 5) and RFC 1034 (section 3.6.1).`, + }, + }, + }, + }, + "conditions": { + Type: schema.TypeList, + Computed: true, + Description: `Array of observed DomainMappingConditions, indicating the current state +of the DomainMapping.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "message": { + Type: schema.TypeString, + Computed: true, + Description: `Human readable message indicating details about the current status.`, + }, + "reason": { + Type: schema.TypeString, + Computed: true, + Description: `One-word CamelCase reason for the condition's current status.`, + }, + "status": { + Type: schema.TypeString, + Computed: true, + Description: `Status of the condition, one of True, False, Unknown.`, + }, + "type": { + Type: schema.TypeString, + Computed: true, + Description: `Type of domain mapping condition.`, + }, + }, + }, + }, + "mapped_route_name": { + Type: schema.TypeString, + Computed: true, + Description: `The name of the route that the mapping currently points to.`, + }, + "observed_generation": { + Type: schema.TypeInt, + Computed: true, + Description: `ObservedGeneration is the 'Generation' of the DomainMapping that +was last processed by the controller.`, + }, + }, + }, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func ResourceCloudRunDomainMappingUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + log.Printf("[DEBUG] Attributes before migration: %#v", rawState) + + if rawState["metadata"] != nil { + rawMetadatas := rawState["metadata"].([]interface{}) + if len(rawMetadatas) > 0 && rawMetadatas[0] != nil { + // Upgrade labels fields + rawMetadata := rawMetadatas[0].(map[string]interface{}) + + rawLabels := rawMetadata["labels"] + if rawLabels != nil { + labels := make(map[string]interface{}) + effectiveLabels := make(map[string]interface{}) + + for k, v := range rawLabels.(map[string]interface{}) { + effectiveLabels[k] = v + + if !strings.Contains(k, domainMappingGoogleProvidedLocationLabel) && !strings.Contains(k, domainMappingGoogleProvidedOverrideLabel) { + labels[k] = v + } + } + + rawMetadata["labels"] = labels + rawMetadata["effective_labels"] = effectiveLabels + } + + upgradeAnnotations(rawMetadata) + + rawState["metadata"] = []interface{}{rawMetadata} + } + } + + log.Printf("[DEBUG] Attributes after migration: %#v", rawState) + return rawState, nil +} diff --git a/google/services/cloudrun/resource_cloud_run_domain_mapping_generated_test.go b/google/services/cloudrun/resource_cloud_run_domain_mapping_generated_test.go index 604192055ca..d1ed60d7b27 100644 --- a/google/services/cloudrun/resource_cloud_run_domain_mapping_generated_test.go +++ b/google/services/cloudrun/resource_cloud_run_domain_mapping_generated_test.go @@ -51,7 +51,7 @@ func TestAccCloudRunDomainMapping_cloudRunDomainMappingBasicExample(t *testing.T ResourceName: "google_cloud_run_domain_mapping.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) diff --git a/google/services/cloudrun/resource_cloud_run_domain_mapping_test.go b/google/services/cloudrun/resource_cloud_run_domain_mapping_test.go index db6e5c5bd89..637d3824bc5 100644 --- 
a/google/services/cloudrun/resource_cloud_run_domain_mapping_test.go +++ b/google/services/cloudrun/resource_cloud_run_domain_mapping_test.go @@ -31,7 +31,7 @@ func TestAccCloudRunDomainMapping_foregroundDeletion(t *testing.T) { ResourceName: "google_cloud_run_domain_mapping.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "status", "metadata.0.resource_version"}, + ImportStateVerifyIgnore: []string{"name", "location", "status", "metadata.0.labels", "metadata.0.terraform_labels", "metadata.0.resource_version"}, }, { Config: testAccCloudRunDomainMapping_cloudRunDomainMappingUpdated2(context), @@ -40,7 +40,7 @@ func TestAccCloudRunDomainMapping_foregroundDeletion(t *testing.T) { ResourceName: "google_cloud_run_domain_mapping.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "status", "metadata.0.resource_version"}, + ImportStateVerifyIgnore: []string{"name", "location", "status", "metadata.0.labels", "metadata.0.terraform_labels", "metadata.0.resource_version"}, }, }, }) diff --git a/google/services/cloudrun/resource_cloud_run_service.go b/google/services/cloudrun/resource_cloud_run_service.go index 741442f6223..930b8f6ed2e 100644 --- a/google/services/cloudrun/resource_cloud_run_service.go +++ b/google/services/cloudrun/resource_cloud_run_service.go @@ -44,27 +44,6 @@ func revisionNameCustomizeDiff(_ context.Context, diff *schema.ResourceDiff, v i return nil } -var cloudRunGoogleProvidedAnnotations = regexp.MustCompile(`serving\.knative\.dev/(?:(?:creator)|(?:lastModifier))$|run\.googleapis\.com/(?:(?:ingress-status)|(?:operation-id))$|cloud\.googleapis\.com/(?:(?:location))`) - -func cloudrunAnnotationDiffSuppress(k, old, new string, d *schema.ResourceData) bool { - // Suppress diffs for the annotations provided by Google - if cloudRunGoogleProvidedAnnotations.MatchString(k) && new == "" { - return true - } - - if strings.HasSuffix(k, "run.googleapis.com/ingress") { - return old == "all" && new == "" - } - - // Let diff be determined by annotations (above) - if strings.Contains(k, "annotations.%") { - return true - } - - // For other keys, don't suppress diff. - return false -} - var cloudRunGoogleProvidedTemplateAnnotations = regexp.MustCompile(`template\.0\.metadata\.0\.annotations\.run\.googleapis\.com/sandbox`) var cloudRunGoogleProvidedTemplateAnnotations_autoscaling_maxscale = regexp.MustCompile(`template\.0\.metadata\.0\.annotations\.autoscaling\.knative\.dev/maxScale`) @@ -83,23 +62,6 @@ func cloudrunTemplateAnnotationDiffSuppress(k, old, new string, d *schema.Resour return false } -var cloudRunGoogleProvidedLabels = regexp.MustCompile(`cloud\.googleapis\.com/(?:(?:location))`) - -func cloudrunLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { - // Suppress diffs for the labels provided by Google - if cloudRunGoogleProvidedLabels.MatchString(k) && new == "" { - return true - } - - // Let diff be determined by labels (above) - if strings.Contains(k, "labels.%") { - return true - } - - // For other keys, don't suppress diff. 
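The annotation suppress function deleted here (and re-added further down for the legacy v1 schema, once `effective_annotations` carries the server view for the current schema) treats Google-injected annotation keys as non-diffs when they are absent from configuration. A self-contained sketch of that matching logic, copied from the regexp and checks visible in this hunk; the sample keys in `main` are illustrative only:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Regexp copied from the hunk above: annotations the Cloud Run API adds on its own.
var googleProvided = regexp.MustCompile(`serving\.knative\.dev/(?:(?:creator)|(?:lastModifier))$|run\.googleapis\.com/(?:(?:ingress-status)|(?:operation-id))$|cloud\.googleapis\.com/(?:(?:location))`)

// suppress mirrors cloudrunAnnotationDiffSuppress: true means "hide this diff".
func suppress(k, old, new string) bool {
	// Server-added annotation keys with no configured value produce no diff.
	if googleProvided.MatchString(k) && new == "" {
		return true
	}
	// The API defaults ingress to "all"; an unset config value is not a change.
	if strings.HasSuffix(k, "run.googleapis.com/ingress") {
		return old == "all" && new == ""
	}
	// Let the per-key entries above decide; ignore the map-size pseudo-key.
	if strings.Contains(k, "annotations.%") {
		return true
	}
	return false
}

func main() {
	fmt.Println(suppress("metadata.0.annotations.serving.knative.dev/creator", "someone@example.com", "")) // true
	fmt.Println(suppress("metadata.0.annotations.run.googleapis.com/ingress", "all", ""))                  // true
	fmt.Println(suppress("metadata.0.annotations.team", "a", "b"))                                          // false
}
```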
- return false -} - var cloudRunGoogleProvidedTemplateLabels = []string{ "run.googleapis.com/startupProbeType", } @@ -138,9 +100,20 @@ func ResourceCloudRunService() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, - SchemaVersion: 1, + SchemaVersion: 2, + + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceCloudRunServiceResourceV1().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceCloudRunServiceUpgradeV1, + Version: 1, + }, + }, CustomizeDiff: customdiff.All( revisionNameCustomizeDiff, + tpgresource.SetMetadataLabelsDiff, + tpgresource.SetMetadataAnnotationsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -924,10 +897,8 @@ and annotations.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "annotations": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - DiffSuppressFunc: cloudrunAnnotationDiffSuppress, + Type: schema.TypeMap, + Optional: true, Description: `Annotations is a key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations @@ -952,13 +923,14 @@ keys to configure features on a Service: Elem: &schema.Schema{Type: schema.TypeString}, }, "labels": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - DiffSuppressFunc: cloudrunLabelDiffSuppress, + Type: schema.TypeMap, + Optional: true, Description: `Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers -and routes.`, +and routes. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "namespace": { @@ -968,6 +940,18 @@ and routes.`, Description: `In Cloud Run the namespace must be equal to either the project ID or project number.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "generation": { Type: schema.TypeInt, Computed: true, @@ -987,6 +971,13 @@ particular resource or set of resources.`, Computed: true, Description: `SelfLink is a URL representing this object.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -1507,9 +1498,9 @@ func resourceCloudRunServiceDelete(d *schema.ResourceData, meta interface{}) err func resourceCloudRunServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "locations/(?P[^/]+)/namespaces/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - 
"(?P[^/]+)/(?P[^/]+)", + "^locations/(?P[^/]+)/namespaces/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -2729,10 +2720,27 @@ func flattenCloudRunServiceMetadata(v interface{}, d *schema.ResourceData, confi flattenCloudRunServiceMetadataNamespace(original["namespace"], d, config) transformed["annotations"] = flattenCloudRunServiceMetadataAnnotations(original["annotations"], d, config) + transformed["terraform_labels"] = + flattenCloudRunServiceMetadataTerraformLabels(original["labels"], d, config) + transformed["effective_labels"] = + flattenCloudRunServiceMetadataEffectiveLabels(original["labels"], d, config) + transformed["effective_annotations"] = + flattenCloudRunServiceMetadataEffectiveAnnotations(original["annotations"], d, config) return []interface{}{transformed} } func flattenCloudRunServiceMetadataLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunServiceMetadataGeneration(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -2769,6 +2777,40 @@ func flattenCloudRunServiceMetadataNamespace(v interface{}, d *schema.ResourceDa } func flattenCloudRunServiceMetadataAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCloudRunServiceMetadataTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("metadata.0.terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenCloudRunServiceMetadataEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenCloudRunServiceMetadataEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -4127,13 +4169,6 @@ func expandCloudRunServiceMetadata(v interface{}, d tpgresource.TerraformResourc original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedLabels, err := expandCloudRunServiceMetadataLabels(original["labels"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["labels"] = transformedLabels - } - transformedGeneration, err := expandCloudRunServiceMetadataGeneration(original["generation"], d, config) if err != nil { return nil, err @@ -4169,25 +4204,21 @@ func expandCloudRunServiceMetadata(v interface{}, d tpgresource.TerraformResourc transformed["namespace"] = transformedNamespace } - transformedAnnotations, err := expandCloudRunServiceMetadataAnnotations(original["annotations"], d, config) + transformedEffectiveLabels, err := 
expandCloudRunServiceMetadataEffectiveLabels(original["effective_labels"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedAnnotations); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["annotations"] = transformedAnnotations + } else if val := reflect.ValueOf(transformedEffectiveLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["labels"] = transformedEffectiveLabels } - return transformed, nil -} - -func expandCloudRunServiceMetadataLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + transformedEffectiveAnnotations, err := expandCloudRunServiceMetadataEffectiveAnnotations(original["effective_annotations"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedEffectiveAnnotations); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["annotations"] = transformedEffectiveAnnotations } - return m, nil + + return transformed, nil } func expandCloudRunServiceMetadataGeneration(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -4217,7 +4248,18 @@ func expandCloudRunServiceMetadataNamespace(v interface{}, d tpgresource.Terrafo return v, nil } -func expandCloudRunServiceMetadataAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandCloudRunServiceMetadataEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandCloudRunServiceMetadataEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -4253,3 +4295,1081 @@ func resourceCloudRunServiceDecoder(d *schema.ResourceData, meta interface{}, re } return res, nil } + +var cloudRunGoogleProvidedAnnotations = regexp.MustCompile(`serving\.knative\.dev/(?:(?:creator)|(?:lastModifier))$|run\.googleapis\.com/(?:(?:ingress-status)|(?:operation-id))$|cloud\.googleapis\.com/(?:(?:location))`) + +func cloudrunAnnotationDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + // Suppress diffs for the annotations provided by Google + if cloudRunGoogleProvidedAnnotations.MatchString(k) && new == "" { + return true + } + + if strings.HasSuffix(k, "run.googleapis.com/ingress") { + return old == "all" && new == "" + } + + // Let diff be determined by annotations (above) + if strings.Contains(k, "annotations.%") { + return true + } + + // For other keys, don't suppress diff. + return false +} + +var cloudRunGoogleProvidedLabels = regexp.MustCompile(`cloud\.googleapis\.com/(?:(?:location))`) + +func cloudrunLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + // Suppress diffs for the labels provided by Google + if cloudRunGoogleProvidedLabels.MatchString(k) && new == "" { + return true + } + + // Let diff be determined by labels (above) + if strings.Contains(k, "labels.%") { + return true + } + + // For other keys, don't suppress diff. 
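The same migration idea shown earlier for domain mappings applies to the service resource: on state upgrade, every label recorded in state is copied into `effective_labels`, while only keys that are not Google-provided remain in the user-managed `labels` map. A minimal sketch of that split, assuming `cloud.googleapis.com/location` (taken from the hunks above) as the provider-owned key; the input map in `main` is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// Google-owned label keys that should not appear in the user-managed map.
// (The domain mapping upgrader above also checks run.googleapis.com/overrideAt.)
var googleProvidedLabelKeys = []string{"cloud.googleapis.com/location"}

// splitLabels mirrors the upgrade step: all keys land in effective, user keys in labels.
func splitLabels(raw map[string]interface{}) (labels, effective map[string]interface{}) {
	labels = map[string]interface{}{}
	effective = map[string]interface{}{}
	for k, v := range raw {
		effective[k] = v
		googleOwned := false
		for _, g := range googleProvidedLabelKeys {
			if strings.Contains(k, g) {
				googleOwned = true
				break
			}
		}
		if !googleOwned {
			labels[k] = v
		}
	}
	return labels, effective
}

func main() {
	raw := map[string]interface{}{
		"cloud.googleapis.com/location": "us-central1",
		"env":                           "dev",
	}
	labels, effective := splitLabels(raw)
	fmt.Println(labels)    // map[env:dev]
	fmt.Println(effective) // map[cloud.googleapis.com/location:us-central1 env:dev]
}
```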
+ return false +} + +func resourceCloudRunServiceResourceV1() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The location of the cloud run instance. eg us-central1`, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Name must be unique within a Google Cloud project and region. +Is required when creating resources. Name is primarily intended +for creation idempotence and configuration definition. Cannot be updated. +More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names`, + }, + "template": { + Type: schema.TypeList, + Optional: true, + Description: `template holds the latest specification for the Revision to +be stamped out. The template references the container image, and may also +include labels and annotations that should be attached to the Revision. +To correlate a Revision, and/or to force a Revision to be created when the +spec doesn't otherwise change, a nonce label may be provided in the +template metadata. For more details, see: +https://github.com/knative/serving/blob/main/docs/client-conventions.md#associate-modifications-with-revisions + +Cloud Run does not currently support referencing a build that is +responsible for materializing the container image from source.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "spec": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `RevisionSpec holds the desired state of the Revision (from the client).`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "containers": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Containers defines the unit of execution for this Revision.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "image": { + Type: schema.TypeString, + Required: true, + Description: `Docker image name. This is most often a reference to a container located +in the container registry, such as gcr.io/cloudrun/hello`, + }, + "args": { + Type: schema.TypeList, + Optional: true, + Description: `Arguments to the entrypoint. +The docker image's CMD is used if this is not provided.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "command": { + Type: schema.TypeList, + Optional: true, + Description: `Entrypoint array. Not executed within a shell. +The docker image's ENTRYPOINT is used if this is not provided.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "env": { + Type: schema.TypeSet, + Optional: true, + Description: `List of environment variables to set in the container.`, + Elem: cloudrunServiceSpecTemplateSpecContainersContainersEnvSchema(), + // Default schema.HashSchema is used. + }, + "env_from": { + Type: schema.TypeList, + Optional: true, + Deprecated: "`env_from` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", + ForceNew: true, + Description: `List of sources to populate environment variables in the container. +All invalid keys will be reported as an event when the container is starting. +When a key exists in multiple sources, the value associated with the last source will +take precedence. 
Values defined by an Env with a duplicate key will take +precedence.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "config_map_ref": { + Type: schema.TypeList, + Optional: true, + Description: `The ConfigMap to select from.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "local_object_reference": { + Type: schema.TypeList, + Optional: true, + Description: `The ConfigMap to select from.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: `Name of the referent.`, + }, + }, + }, + }, + "optional": { + Type: schema.TypeBool, + Optional: true, + Description: `Specify whether the ConfigMap must be defined`, + }, + }, + }, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + Description: `An optional identifier to prepend to each key in the ConfigMap.`, + }, + "secret_ref": { + Type: schema.TypeList, + Optional: true, + Description: `The Secret to select from.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "local_object_reference": { + Type: schema.TypeList, + Optional: true, + Description: `The Secret to select from.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: `Name of the referent.`, + }, + }, + }, + }, + "optional": { + Type: schema.TypeBool, + Optional: true, + Description: `Specify whether the Secret must be defined`, + }, + }, + }, + }, + }, + }, + }, + "liveness_probe": { + Type: schema.TypeList, + Optional: true, + Description: `Periodic probe of container liveness. Container will be restarted if the probe fails.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "failure_threshold": { + Type: schema.TypeInt, + Optional: true, + Description: `Minimum consecutive failures for the probe to be considered failed after +having succeeded. Defaults to 3. Minimum value is 1.`, + Default: 3, + }, + "grpc": { + Type: schema.TypeList, + Optional: true, + Description: `GRPC specifies an action involving a GRPC port.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "port": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `Port number to access on the container. Number must be in the range 1 to 65535. +If not specified, defaults to the same value as container.ports[0].containerPort.`, + }, + "service": { + Type: schema.TypeString, + Optional: true, + Description: `The name of the service to place in the gRPC HealthCheckRequest +(see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). +If this is not specified, the default behavior is defined by gRPC.`, + }, + }, + }, + ExactlyOneOf: []string{}, + }, + "http_get": { + Type: schema.TypeList, + Optional: true, + Description: `HttpGet specifies the http request to perform.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "http_headers": { + Type: schema.TypeList, + Optional: true, + Description: `Custom headers to set in the request. 
HTTP allows repeated headers.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: `The header field name.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `The header field value.`, + Default: "", + }, + }, + }, + }, + "path": { + Type: schema.TypeString, + Optional: true, + Description: `Path to access on the HTTP server. If set, it should not be empty string.`, + Default: "/", + }, + "port": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `Port number to access on the container. Number must be in the range 1 to 65535. +If not specified, defaults to the same value as container.ports[0].containerPort.`, + }, + }, + }, + ExactlyOneOf: []string{}, + }, + "initial_delay_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of seconds after the container has started before the probe is +initiated. +Defaults to 0 seconds. Minimum value is 0. Maximum value is 3600.`, + Default: 0, + }, + "period_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `How often (in seconds) to perform the probe. +Default to 10 seconds. Minimum value is 1. Maximum value is 3600.`, + Default: 10, + }, + "timeout_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of seconds after which the probe times out. +Defaults to 1 second. Minimum value is 1. Maximum value is 3600. +Must be smaller than period_seconds.`, + Default: 1, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `Name of the container`, + }, + "ports": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `List of open ports in the container.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "container_port": { + Type: schema.TypeInt, + Optional: true, + Description: `Port number the container listens on. This must be a valid port number (between 1 and 65535). Defaults to "8080".`, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `If specified, used to specify which protocol to use. Allowed values are "http1" (HTTP/1) and "h2c" (HTTP/2 end-to-end). Defaults to "http1".`, + }, + "protocol": { + Type: schema.TypeString, + Optional: true, + Description: `Protocol for port. Must be "TCP". Defaults to "TCP".`, + }, + }, + }, + }, + "resources": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Compute Resources required by this container. Used to set values such as max memory`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "limits": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + Description: `Limits describes the maximum amount of compute resources allowed. +The values of the map is string form of the 'quantity' k8s type: +https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "requests": { + Type: schema.TypeMap, + Optional: true, + Description: `Requests describes the minimum amount of compute resources required. +If Requests is omitted for a container, it defaults to Limits if that is +explicitly specified, otherwise to an implementation-defined value. 
+The values of the map is string form of the 'quantity' k8s type: +https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "startup_probe": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Startup probe of application within the container. +All other probes are disabled if a startup probe is provided, until it +succeeds. Container will not be added to service endpoints if the probe fails.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "failure_threshold": { + Type: schema.TypeInt, + Optional: true, + Description: `Minimum consecutive failures for the probe to be considered failed after +having succeeded. Defaults to 3. Minimum value is 1.`, + Default: 3, + }, + "grpc": { + Type: schema.TypeList, + Optional: true, + Description: `GRPC specifies an action involving a GRPC port.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "port": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `Port number to access on the container. Number must be in the range 1 to 65535. +If not specified, defaults to the same value as container.ports[0].containerPort.`, + }, + "service": { + Type: schema.TypeString, + Optional: true, + Description: `The name of the service to place in the gRPC HealthCheckRequest +(see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). +If this is not specified, the default behavior is defined by gRPC.`, + }, + }, + }, + ExactlyOneOf: []string{}, + }, + "http_get": { + Type: schema.TypeList, + Optional: true, + Description: `HttpGet specifies the http request to perform.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "http_headers": { + Type: schema.TypeList, + Optional: true, + Description: `Custom headers to set in the request. HTTP allows repeated headers.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: `The header field name.`, + }, + "value": { + Type: schema.TypeString, + Optional: true, + Description: `The header field value.`, + Default: "", + }, + }, + }, + }, + "path": { + Type: schema.TypeString, + Optional: true, + Description: `Path to access on the HTTP server. If set, it should not be empty string.`, + Default: "/", + }, + "port": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `Port number to access on the container. Number must be in the range 1 to 65535. +If not specified, defaults to the same value as container.ports[0].containerPort.`, + }, + }, + }, + ExactlyOneOf: []string{}, + }, + "initial_delay_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of seconds after the container has started before the probe is +initiated. +Defaults to 0 seconds. Minimum value is 0. Maximum value is 240.`, + Default: 0, + }, + "period_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `How often (in seconds) to perform the probe. +Default to 10 seconds. Minimum value is 1. 
Maximum value is 240.`, + Default: 10, + }, + "tcp_socket": { + Type: schema.TypeList, + Optional: true, + Description: `TcpSocket specifies an action involving a TCP port.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "port": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `Port number to access on the container. Number must be in the range 1 to 65535. +If not specified, defaults to the same value as container.ports[0].containerPort.`, + }, + }, + }, + ExactlyOneOf: []string{}, + }, + "timeout_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of seconds after which the probe times out. +Defaults to 1 second. Minimum value is 1. Maximum value is 3600. +Must be smaller than periodSeconds.`, + Default: 1, + }, + }, + }, + }, + "volume_mounts": { + Type: schema.TypeList, + Optional: true, + Description: `Volume to mount into the container's filesystem. +Only supports SecretVolumeSources.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mount_path": { + Type: schema.TypeString, + Required: true, + Description: `Path within the container at which the volume should be mounted. Must +not contain ':'.`, + }, + "name": { + Type: schema.TypeString, + Required: true, + Description: `This must match the Name of a Volume.`, + }, + }, + }, + }, + "working_dir": { + Type: schema.TypeString, + Optional: true, + Deprecated: "`working_dir` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", + ForceNew: true, + Description: `Container's working directory. +If not specified, the container runtime's default will be used, which +might be configured in the container image.`, + }, + }, + }, + }, + "container_concurrency": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `ContainerConcurrency specifies the maximum allowed in-flight (concurrent) +requests per container of the Revision. Values are: +- '0' thread-safe, the system should manage the max concurrency. This is + the default value. +- '1' not-thread-safe. Single concurrency +- '2-N' thread-safe, max concurrency of N`, + }, + "service_account_name": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `Email address of the IAM service account associated with the revision of the +service. The service account represents the identity of the running revision, +and determines what permissions the revision has. If not provided, the revision +will use the project's default service account.`, + }, + "timeout_seconds": { + Type: schema.TypeInt, + Computed: true, + Optional: true, + Description: `TimeoutSeconds holds the max duration the instance is allowed for responding to a request.`, + }, + "volumes": { + Type: schema.TypeList, + Optional: true, + Description: `Volume represents a named volume in a container.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + Description: `Volume's name.`, + }, + "secret": { + Type: schema.TypeList, + Optional: true, + Description: `The secret's value will be presented as the content of a file whose +name is defined in the item path. If no items are defined, the name of +the file is the secret_name.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "secret_name": { + Type: schema.TypeString, + Required: true, + Description: `The name of the secret in Cloud Secret Manager. 
By default, the secret +is assumed to be in the same project. +If the secret is in another project, you must define an alias. +An alias definition has the form: +{alias}:projects/{project-id|project-number}/secrets/{secret-name}. +If multiple alias definitions are needed, they must be separated by +commas. +The alias definitions must be set on the run.googleapis.com/secrets +annotation.`, + }, + "default_mode": { + Type: schema.TypeInt, + Optional: true, + Description: `Mode bits to use on created files by default. Must be a value between 0000 +and 0777. Defaults to 0644. Directories within the path are not affected by +this setting. This might be in conflict with other options that affect the +file mode, like fsGroup, and the result can be other mode bits set.`, + }, + "items": { + Type: schema.TypeList, + Optional: true, + Description: `If unspecified, the volume will expose a file whose name is the +secret_name. +If specified, the key will be used as the version to fetch from Cloud +Secret Manager and the path will be the name of the file exposed in the +volume. When items are defined, they must specify a key and a path.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Required: true, + Description: `The Cloud Secret Manager secret version. +Can be 'latest' for the latest value or an integer for a specific version.`, + }, + "path": { + Type: schema.TypeString, + Required: true, + Description: `The relative path of the file to map the key to. +May not be an absolute path. +May not contain the path element '..'. +May not start with the string '..'.`, + }, + "mode": { + Type: schema.TypeInt, + Optional: true, + Description: `Mode bits to use on this file, must be a value between 0000 and 0777. If +not specified, the volume defaultMode will be used. This might be in +conflict with other options that affect the file mode, like fsGroup, and +the result can be other mode bits set.`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "serving_state": { + Type: schema.TypeString, + Computed: true, + Deprecated: "`serving_state` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", + Description: `ServingState holds a value describing the state the resources +are in for this Revision. +It is expected +that the system will manipulate this based on routability and load.`, + }, + }, + }, + }, + "metadata": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Optional metadata for this Revision, including labels and annotations. +Name will be generated by the Configuration. To set minimum instances +for this revision, use the "autoscaling.knative.dev/minScale" annotation +key. To set maximum instances for this revision, use the +"autoscaling.knative.dev/maxScale" annotation key. To set Cloud SQL +connections for the revision, use the "run.googleapis.com/cloudsql-instances" +annotation key.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "annotations": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + DiffSuppressFunc: cloudrunTemplateAnnotationDiffSuppress, + Description: `Annotations is a key value map stored with a resource that +may be set by external tools to store and retrieve arbitrary metadata. More +info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations + +**Note**: The Cloud Run API may add additional annotations that were not provided in your config. 
+If terraform plan shows a diff where a server-side annotation is added, you can add it to your config +or apply the lifecycle.ignore_changes rule to the metadata.0.annotations field. + +Annotations with 'run.googleapis.com/' and 'autoscaling.knative.dev' are restricted. Use the following annotation +keys to configure features on a Revision template: + +- 'autoscaling.knative.dev/maxScale' sets the [maximum number of container + instances](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--max-instances) of the Revision to run. +- 'autoscaling.knative.dev/minScale' sets the [minimum number of container + instances](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--min-instances) of the Revision to run. +- 'run.googleapis.com/client-name' sets the client name calling the Cloud Run API. +- 'run.googleapis.com/cloudsql-instances' sets the [Cloud SQL + instances](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--add-cloudsql-instances) the Revision connects to. +- 'run.googleapis.com/cpu-throttling' sets whether to throttle the CPU when the container is not actively serving + requests. See https://cloud.google.com/sdk/gcloud/reference/run/deploy#--[no-]cpu-throttling. +- 'run.googleapis.com/encryption-key-shutdown-hours' sets the number of hours to wait before an automatic shutdown + server after CMEK key revocation is detected. +- 'run.googleapis.com/encryption-key' sets the [CMEK key](https://cloud.google.com/run/docs/securing/using-cmek) + reference to encrypt the container with. +- 'run.googleapis.com/execution-environment' sets the [execution + environment](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--execution-environment) + where the application will run. +- 'run.googleapis.com/post-key-revocation-action-type' sets the + [action type](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--post-key-revocation-action-type) + after CMEK key revocation. +- 'run.googleapis.com/secrets' sets a list of key-value pairs to set as + [secrets](https://cloud.google.com/run/docs/configuring/secrets#yaml). +- 'run.googleapis.com/sessionAffinity' sets whether to enable + [session affinity](https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--[no-]session-affinity) + for connections to the Revision. +- 'run.googleapis.com/startup-cpu-boost' sets whether to allocate extra CPU to containers on startup. + See https://cloud.google.com/sdk/gcloud/reference/run/deploy#--[no-]cpu-boost. +- 'run.googleapis.com/vpc-access-connector' sets a [VPC connector](https://cloud.google.com/run/docs/configuring/connecting-vpc#terraform_1) + for the Revision. +- 'run.googleapis.com/vpc-access-egress' sets the outbound traffic to send through the VPC connector for this resource. + See https://cloud.google.com/sdk/gcloud/reference/run/deploy#--vpc-egress.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "labels": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + DiffSuppressFunc: cloudrunTemplateLabelDiffSuppress, + Description: `Map of string keys and values that can be used to organize and categorize +(scope and select) objects.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "name": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `Name must be unique within a Google Cloud project and region. +Is required when creating resources. Name is primarily intended +for creation idempotence and configuration definition. 
Cannot be updated.`, + }, + "namespace": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `In Cloud Run the namespace must be equal to either the +project ID or project number. It will default to the resource's project.`, + }, + "generation": { + Type: schema.TypeInt, + Computed: true, + Description: `A sequence number representing a specific generation of the desired state.`, + }, + "resource_version": { + Type: schema.TypeString, + Computed: true, + Description: `An opaque value that represents the internal version of this object that +can be used by clients to determine when objects have changed. May be used +for optimistic concurrency, change detection, and the watch operation on a +resource or set of resources. They may only be valid for a +particular resource or set of resources.`, + }, + "self_link": { + Type: schema.TypeString, + Computed: true, + Description: `SelfLink is a URL representing this object.`, + }, + "uid": { + Type: schema.TypeString, + Computed: true, + Description: `UID is a unique id generated by the server on successful creation of a resource and is not +allowed to change on PUT operations.`, + }, + }, + }, + }, + }, + }, + }, + "traffic": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Traffic specifies how to distribute traffic over a collection of Knative Revisions +and Configurations`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "percent": { + Type: schema.TypeInt, + Required: true, + Description: `Percent specifies percent of the traffic to this Revision or Configuration.`, + }, + "latest_revision": { + Type: schema.TypeBool, + Optional: true, + Description: `LatestRevision may be optionally provided to indicate that the latest ready +Revision of the Configuration should be used for this traffic target. When +provided LatestRevision must be true if RevisionName is empty; it must be +false when RevisionName is non-empty.`, + }, + "revision_name": { + Type: schema.TypeString, + Optional: true, + Description: `RevisionName of a specific revision to which to send this portion of traffic.`, + }, + "tag": { + Type: schema.TypeString, + Optional: true, + Description: `Tag is optionally used to expose a dedicated url for referencing this target exclusively.`, + }, + "url": { + Type: schema.TypeString, + Computed: true, + Description: `URL displays the URL for accessing tagged traffic targets. URL is displayed in status, +and is disallowed on spec. URL must contain a scheme (e.g. http://) and a hostname, +but may not contain anything else (e.g. basic auth, url path, etc.)`, + }, + }, + }, + }, + + "metadata": { + Type: schema.TypeList, + Computed: true, + Optional: true, + Description: `Metadata associated with this Service, including name, namespace, labels, +and annotations.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "annotations": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + DiffSuppressFunc: cloudrunAnnotationDiffSuppress, + Description: `Annotations is a key value map stored with a resource that +may be set by external tools to store and retrieve arbitrary metadata. More +info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations + +**Note**: The Cloud Run API may add additional annotations that were not provided in your config. 
+If terraform plan shows a diff where a server-side annotation is added, you can add it to your config +or apply the lifecycle.ignore_changes rule to the metadata.0.annotations field. + +Annotations with 'run.googleapis.com/' and 'autoscaling.knative.dev' are restricted. Use the following annotation +keys to configure features on a Service: + +- 'run.googleapis.com/binary-authorization-breakglass' sets the [Binary Authorization breakglass](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--breakglass). +- 'run.googleapis.com/binary-authorization' sets the [Binary Authorization](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--binary-authorization). +- 'run.googleapis.com/client-name' sets the client name calling the Cloud Run API. +- 'run.googleapis.com/custom-audiences' sets the [custom audiences](https://cloud.google.com/sdk/gcloud/reference/alpha/run/deploy#--add-custom-audiences) + that can be used in the audience field of ID token for authenticated requests. +- 'run.googleapis.com/description' sets a user defined description for the Service. +- 'run.googleapis.com/ingress' sets the [ingress settings](https://cloud.google.com/sdk/gcloud/reference/run/deploy#--ingress) + for the Service. For example, '"run.googleapis.com/ingress" = "all"'. +- 'run.googleapis.com/launch-stage' sets the [launch stage](https://cloud.google.com/run/docs/troubleshooting#launch-stage-validation) + when a preview feature is used. For example, '"run.googleapis.com/launch-stage": "BETA"'`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "labels": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + DiffSuppressFunc: cloudrunLabelDiffSuppress, + Description: `Map of string keys and values that can be used to organize and categorize +(scope and select) objects. May match selectors of replication controllers +and routes.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "namespace": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `In Cloud Run the namespace must be equal to either the +project ID or project number.`, + }, + "generation": { + Type: schema.TypeInt, + Computed: true, + Description: `A sequence number representing a specific generation of the desired state.`, + }, + "resource_version": { + Type: schema.TypeString, + Computed: true, + Description: `An opaque value that represents the internal version of this object that +can be used by clients to determine when objects have changed. May be used +for optimistic concurrency, change detection, and the watch operation on a +resource or set of resources. 
They may only be valid for a +particular resource or set of resources.`, + }, + "self_link": { + Type: schema.TypeString, + Computed: true, + Description: `SelfLink is a URL representing this object.`, + }, + "uid": { + Type: schema.TypeString, + Computed: true, + Description: `UID is a unique id generated by the server on successful creation of a resource and is not +allowed to change on PUT operations.`, + }, + }, + }, + }, + "status": { + Type: schema.TypeList, + Computed: true, + Description: `The current status of the Service.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "conditions": { + Type: schema.TypeList, + Computed: true, + Description: `Array of observed Service Conditions, indicating the current ready state of the service.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "message": { + Type: schema.TypeString, + Computed: true, + Description: `Human readable message indicating details about the current status.`, + }, + "reason": { + Type: schema.TypeString, + Computed: true, + Description: `One-word CamelCase reason for the condition's current status.`, + }, + "status": { + Type: schema.TypeString, + Computed: true, + Description: `Status of the condition, one of True, False, Unknown.`, + }, + "type": { + Type: schema.TypeString, + Computed: true, + Description: `Type of domain mapping condition.`, + }, + }, + }, + }, + "latest_created_revision_name": { + Type: schema.TypeString, + Computed: true, + Description: `From ConfigurationStatus. LatestCreatedRevisionName is the last revision that was created +from this Service's Configuration. It might not be ready yet, for that use +LatestReadyRevisionName.`, + }, + "latest_ready_revision_name": { + Type: schema.TypeString, + Computed: true, + Description: `From ConfigurationStatus. LatestReadyRevisionName holds the name of the latest Revision +stamped out from this Service's Configuration that has had its "Ready" condition become +"True".`, + }, + "observed_generation": { + Type: schema.TypeInt, + Computed: true, + Description: `ObservedGeneration is the 'Generation' of the Route that was last processed by the +controller. + +Clients polling for completed reconciliation should poll until observedGeneration = +metadata.generation and the Ready condition's status is True or False.`, + }, + "traffic": { + Type: schema.TypeList, + Computed: true, + Description: `Traffic specifies how to distribute traffic over a collection of Knative Revisions +and Configurations`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "latest_revision": { + Type: schema.TypeBool, + Computed: true, + Description: `LatestRevision may be optionally provided to indicate that the latest ready +Revision of the Configuration should be used for this traffic target. When +provided LatestRevision must be true if RevisionName is empty; it must be +false when RevisionName is non-empty.`, + }, + "percent": { + Type: schema.TypeInt, + Computed: true, + Description: `Percent specifies percent of the traffic to this Revision or Configuration.`, + }, + "revision_name": { + Type: schema.TypeString, + Computed: true, + Description: `RevisionName of a specific revision to which to send this portion of traffic.`, + }, + "tag": { + Type: schema.TypeString, + Computed: true, + Description: `Tag is optionally used to expose a dedicated url for referencing this target exclusively.`, + }, + "url": { + Type: schema.TypeString, + Computed: true, + Description: `URL displays the URL for accessing tagged traffic targets. 
URL is displayed in status, +and is disallowed on spec. URL must contain a scheme (e.g. http://) and a hostname, +but may not contain anything else (e.g. basic auth, url path, etc.)`, + }, + }, + }, + }, + "url": { + Type: schema.TypeString, + Computed: true, + Description: `From RouteStatus. URL holds the url that will distribute traffic over the provided traffic +targets. It generally has the form +https://{route-hash}-{project-hash}-{cluster-level-suffix}.a.run.app`, + }, + }, + }, + }, + "autogenerate_revision_name": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `If set to 'true', the revision name (template.metadata.name) will be omitted and +autogenerated by Cloud Run. This cannot be set to 'true' while 'template.metadata.name' +is also set. +(For legacy support, if 'template.metadata.name' is unset in state while +this field is set to false, the revision name will still autogenerate.)`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func upgradeAnnotations(rawMetadata map[string]interface{}) { + rawAnnotations := rawMetadata["annotations"] + if rawAnnotations != nil { + annotations := make(map[string]interface{}) + effectiveAnnotations := make(map[string]interface{}) + + for k, v := range rawAnnotations.(map[string]interface{}) { + effectiveAnnotations[k] = v + + if !(cloudRunGoogleProvidedAnnotations.MatchString(k) || (strings.HasSuffix(k, "run.googleapis.com/ingress") && v == "all")) { + annotations[k] = v + } + } + + rawMetadata["annotations"] = annotations + rawMetadata["effective_annotations"] = effectiveAnnotations + } +} + +func ResourceCloudRunServiceUpgradeV1(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + log.Printf("[DEBUG] Attributes before migration: %#v", rawState) + + if rawState["metadata"] != nil { + rawMetadatas := rawState["metadata"].([]interface{}) + + // Upgrade labels fields + if len(rawMetadatas) > 0 && rawMetadatas[0] != nil { + rawMetadata := rawMetadatas[0].(map[string]interface{}) + + rawLabels := rawMetadata["labels"] + if rawLabels != nil { + labels := make(map[string]interface{}) + effectiveLabels := make(map[string]interface{}) + + for k, v := range rawLabels.(map[string]interface{}) { + effectiveLabels[k] = v + + if !cloudRunGoogleProvidedLabels.MatchString(k) { + labels[k] = v + } + } + + rawMetadata["labels"] = labels + rawMetadata["effective_labels"] = effectiveLabels + } + + upgradeAnnotations(rawMetadata) + + rawState["metadata"] = []interface{}{rawMetadata} + } + } + + log.Printf("[DEBUG] Attributes after migration: %#v", rawState) + return rawState, nil +} diff --git a/google/services/cloudrun/resource_cloud_run_service_generated_test.go b/google/services/cloudrun/resource_cloud_run_service_generated_test.go index 46c42d12665..d9faa22793c 100644 --- a/google/services/cloudrun/resource_cloud_run_service_generated_test.go +++ b/google/services/cloudrun/resource_cloud_run_service_generated_test.go @@ -51,7 +51,7 @@ func TestAccCloudRunService_cloudRunServiceBasicExample(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -99,7 +99,7 @@ func TestAccCloudRunService_cloudRunServiceSqlExample(t *testing.T) { 
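As an illustrative aside, the sketch below (not part of the patch) shows the effect of the new `upgradeAnnotations` helper on pre-upgrade state: every key is copied into `effective_annotations`, while only keys that do not match `cloudRunGoogleProvidedAnnotations` remain in the user-facing `annotations` map. The test name, the sample keys, and the assumption that the regex covers the `serving.knative.dev/` namespace are assumptions for illustration, not taken from the patch.

```go
package cloudrun

import (
	"reflect"
	"testing"
)

// Hypothetical sketch: exercises upgradeAnnotations with a mixed annotation map.
// Assumes cloudRunGoogleProvidedAnnotations matches Google-managed namespaces
// such as serving.knative.dev/ (the regex itself is not shown in this diff).
func TestUpgradeAnnotationsSketch(t *testing.T) {
	rawMetadata := map[string]interface{}{
		"annotations": map[string]interface{}{
			"generated-by":                "magic-modules",    // user-managed key
			"serving.knative.dev/creator": "user@example.com", // assumed Google-managed key
		},
	}

	upgradeAnnotations(rawMetadata)

	// Only the user-managed key should remain in the annotations field.
	wantUser := map[string]interface{}{"generated-by": "magic-modules"}
	if !reflect.DeepEqual(rawMetadata["annotations"], wantUser) {
		t.Fatalf("annotations = %v, want %v", rawMetadata["annotations"], wantUser)
	}

	// effective_annotations keeps everything that was present in state.
	if got := rawMetadata["effective_annotations"].(map[string]interface{}); len(got) != 2 {
		t.Fatalf("effective_annotations = %v, want both keys", got)
	}
}
```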
ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name"}, + ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -162,7 +162,7 @@ func TestAccCloudRunService_cloudRunServiceNoauthExample(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -223,7 +223,7 @@ func TestAccCloudRunService_cloudRunServiceMultipleEnvironmentVariablesExample(t ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name"}, + ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -292,7 +292,7 @@ func TestAccCloudRunService_cloudRunServiceSecretEnvironmentVariablesExample(t * ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name"}, + ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -386,7 +386,7 @@ func TestAccCloudRunService_cloudRunServiceSecretVolumesExample(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name"}, + ImportStateVerifyIgnore: []string{"name", "location", "autogenerate_revision_name", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) @@ -487,7 +487,7 @@ func TestAccCloudRunService_cloudRunServiceProbesExample(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "metadata.0.labels", "metadata.0.annotations", "metadata.0.terraform_labels"}, }, }, }) diff --git a/google/services/cloudrun/resource_cloud_run_service_test.go b/google/services/cloudrun/resource_cloud_run_service_test.go index 4df15118f48..786fbb751fd 100644 --- a/google/services/cloudrun/resource_cloud_run_service_test.go +++ b/google/services/cloudrun/resource_cloud_run_service_test.go @@ -28,7 +28,7 @@ func TestAccCloudRunService_cloudRunServiceUpdate(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, { Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "50", "300"), @@ -37,7 +37,7 @@ func TestAccCloudRunService_cloudRunServiceUpdate(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, }, }) @@ -62,7 +62,7 @@ func TestAccCloudRunService_cloudRunServiceCreateHasStatus(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels"}, }, }, }) @@ -86,7 +86,7 @@ func TestAccCloudRunService_foregroundDeletion(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, { Config: " ", // very explicitly add a space, as the test runner fails if this is just "" @@ -98,7 +98,7 @@ func TestAccCloudRunService_foregroundDeletion(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, }, }) @@ -111,10 +111,14 @@ resource "google_cloud_run_service" "default" { location = "us-central1" metadata { - namespace = "%s" - annotations = { + namespace = "%s" + annotations = { generated-by = "magic-modules" } + labels = { + env = "foo" + default_expiration_ms = 3600000 + } } template { @@ -162,7 +166,7 @@ func TestAccCloudRunService_secretVolume(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, { Config: testAccCloudRunService_cloudRunServiceUpdateWithSecretVolume(name, project, "secret-"+acctest.RandString(t, 10), "secret-"+acctest.RandString(t, 11), "google_secret_manager_secret.secret2.secret_id"), @@ -171,7 +175,7 @@ func TestAccCloudRunService_secretVolume(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, }, }) @@ -286,7 +290,7 @@ func TestAccCloudRunService_secretEnvironmentVariable(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, { Config: 
testAccCloudRunService_cloudRunServiceUpdateWithSecretEnvVar(name, project, "secret-"+acctest.RandString(t, 10), "secret-"+acctest.RandString(t, 11), "google_secret_manager_secret.secret2.secret_id"), @@ -295,7 +299,7 @@ func TestAccCloudRunService_secretEnvironmentVariable(t *testing.T) { ResourceName: "google_cloud_run_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "status.0.conditions"}, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, }, }, }) @@ -387,3 +391,311 @@ resource "google_cloud_run_service" "default" { } `, secretName1, secretName2, name, secretRef, project) } + +func TestAccCloudRunService_withProviderDefaultLabels(t *testing.T) { + // The test failed if VCR testing is enabled, because the cached provider config is used. + // With the cached provider config, any changes in the provider default labels will not be applied. + acctest.SkipIfVcr(t) + t.Parallel() + + context := map[string]interface{}{ + "project": envvar.GetTestProjectFromEnv(), + "random_suffix": acctest.RandString(t, 10), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Steps: []resource.TestStep{ + { + Config: testAccCloudRunService_withProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%", "2"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_key1", "default_value1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "4"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.%", "1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.generated-by", "magic-modules"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_annotations.%", "6"), + ), + }, + { + ResourceName: "google_cloud_run_service.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, + }, + { + Config: testAccCloudRunService_resourceLabelsOverridesProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_expiration_ms", "3600000"), + 
resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_key1", "value1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "4"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.%", "1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.generated-by", "magic-modules-update"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_annotations.%", "6"), + ), + }, + { + ResourceName: "google_cloud_run_service.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, + }, + { + Config: testAccCloudRunService_moveResourceLabelToProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%", "2"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_expiration_ms", "3600000"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_key1", "value1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "4"), + ), + }, + { + ResourceName: "google_cloud_run_service.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, + }, + { + Config: testAccCloudRunService_resourceLabelsOverridesProviderDefaultLabels(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_expiration_ms", "3600000"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_key1", "value1"), + 
resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "4"), + ), + }, + { + ResourceName: "google_cloud_run_service.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, + }, + { + Config: testAccCloudRunService_cloudRunServiceBasic(context), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "1"), + + resource.TestCheckNoResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.%"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_annotations.%", "5"), + ), + }, + { + ResourceName: "google_cloud_run_service.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"metadata.0.resource_version", "metadata.0.annotations", "metadata.0.labels", "metadata.0.terraform_labels", "status.0.conditions"}, + }, + }, + }) +} + +func TestAccCloudRunServiceMigration_withLabels(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + name := "tftest-cloudrun-" + acctest.RandString(t, 6) + project := envvar.GetTestProjectFromEnv() + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.83.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + Steps: []resource.TestStep{ + { + Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "10", "600"), + ExternalProviders: oldVersion, + }, + { + Config: testAccCloudRunService_cloudRunServiceUpdate(name, project, "10", "600"), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.labels.%", "2"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_labels.%", "3"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.annotations.%", "1"), + resource.TestCheckResourceAttr("google_cloud_run_service.default", "metadata.0.effective_annotations.%", "6"), + ), + }, + }, + }) +} + +func testAccCloudRunService_withProviderDefaultLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_cloud_run_service" "default" { + name = "tf-test-cloudrun-srv%{random_suffix}" + location = "us-central1" + + template { + spec { + containers { + image = "us-docker.pkg.dev/cloudrun/container/hello" + } + } + } + + metadata { + namespace = "%{project}" + annotations = { + generated-by = "magic-modules" + } + labels = { + env = "foo" + default_expiration_ms = 3600000 + } + } + + traffic { + percent = 100 + latest_revision = true + } +} +`, context) +} + +func 
testAccCloudRunService_resourceLabelsOverridesProviderDefaultLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_cloud_run_service" "default" { + name = "tf-test-cloudrun-srv%{random_suffix}" + location = "us-central1" + + template { + spec { + containers { + image = "us-docker.pkg.dev/cloudrun/container/hello" + } + } + } + + metadata { + namespace = "%{project}" + annotations = { + generated-by = "magic-modules-update" + } + labels = { + env = "foo" + default_expiration_ms = 3600000 + default_key1 = "value1" + } + } + + traffic { + percent = 100 + latest_revision = true + } +} +`, context) +} + +func testAccCloudRunService_moveResourceLabelToProviderDefaultLabels(context map[string]interface{}) string { + return acctest.Nprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + env = "foo" + } +} + +resource "google_cloud_run_service" "default" { + name = "tf-test-cloudrun-srv%{random_suffix}" + location = "us-central1" + + template { + spec { + containers { + image = "us-docker.pkg.dev/cloudrun/container/hello" + } + } + } + + metadata { + namespace = "%{project}" + annotations = { + generated-by = "magic-modules" + } + labels = { + default_expiration_ms = 3600000 + default_key1 = "value1" + } + } + + traffic { + percent = 100 + latest_revision = true + } +} +`, context) +} + +func testAccCloudRunService_cloudRunServiceBasic(context map[string]interface{}) string { + return acctest.Nprintf(` +resource "google_cloud_run_service" "default" { + name = "tf-test-cloudrun-srv%{random_suffix}" + location = "us-central1" + + template { + spec { + containers { + image = "us-docker.pkg.dev/cloudrun/container/hello" + } + } + } + + metadata { + namespace = "%{project}" + } + + traffic { + percent = 100 + latest_revision = true + } +} +`, context) +} diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_job.go b/google/services/cloudrunv2/resource_cloud_run_v2_job.go index f81729892fa..acf6037ffe4 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_job.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_job.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,12 @@ func ResourceCloudRunV2Job() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -146,93 +153,6 @@ func ResourceCloudRunV2Job() *schema.Resource { }, }, }, - "liveness_probe": { - Type: schema.TypeList, - Optional: true, - Deprecated: "`liveness_probe` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", - Description: `Periodic probe of container liveness. Container will be restarted if the probe fails. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes -This field is not supported in Cloud Run Job currently.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "failure_threshold": { - Type: schema.TypeInt, - Optional: true, - Description: `Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.`, - Default: 3, - }, - "http_get": { - Type: schema.TypeList, - Optional: true, - Description: `HTTPGet specifies the http request to perform. Exactly one of HTTPGet or TCPSocket must be specified.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "http_headers": { - Type: schema.TypeList, - Optional: true, - Description: `Custom headers to set in the request. HTTP allows repeated headers.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - Description: `The header field name`, - }, - "value": { - Type: schema.TypeString, - Optional: true, - Description: `The header field value`, - Default: "", - }, - }, - }, - }, - "path": { - Type: schema.TypeString, - Optional: true, - Description: `Path to access on the HTTP server. Defaults to '/'.`, - Default: "/", - }, - }, - }, - }, - "initial_delay_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `Number of seconds after the container has started before the probe is initiated. Defaults to 0 seconds. Minimum value is 0. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes`, - Default: 0, - }, - "period_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. Must be greater or equal than timeoutSeconds`, - Default: 10, - }, - "tcp_socket": { - Type: schema.TypeList, - Optional: true, - Description: `TCPSocket specifies an action involving a TCP port. Exactly one of HTTPGet or TCPSocket must be specified.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "port": { - Type: schema.TypeInt, - Optional: true, - Description: `Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080.`, - }, - }, - }, - }, - "timeout_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Maximum value is 3600. Must be smaller than periodSeconds. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes`, - Default: 1, - }, - }, - }, - }, "name": { Type: schema.TypeString, Optional: true, @@ -277,95 +197,6 @@ If omitted, a port number will be chosen and passed to the container through the }, }, }, - "startup_probe": { - Type: schema.TypeList, - Computed: true, - Optional: true, - Deprecated: "`startup_probe` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", - Description: `Startup probe of application within the container. All other probes are disabled if a startup probe is provided, until it succeeds. Container will not be added to service endpoints if the probe fails. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes -This field is not supported in Cloud Run Job currently.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "failure_threshold": { - Type: schema.TypeInt, - Optional: true, - Description: `Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.`, - Default: 3, - }, - "http_get": { - Type: schema.TypeList, - Optional: true, - Description: `HTTPGet specifies the http request to perform. Exactly one of HTTPGet or TCPSocket must be specified.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "http_headers": { - Type: schema.TypeList, - Optional: true, - Description: `Custom headers to set in the request. HTTP allows repeated headers.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - Description: `The header field name`, - }, - "value": { - Type: schema.TypeString, - Optional: true, - Description: `The header field value`, - Default: "", - }, - }, - }, - }, - "path": { - Type: schema.TypeString, - Optional: true, - Description: `Path to access on the HTTP server. Defaults to '/'.`, - Default: "/", - }, - }, - }, - }, - "initial_delay_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `Number of seconds after the container has started before the probe is initiated. Defaults to 0 seconds. Minimum value is 0. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes`, - Default: 0, - }, - "period_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. Must be greater or equal than timeoutSeconds`, - Default: 10, - }, - "tcp_socket": { - Type: schema.TypeList, - Optional: true, - Description: `TCPSocket specifies an action involving a TCP port. Exactly one of HTTPGet or TCPSocket must be specified.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "port": { - Type: schema.TypeInt, - Computed: true, - Optional: true, - Description: `Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080.`, - }, - }, - }, - }, - "timeout_seconds": { - Type: schema.TypeInt, - Optional: true, - Description: `Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Maximum value is 3600. Must be smaller than periodSeconds. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes`, - Default: 1, - }, - }, - }, - }, "volume_mounts": { Type: schema.TypeList, Optional: true, @@ -644,7 +475,10 @@ This field follows Kubernetes annotations' namespacing, limits, and rules.`, environment, state, etc. For more information, visit https://cloud.google.com/resource-manager/docs/creating-managing-labels or https://cloud.google.com/run/docs/configuring/labels. Cloud Run API v2 does not support labels with 'run.googleapis.com', 'cloud.googleapis.com', 'serving.knative.dev', or 'autoscaling.knative.dev' namespaces, and they will be rejected. -All system labels in v1 now have a corresponding field in v2 Job.`, +All system labels in v1 now have a corresponding field in v2 Job. 
+ +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "launch_stage": { @@ -729,6 +563,18 @@ A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to n Computed: true, Description: `The deletion time.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -849,6 +695,13 @@ A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to n }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -878,18 +731,6 @@ func resourceCloudRunV2JobCreate(d *schema.ResourceData, meta interface{}) error } obj := make(map[string]interface{}) - labelsProp, err := expandCloudRunV2JobLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandCloudRunV2JobAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } clientProp, err := expandCloudRunV2JobClient(d.Get("client"), d, config) if err != nil { return err @@ -920,6 +761,18 @@ func resourceCloudRunV2JobCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("template"); !tpgresource.IsEmptyValue(reflect.ValueOf(templateProp)) && (ok || !reflect.DeepEqual(v, templateProp)) { obj["template"] = templateProp } + labelsProp, err := expandCloudRunV2JobEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandCloudRunV2JobEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CloudRunV2BasePath}}projects/{{project}}/locations/{{location}}/jobs?jobId={{name}}") if err != nil { @@ -1091,6 +944,15 @@ func resourceCloudRunV2JobRead(d *schema.ResourceData, meta interface{}) error { if err := 
d.Set("etag", flattenCloudRunV2JobEtag(res["etag"], d, config)); err != nil { return fmt.Errorf("Error reading Job: %s", err) } + if err := d.Set("terraform_labels", flattenCloudRunV2JobTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Job: %s", err) + } + if err := d.Set("effective_labels", flattenCloudRunV2JobEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Job: %s", err) + } + if err := d.Set("effective_annotations", flattenCloudRunV2JobEffectiveAnnotations(res["annotations"], d, config)); err != nil { + return fmt.Errorf("Error reading Job: %s", err) + } return nil } @@ -1111,18 +973,6 @@ func resourceCloudRunV2JobUpdate(d *schema.ResourceData, meta interface{}) error billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandCloudRunV2JobLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandCloudRunV2JobAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } clientProp, err := expandCloudRunV2JobClient(d.Get("client"), d, config) if err != nil { return err @@ -1153,6 +1003,18 @@ func resourceCloudRunV2JobUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("template"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, templateProp)) { obj["template"] = templateProp } + labelsProp, err := expandCloudRunV2JobEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandCloudRunV2JobEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CloudRunV2BasePath}}projects/{{project}}/locations/{{location}}/jobs/{{name}}") if err != nil { @@ -1249,9 +1111,9 @@ func resourceCloudRunV2JobDelete(d *schema.ResourceData, meta interface{}) error func resourceCloudRunV2JobImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/jobs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/jobs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1275,11 +1137,33 @@ func flattenCloudRunV2JobGeneration(v interface{}, d *schema.ResourceData, confi } func flattenCloudRunV2JobLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range 
l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunV2JobAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunV2JobCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1444,17 +1328,15 @@ func flattenCloudRunV2JobTemplateTemplateContainers(v interface{}, d *schema.Res continue } transformed = append(transformed, map[string]interface{}{ - "name": flattenCloudRunV2JobTemplateTemplateContainersName(original["name"], d, config), - "image": flattenCloudRunV2JobTemplateTemplateContainersImage(original["image"], d, config), - "command": flattenCloudRunV2JobTemplateTemplateContainersCommand(original["command"], d, config), - "args": flattenCloudRunV2JobTemplateTemplateContainersArgs(original["args"], d, config), - "env": flattenCloudRunV2JobTemplateTemplateContainersEnv(original["env"], d, config), - "resources": flattenCloudRunV2JobTemplateTemplateContainersResources(original["resources"], d, config), - "ports": flattenCloudRunV2JobTemplateTemplateContainersPorts(original["ports"], d, config), - "volume_mounts": flattenCloudRunV2JobTemplateTemplateContainersVolumeMounts(original["volumeMounts"], d, config), - "working_dir": flattenCloudRunV2JobTemplateTemplateContainersWorkingDir(original["workingDir"], d, config), - "liveness_probe": flattenCloudRunV2JobTemplateTemplateContainersLivenessProbe(original["livenessProbe"], d, config), - "startup_probe": flattenCloudRunV2JobTemplateTemplateContainersStartupProbe(original["startupProbe"], d, config), + "name": flattenCloudRunV2JobTemplateTemplateContainersName(original["name"], d, config), + "image": flattenCloudRunV2JobTemplateTemplateContainersImage(original["image"], d, config), + "command": flattenCloudRunV2JobTemplateTemplateContainersCommand(original["command"], d, config), + "args": flattenCloudRunV2JobTemplateTemplateContainersArgs(original["args"], d, config), + "env": flattenCloudRunV2JobTemplateTemplateContainersEnv(original["env"], d, config), + "resources": flattenCloudRunV2JobTemplateTemplateContainersResources(original["resources"], d, config), + "ports": flattenCloudRunV2JobTemplateTemplateContainersPorts(original["ports"], d, config), + "volume_mounts": flattenCloudRunV2JobTemplateTemplateContainersVolumeMounts(original["volumeMounts"], d, config), + "working_dir": flattenCloudRunV2JobTemplateTemplateContainersWorkingDir(original["workingDir"], d, config), }) } return transformed @@ -1627,7 +1509,31 @@ func flattenCloudRunV2JobTemplateTemplateContainersWorkingDir(v interface{}, d * return v } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbe(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenCloudRunV2JobTemplateTemplateVolumes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, 
map[string]interface{}{ + "name": flattenCloudRunV2JobTemplateTemplateVolumesName(original["name"], d, config), + "secret": flattenCloudRunV2JobTemplateTemplateVolumesSecret(original["secret"], d, config), + "cloud_sql_instance": flattenCloudRunV2JobTemplateTemplateVolumesCloudSqlInstance(original["cloudSqlInstance"], d, config), + }) + } + return transformed +} +func flattenCloudRunV2JobTemplateTemplateVolumesName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenCloudRunV2JobTemplateTemplateVolumesSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil } @@ -1636,38 +1542,19 @@ func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbe(v interface{}, return nil } transformed := make(map[string]interface{}) - transformed["initial_delay_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeInitialDelaySeconds(original["initialDelaySeconds"], d, config) - transformed["timeout_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTimeoutSeconds(original["timeoutSeconds"], d, config) - transformed["period_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbePeriodSeconds(original["periodSeconds"], d, config) - transformed["failure_threshold"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeFailureThreshold(original["failureThreshold"], d, config) - transformed["http_get"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGet(original["httpGet"], d, config) - transformed["tcp_socket"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocket(original["tcpSocket"], d, config) + transformed["secret"] = + flattenCloudRunV2JobTemplateTemplateVolumesSecretSecret(original["secret"], d, config) + transformed["default_mode"] = + flattenCloudRunV2JobTemplateTemplateVolumesSecretDefaultMode(original["defaultMode"], d, config) + transformed["items"] = + flattenCloudRunV2JobTemplateTemplateVolumesSecretItems(original["items"], d, config) return []interface{}{transformed} } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeInitialDelaySeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise +func flattenCloudRunV2JobTemplateTemplateVolumesSecretSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTimeoutSeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenCloudRunV2JobTemplateTemplateVolumesSecretDefaultMode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { // Handles the string fixed64 format if strVal, ok := v.(string); ok { if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { @@ -1684,24 +1571,35 @@ func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTimeoutSeconds(v return v // let terraform core handle it otherwise } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbePeriodSeconds(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } +func flattenCloudRunV2JobTemplateTemplateVolumesSecretItems(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal + l := v.([]interface{}) + transformed := make([]interface{}, 0, len(l)) + for _, raw := range l { + original := raw.(map[string]interface{}) + if len(original) < 1 { + // Do not include empty json objects coming back from the api + continue + } + transformed = append(transformed, map[string]interface{}{ + "path": flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsPath(original["path"], d, config), + "version": flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsVersion(original["version"], d, config), + "mode": flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsMode(original["mode"], d, config), + }) } + return transformed +} +func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} - return v // let terraform core handle it otherwise +func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeFailureThreshold(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsMode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { // Handles the string fixed64 format if strVal, ok := v.(string); ok { if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { @@ -1718,345 +1616,7 @@ func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeFailureThreshold return v // let terraform core handle it otherwise } -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGet(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["path"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetPath(original["path"], d, config) - transformed["http_headers"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeaders(original["httpHeaders"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeaders(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "name": flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersName(original["name"], d, config), - "value": 
flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersValue(original["value"], d, config), - }) - } - return transformed -} -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocket(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["port"] = - flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocketPort(original["port"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocketPort(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbe(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["initial_delay_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeInitialDelaySeconds(original["initialDelaySeconds"], d, config) - transformed["timeout_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTimeoutSeconds(original["timeoutSeconds"], d, config) - transformed["period_seconds"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbePeriodSeconds(original["periodSeconds"], d, config) - transformed["failure_threshold"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeFailureThreshold(original["failureThreshold"], d, config) - transformed["http_get"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGet(original["httpGet"], d, config) - transformed["tcp_socket"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocket(original["tcpSocket"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeInitialDelaySeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTimeoutSeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { 
- return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbePeriodSeconds(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeFailureThreshold(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGet(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["path"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetPath(original["path"], d, config) - transformed["http_headers"] = - flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders(original["httpHeaders"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "name": flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersName(original["name"], d, config), - "value": flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersValue(original["value"], d, config), - }) - } - return transformed -} -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocket(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["port"] = - 
flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocketPort(original["port"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocketPort(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateVolumes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "name": flattenCloudRunV2JobTemplateTemplateVolumesName(original["name"], d, config), - "secret": flattenCloudRunV2JobTemplateTemplateVolumesSecret(original["secret"], d, config), - "cloud_sql_instance": flattenCloudRunV2JobTemplateTemplateVolumesCloudSqlInstance(original["cloudSqlInstance"], d, config), - }) - } - return transformed -} -func flattenCloudRunV2JobTemplateTemplateVolumesName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateVolumesSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["secret"] = - flattenCloudRunV2JobTemplateTemplateVolumesSecretSecret(original["secret"], d, config) - transformed["default_mode"] = - flattenCloudRunV2JobTemplateTemplateVolumesSecretDefaultMode(original["defaultMode"], d, config) - transformed["items"] = - flattenCloudRunV2JobTemplateTemplateVolumesSecretItems(original["items"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2JobTemplateTemplateVolumesSecretSecret(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateVolumesSecretDefaultMode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateVolumesSecretItems(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "path": 
flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsPath(original["path"], d, config), - "version": flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsVersion(original["version"], d, config), - "mode": flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsMode(original["mode"], d, config), - }) - } - return transformed -} -func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenCloudRunV2JobTemplateTemplateVolumesSecretItemsMode(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenCloudRunV2JobTemplateTemplateVolumesCloudSqlInstance(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenCloudRunV2JobTemplateTemplateVolumesCloudSqlInstance(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil } @@ -2337,26 +1897,27 @@ func flattenCloudRunV2JobEtag(v interface{}, d *schema.ResourceData, config *tra return v } -func expandCloudRunV2JobLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenCloudRunV2JobTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed } -func expandCloudRunV2JobAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil +func flattenCloudRunV2JobEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenCloudRunV2JobEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandCloudRunV2JobClient(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -2609,204 +2170,18 @@ func expandCloudRunV2JobTemplateTemplateContainers(v interface{}, d tpgresource. 
transformed["ports"] = transformedPorts } - transformedVolumeMounts, err := expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(original["volume_mounts"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedVolumeMounts); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["volumeMounts"] = transformedVolumeMounts - } - - transformedWorkingDir, err := expandCloudRunV2JobTemplateTemplateContainersWorkingDir(original["working_dir"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedWorkingDir); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["workingDir"] = transformedWorkingDir - } - - transformedLivenessProbe, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbe(original["liveness_probe"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLivenessProbe); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["livenessProbe"] = transformedLivenessProbe - } - - transformedStartupProbe, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbe(original["startup_probe"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedStartupProbe); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["startupProbe"] = transformedStartupProbe - } - - req = append(req, transformed) - } - return req, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersImage(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersCommand(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersArgs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnv(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedName, err := expandCloudRunV2JobTemplateTemplateContainersEnvName(original["name"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["name"] = transformedName - } - - transformedValue, err := expandCloudRunV2JobTemplateTemplateContainersEnvValue(original["value"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["value"] = transformedValue - } - - transformedValueSource, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSource(original["value_source"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedValueSource); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["valueSource"] = transformedValueSource - } - - req = append(req, transformed) - } - return req, nil -} - -func 
expandCloudRunV2JobTemplateTemplateContainersEnvName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnvValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnvValueSource(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedSecretKeyRef, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRef(original["secret_key_ref"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedSecretKeyRef); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["secretKeyRef"] = transformedSecretKeyRef - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRef(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedSecret, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefSecret(original["secret"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedSecret); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["secret"] = transformedSecret - } - - transformedVersion, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefVersion(original["version"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedVersion); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["version"] = transformedVersion - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefSecret(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefVersion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedLimits, err := expandCloudRunV2JobTemplateTemplateContainersResourcesLimits(original["limits"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLimits); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["limits"] = transformedLimits - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersResourcesLimits(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for 
k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersPorts(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedName, err := expandCloudRunV2JobTemplateTemplateContainersPortsName(original["name"], d, config) + transformedVolumeMounts, err := expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(original["volume_mounts"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["name"] = transformedName + } else if val := reflect.ValueOf(transformedVolumeMounts); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["volumeMounts"] = transformedVolumeMounts } - transformedContainerPort, err := expandCloudRunV2JobTemplateTemplateContainersPortsContainerPort(original["container_port"], d, config) + transformedWorkingDir, err := expandCloudRunV2JobTemplateTemplateContainersWorkingDir(original["working_dir"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedContainerPort); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["containerPort"] = transformedContainerPort + } else if val := reflect.ValueOf(transformedWorkingDir); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["workingDir"] = transformedWorkingDir } req = append(req, transformed) @@ -2814,15 +2189,23 @@ func expandCloudRunV2JobTemplateTemplateContainersPorts(v interface{}, d tpgreso return req, nil } -func expandCloudRunV2JobTemplateTemplateContainersPortsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersPortsContainerPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersImage(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersCommand(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandCloudRunV2JobTemplateTemplateContainersArgs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandCloudRunV2JobTemplateTemplateContainersEnv(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { @@ -2832,18 +2215,25 @@ func expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(v interface{}, d original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedName, err := 
expandCloudRunV2JobTemplateTemplateContainersVolumeMountsName(original["name"], d, config) + transformedName, err := expandCloudRunV2JobTemplateTemplateContainersEnvName(original["name"], d, config) if err != nil { return nil, err } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { transformed["name"] = transformedName } - transformedMountPath, err := expandCloudRunV2JobTemplateTemplateContainersVolumeMountsMountPath(original["mount_path"], d, config) + transformedValue, err := expandCloudRunV2JobTemplateTemplateContainersEnvValue(original["value"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedMountPath); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["mountPath"] = transformedMountPath + } else if val := reflect.ValueOf(transformedValue); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["value"] = transformedValue + } + + transformedValueSource, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSource(original["value_source"], d, config) + if err != nil { + return nil, err + } else if val := reflect.ValueOf(transformedValueSource); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["valueSource"] = transformedValueSource } req = append(req, transformed) @@ -2851,19 +2241,15 @@ func expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(v interface{}, d return req, nil } -func expandCloudRunV2JobTemplateTemplateContainersVolumeMountsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersVolumeMountsMountPath(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersEnvName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersWorkingDir(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersEnvValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbe(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersEnvValueSource(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { return nil, nil @@ -2872,103 +2258,81 @@ func expandCloudRunV2JobTemplateTemplateContainersLivenessProbe(v interface{}, d original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedInitialDelaySeconds, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeInitialDelaySeconds(original["initial_delay_seconds"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedInitialDelaySeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["initialDelaySeconds"] = transformedInitialDelaySeconds - } - - transformedTimeoutSeconds, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTimeoutSeconds(original["timeout_seconds"], d, config) + transformedSecretKeyRef, err := 
expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRef(original["secret_key_ref"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedTimeoutSeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["timeoutSeconds"] = transformedTimeoutSeconds + } else if val := reflect.ValueOf(transformedSecretKeyRef); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["secretKeyRef"] = transformedSecretKeyRef } - transformedPeriodSeconds, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbePeriodSeconds(original["period_seconds"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPeriodSeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["periodSeconds"] = transformedPeriodSeconds - } + return transformed, nil +} - transformedFailureThreshold, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeFailureThreshold(original["failure_threshold"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFailureThreshold); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["failureThreshold"] = transformedFailureThreshold +func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRef(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + l := v.([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil, nil } + raw := l[0] + original := raw.(map[string]interface{}) + transformed := make(map[string]interface{}) - transformedHttpGet, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGet(original["http_get"], d, config) + transformedSecret, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefSecret(original["secret"], d, config) if err != nil { return nil, err - } else { - transformed["httpGet"] = transformedHttpGet + } else if val := reflect.ValueOf(transformedSecret); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["secret"] = transformedSecret } - transformedTcpSocket, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocket(original["tcp_socket"], d, config) + transformedVersion, err := expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefVersion(original["version"], d, config) if err != nil { return nil, err - } else { - transformed["tcpSocket"] = transformedTcpSocket + } else if val := reflect.ValueOf(transformedVersion); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["version"] = transformedVersion } return transformed, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeInitialDelaySeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTimeoutSeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbePeriodSeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefSecret(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeFailureThreshold(v 
interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersEnvValueSourceSecretKeyRefVersion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGet(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersResources(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) - if len(l) == 0 { + if len(l) == 0 || l[0] == nil { return nil, nil } - - if l[0] == nil { - transformed := make(map[string]interface{}) - return transformed, nil - } raw := l[0] original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedPath, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetPath(original["path"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPath); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["path"] = transformedPath - } - - transformedHttpHeaders, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeaders(original["http_headers"], d, config) + transformedLimits, err := expandCloudRunV2JobTemplateTemplateContainersResourcesLimits(original["limits"], d, config) if err != nil { return nil, err - } else if val := reflect.ValueOf(transformedHttpHeaders); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["httpHeaders"] = transformedHttpHeaders + } else if val := reflect.ValueOf(transformedLimits); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["limits"] = transformedLimits } return transformed, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetPath(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil +func expandCloudRunV2JobTemplateTemplateContainersResourcesLimits(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeaders(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersPorts(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { @@ -2978,18 +2342,18 @@ func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeader original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedName, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersName(original["name"], d, config) + transformedName, err := expandCloudRunV2JobTemplateTemplateContainersPortsName(original["name"], d, config) if err != nil { return nil, err } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { transformed["name"] = transformedName } - transformedValue, err := 
expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersValue(original["value"], d, config) + transformedContainerPort, err := expandCloudRunV2JobTemplateTemplateContainersPortsContainerPort(original["container_port"], d, config) if err != nil { return nil, err - } else { - transformed["value"] = transformedValue + } else if val := reflect.ValueOf(transformedContainerPort); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["containerPort"] = transformedContainerPort } req = append(req, transformed) @@ -2997,148 +2361,15 @@ func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeader return req, nil } -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeHttpGetHttpHeadersValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocket(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 { - return nil, nil - } - - if l[0] == nil { - transformed := make(map[string]interface{}) - return transformed, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPort, err := expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocketPort(original["port"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPort); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["port"] = transformedPort - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersLivenessProbeTcpSocketPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbe(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedInitialDelaySeconds, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeInitialDelaySeconds(original["initial_delay_seconds"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedInitialDelaySeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["initialDelaySeconds"] = transformedInitialDelaySeconds - } - - transformedTimeoutSeconds, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeTimeoutSeconds(original["timeout_seconds"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedTimeoutSeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["timeoutSeconds"] = transformedTimeoutSeconds - } - - transformedPeriodSeconds, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbePeriodSeconds(original["period_seconds"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPeriodSeconds); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["periodSeconds"] = transformedPeriodSeconds - } - - 
transformedFailureThreshold, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeFailureThreshold(original["failure_threshold"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFailureThreshold); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["failureThreshold"] = transformedFailureThreshold - } - - transformedHttpGet, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGet(original["http_get"], d, config) - if err != nil { - return nil, err - } else { - transformed["httpGet"] = transformedHttpGet - } - - transformedTcpSocket, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocket(original["tcp_socket"], d, config) - if err != nil { - return nil, err - } else { - transformed["tcpSocket"] = transformedTcpSocket - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeInitialDelaySeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeTimeoutSeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbePeriodSeconds(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeFailureThreshold(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersPortsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGet(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 { - return nil, nil - } - - if l[0] == nil { - transformed := make(map[string]interface{}) - return transformed, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPath, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetPath(original["path"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPath); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["path"] = transformedPath - } - - transformedHttpHeaders, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders(original["http_headers"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedHttpHeaders); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["httpHeaders"] = transformedHttpHeaders - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetPath(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersPortsContainerPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func 
expandCloudRunV2JobTemplateTemplateContainersVolumeMounts(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { @@ -3148,18 +2379,18 @@ func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - transformedName, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersName(original["name"], d, config) + transformedName, err := expandCloudRunV2JobTemplateTemplateContainersVolumeMountsName(original["name"], d, config) if err != nil { return nil, err } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { transformed["name"] = transformedName } - transformedValue, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersValue(original["value"], d, config) + transformedMountPath, err := expandCloudRunV2JobTemplateTemplateContainersVolumeMountsMountPath(original["mount_path"], d, config) if err != nil { return nil, err - } else { - transformed["value"] = transformedValue + } else if val := reflect.ValueOf(transformedMountPath); val.IsValid() && !tpgresource.IsEmptyValue(val) { + transformed["mountPath"] = transformedMountPath } req = append(req, transformed) @@ -3167,39 +2398,15 @@ func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeaders return req, nil } -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersVolumeMountsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeHttpGetHttpHeadersValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersVolumeMountsMountPath(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocket(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 { - return nil, nil - } - - if l[0] == nil { - transformed := make(map[string]interface{}) - return transformed, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPort, err := expandCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocketPort(original["port"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPort); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["port"] = transformedPort - } - - return transformed, nil -} - -func expandCloudRunV2JobTemplateTemplateContainersStartupProbeTcpSocketPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { +func expandCloudRunV2JobTemplateTemplateContainersWorkingDir(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -3463,3 +2670,25 @@ func expandCloudRunV2JobTemplateTemplateVpcAccessNetworkInterfacesTags(v 
interfa func expandCloudRunV2JobTemplateTemplateMaxRetries(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandCloudRunV2JobEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandCloudRunV2JobEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_job_generated_test.go b/google/services/cloudrunv2/resource_cloud_run_v2_job_generated_test.go index 9a331274516..ecae34afa1d 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_job_generated_test.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_job_generated_test.go @@ -49,7 +49,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobBasicExample(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -98,7 +98,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobSqlExample(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -204,7 +204,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobVpcaccessExample(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -277,7 +277,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobDirectvpcExample(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -333,7 +333,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobSecretExample(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_job_test.go b/google/services/cloudrunv2/resource_cloud_run_v2_job_test.go index ed406a25332..0e448803fd2 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_job_test.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_job_test.go @@ -29,7 +29,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobFullUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "launch_stage"}, + ImportStateVerifyIgnore: []string{"location", 
"launch_stage", "labels", "terraform_labels", "annotations"}, }, { Config: testAccCloudRunV2Job_cloudrunv2JobFullUpdate(context), @@ -38,7 +38,7 @@ func TestAccCloudRunV2Job_cloudrunv2JobFullUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_job.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "launch_stage"}, + ImportStateVerifyIgnore: []string{"location", "launch_stage", "labels", "terraform_labels", "annotations"}, }, }, }) diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_service.go b/google/services/cloudrunv2/resource_cloud_run_v2_service.go index 5f5136cc6d7..b6e0a31de5d 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_service.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_service.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,12 @@ func ResourceCloudRunV2Service() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -242,22 +249,6 @@ If not specified, defaults to the same value as container.ports[0].containerPort Description: `How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. Must be greater or equal than timeoutSeconds`, Default: 10, }, - "tcp_socket": { - Type: schema.TypeList, - Optional: true, - Deprecated: "`tcp_socket` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API.", - Description: `TCPSocket specifies an action involving a TCP port. This field is not supported in liveness probe currently.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "port": { - Type: schema.TypeInt, - Optional: true, - Description: `Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080.`, - }, - }, - }, - }, "timeout_seconds": { Type: schema.TypeInt, Optional: true, @@ -560,12 +551,13 @@ A duration in seconds with up to nine fractional digits, ending with 's'. Exampl Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "instances": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `The Cloud SQL instance connection names, as can be found in https://console.cloud.google.com/sql/instances. Visit https://cloud.google.com/sql/docs/mysql/connect-run for more information on how to connect Cloud SQL and Cloud Run. Format: {project}:{location}:{instance}`, Elem: &schema.Schema{ Type: schema.TypeString, }, + Set: schema.HashString, }, }, }, @@ -735,7 +727,10 @@ This field follows Kubernetes annotations' namespacing, limits, and rules.`, environment, state, etc. For more information, visit https://cloud.google.com/resource-manager/docs/creating-managing-labels or https://cloud.google.com/run/docs/configuring/labels. Cloud Run API v2 does not support labels with 'run.googleapis.com', 'cloud.googleapis.com', 'serving.knative.dev', or 'autoscaling.knative.dev' namespaces, and they will be rejected. 
-All system labels in v1 now have a corresponding field in v2 Service.`, +All system labels in v1 now have a corresponding field in v2 Service. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "launch_stage": { @@ -852,6 +847,18 @@ A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to n Computed: true, Description: `The deletion time.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -947,6 +954,13 @@ If reconciliation failed, trafficStatuses, observedGeneration, and latestReadyRe }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "traffic_statuses": { Type: schema.TypeList, Computed: true, @@ -1021,18 +1035,6 @@ func resourceCloudRunV2ServiceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCloudRunV2ServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandCloudRunV2ServiceAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } clientProp, err := expandCloudRunV2ServiceClient(d.Get("client"), d, config) if err != nil { return err @@ -1075,6 +1077,18 @@ func resourceCloudRunV2ServiceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("traffic"); !tpgresource.IsEmptyValue(reflect.ValueOf(trafficProp)) && (ok || !reflect.DeepEqual(v, trafficProp)) { obj["traffic"] = trafficProp } + labelsProp, err := expandCloudRunV2ServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandCloudRunV2ServiceEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, 
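The create path above stops sending the user-facing `labels`/`annotations` fields and instead sends the computed `effective_labels`/`effective_annotations`. Per the new field descriptions, `terraform_labels` is the combination of labels configured on the resource and default labels configured on the provider; a rough standalone sketch of that merge, assuming resource-level values win on key conflicts (the provider computes this at plan time via the `SetLabelsDiff` CustomizeDiff added above, not with this code):

```go
package main

import "fmt"

// mergeLabels is an illustrative stand-in for "labels configured directly on
// the resource combined with default labels configured on the provider".
// Assumption for this sketch: the resource-level value takes precedence.
func mergeLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range resourceLabels {
		merged[k] = v
	}
	return merged
}

func main() {
	providerDefaults := map[string]string{"team": "platform", "env": "dev"}
	resourceLabels := map[string]string{"env": "prod"}
	fmt.Println(mergeLabels(providerDefaults, resourceLabels)) // map[env:prod team:platform]
}
```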
annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CloudRunV2BasePath}}projects/{{project}}/locations/{{location}}/services?serviceId={{name}}") if err != nil { @@ -1261,6 +1275,15 @@ func resourceCloudRunV2ServiceRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("etag", flattenCloudRunV2ServiceEtag(res["etag"], d, config)); err != nil { return fmt.Errorf("Error reading Service: %s", err) } + if err := d.Set("terraform_labels", flattenCloudRunV2ServiceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Service: %s", err) + } + if err := d.Set("effective_labels", flattenCloudRunV2ServiceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Service: %s", err) + } + if err := d.Set("effective_annotations", flattenCloudRunV2ServiceEffectiveAnnotations(res["annotations"], d, config)); err != nil { + return fmt.Errorf("Error reading Service: %s", err) + } return nil } @@ -1287,18 +1310,6 @@ func resourceCloudRunV2ServiceUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandCloudRunV2ServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandCloudRunV2ServiceAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } clientProp, err := expandCloudRunV2ServiceClient(d.Get("client"), d, config) if err != nil { return err @@ -1341,6 +1352,18 @@ func resourceCloudRunV2ServiceUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("traffic"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, trafficProp)) { obj["traffic"] = trafficProp } + labelsProp, err := expandCloudRunV2ServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandCloudRunV2ServiceEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{CloudRunV2BasePath}}projects/{{project}}/locations/{{location}}/services/{{name}}") if err != nil { @@ -1437,9 +1460,9 @@ func resourceCloudRunV2ServiceDelete(d *schema.ResourceData, meta interface{}) e func resourceCloudRunV2ServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + 
"^projects/(?P[^/]+)/locations/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1467,11 +1490,33 @@ func flattenCloudRunV2ServiceGeneration(v interface{}, d *schema.ResourceData, c } func flattenCloudRunV2ServiceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunV2ServiceAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenCloudRunV2ServiceCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1927,8 +1972,6 @@ func flattenCloudRunV2ServiceTemplateContainersLivenessProbe(v interface{}, d *s flattenCloudRunV2ServiceTemplateContainersLivenessProbeFailureThreshold(original["failureThreshold"], d, config) transformed["http_get"] = flattenCloudRunV2ServiceTemplateContainersLivenessProbeHttpGet(original["httpGet"], d, config) - transformed["tcp_socket"] = - flattenCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocket(original["tcpSocket"], d, config) transformed["grpc"] = flattenCloudRunV2ServiceTemplateContainersLivenessProbeGrpc(original["grpc"], d, config) return []interface{}{transformed} @@ -2063,33 +2106,6 @@ func flattenCloudRunV2ServiceTemplateContainersLivenessProbeHttpGetHttpHeadersVa return v } -func flattenCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocket(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - transformed := make(map[string]interface{}) - transformed["port"] = - flattenCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocketPort(original["port"], d, config) - return []interface{}{transformed} -} -func flattenCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocketPort(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - func flattenCloudRunV2ServiceTemplateContainersLivenessProbeGrpc(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -2459,7 +2475,10 @@ func flattenCloudRunV2ServiceTemplateVolumesCloudSqlInstance(v interface{}, d *s return []interface{}{transformed} } func flattenCloudRunV2ServiceTemplateVolumesCloudSqlInstanceInstances(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + return schema.NewSet(schema.HashString, v.([]interface{})) } func flattenCloudRunV2ServiceTemplateExecutionEnvironment(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { @@ -2736,30 +2755,31 @@ func flattenCloudRunV2ServiceEtag(v interface{}, d *schema.ResourceData, config return v } -func expandCloudRunV2ServiceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandCloudRunV2ServiceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenCloudRunV2ServiceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed } -func expandCloudRunV2ServiceAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil +func flattenCloudRunV2ServiceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenCloudRunV2ServiceEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandCloudRunV2ServiceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandCloudRunV2ServiceClient(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -3451,13 +3471,6 @@ func expandCloudRunV2ServiceTemplateContainersLivenessProbe(v interface{}, d tpg transformed["httpGet"] = transformedHttpGet } - transformedTcpSocket, err := expandCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocket(original["tcp_socket"], d, config) - if err != nil { - return nil, err - } else { - transformed["tcpSocket"] = transformedTcpSocket - } - transformedGrpc, err := expandCloudRunV2ServiceTemplateContainersLivenessProbeGrpc(original["grpc"], d, config) if err != nil { return nil, err @@ -3567,34 +3580,6 @@ func expandCloudRunV2ServiceTemplateContainersLivenessProbeHttpGetHttpHeadersVal return v, nil } -func expandCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocket(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 { - return nil, nil - } - - if l[0] == nil { - transformed := make(map[string]interface{}) - return transformed, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedPort, err := expandCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocketPort(original["port"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedPort); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["port"] = transformedPort - } - - return transformed, nil -} - -func expandCloudRunV2ServiceTemplateContainersLivenessProbeTcpSocketPort(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, 
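The `labels`, `annotations`, and `terraform_labels` flatteners above all follow one pattern: of everything the API returns, keep only the keys already present in the corresponding state field, so values applied outside Terraform surface on the `effective_*` attributes without producing a diff on the non-authoritative fields. A standalone sketch of that filter (illustrative names):

```go
package main

import "fmt"

// filterByConfiguredKeys keeps only the API values whose keys the user (or the
// provider defaults) configured, mirroring the shape of the flatteners above.
func filterByConfiguredKeys(apiValues, configured map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{})
	for k := range configured {
		out[k] = apiValues[k]
	}
	return out
}

func main() {
	apiValues := map[string]interface{}{
		"env":          "prod",
		"system-label": "managed", // illustrative label set outside Terraform
	}
	configured := map[string]interface{}{"env": "prod"}
	fmt.Println(filterByConfiguredKeys(apiValues, configured)) // map[env:prod]
}
```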
nil -} - func expandCloudRunV2ServiceTemplateContainersLivenessProbeGrpc(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 { @@ -4010,6 +3995,7 @@ func expandCloudRunV2ServiceTemplateVolumesCloudSqlInstance(v interface{}, d tpg } func expandCloudRunV2ServiceTemplateVolumesCloudSqlInstanceInstances(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() return v, nil } @@ -4087,3 +4073,25 @@ func expandCloudRunV2ServiceTrafficPercent(v interface{}, d tpgresource.Terrafor func expandCloudRunV2ServiceTrafficTag(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandCloudRunV2ServiceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandCloudRunV2ServiceEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_service_generated_test.go b/google/services/cloudrunv2/resource_cloud_run_v2_service_generated_test.go index abc92e6add3..bde6e836783 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_service_generated_test.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_service_generated_test.go @@ -49,7 +49,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceBasicExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -91,7 +91,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceSqlExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -200,7 +200,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceVpcaccessExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -265,7 +265,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceDirectvpcExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -313,7 +313,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceProbesExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + 
ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -367,7 +367,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceSecretExample(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "annotations", "terraform_labels"}, }, }, }) diff --git a/google/services/cloudrunv2/resource_cloud_run_v2_service_test.go b/google/services/cloudrunv2/resource_cloud_run_v2_service_test.go index 63bb6ea5fa8..4bd34c7fd7d 100644 --- a/google/services/cloudrunv2/resource_cloud_run_v2_service_test.go +++ b/google/services/cloudrunv2/resource_cloud_run_v2_service_test.go @@ -32,7 +32,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceFullUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations", "labels", "terraform_labels"}, }, { Config: testAccCloudRunV2Service_cloudrunv2ServiceFullUpdate(context), @@ -41,7 +41,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceFullUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations", "labels", "terraform_labels"}, }, }, }) @@ -229,7 +229,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceTCPProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, { Config: testAccCloudRunV2Service_cloudrunv2ServiceUpdateWithTCPStartupProbeAndHTTPLivenessProbe(context), @@ -238,7 +238,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceTCPProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, }, }) @@ -263,7 +263,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceHTTPProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, { Config: testAccCloudRunV2Service_cloudrunv2ServiceUpdateWithHTTPStartupProbe(context), @@ -272,7 +272,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceHTTPProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, }, }) @@ -298,7 +298,7 @@ func TestAccCloudRunV2Service_cloudrunv2ServiceGRPCProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, { Config: testAccCloudRunV2Service_cloudRunServiceUpdateWithGRPCLivenessProbe(context), @@ -307,7 +307,7 @@ func 
TestAccCloudRunV2Service_cloudrunv2ServiceGRPCProbesUpdate(t *testing.T) { ResourceName: "google_cloud_run_v2_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "annotations"}, }, // The following test steps of gRPC startup probe are expected to fail with startup probe check failures. // This is because, due to the unavailability of ready-to-use container images of a gRPC service that diff --git a/google/services/cloudscheduler/resource_cloud_scheduler_job.go b/google/services/cloudscheduler/resource_cloud_scheduler_job.go index 25a7978a16a..daa960a198d 100644 --- a/google/services/cloudscheduler/resource_cloud_scheduler_job.go +++ b/google/services/cloudscheduler/resource_cloud_scheduler_job.go @@ -121,6 +121,8 @@ func ResourceCloudSchedulerJob() *schema.Resource { CustomizeDiff: customdiff.All( validateAuthHeaders, + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, ), Schema: map[string]*schema.Schema{ @@ -872,10 +874,10 @@ func resourceCloudSchedulerJobDelete(d *schema.ResourceData, meta interface{}) e func resourceCloudSchedulerJobImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/jobs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/jobs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/cloudtasks/resource_cloud_tasks_queue.go b/google/services/cloudtasks/resource_cloud_tasks_queue.go index 2c8a70d6bd3..eb0581221c7 100644 --- a/google/services/cloudtasks/resource_cloud_tasks_queue.go +++ b/google/services/cloudtasks/resource_cloud_tasks_queue.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -55,6 +56,10 @@ func ResourceCloudTasksQueue() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -531,9 +536,9 @@ func resourceCloudTasksQueueDelete(d *schema.ResourceData, meta interface{}) err func resourceCloudTasksQueueImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/queues/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/queues/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/composer/data_source_google_composer_environment.go b/google/services/composer/data_source_google_composer_environment.go index 755b73cf8b2..92c37b9ea57 100644 --- a/google/services/composer/data_source_google_composer_environment.go +++ b/google/services/composer/data_source_google_composer_environment.go @@ -37,7 +37,20 @@ func dataSourceGoogleComposerEnvironmentRead(d *schema.ResourceData, meta interf } envName := 
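The import patterns for Cloud Scheduler jobs and Cloud Tasks queues above gain `^` and `$` anchors, matching the change made to the Cloud Run v2 resources earlier in the diff. A standalone sketch of why anchoring matters for `ParseImportId`-style matching (the capture group names here are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// Without ^ and $, a short pattern such as "{location}/{name}" can also match
// inside a longer "projects/{project}/locations/{location}/jobs/{name}" ID and
// capture the wrong path segments; anchoring forces a full-string match.
var importIDPattern = regexp.MustCompile(`^projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/jobs/(?P<name>[^/]+)$`)

func main() {
	id := "projects/my-proj/locations/us-central1/jobs/my-job"
	match := importIDPattern.FindStringSubmatch(id)
	if match == nil {
		fmt.Println("no match")
		return
	}
	for i, group := range importIDPattern.SubexpNames() {
		if group != "" {
			fmt.Printf("%s = %s\n", group, match[i])
		}
	}
}
```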
d.Get("name").(string) - d.SetId(fmt.Sprintf("projects/%s/locations/%s/environments/%s", project, region, envName)) + id := fmt.Sprintf("projects/%s/locations/%s/environments/%s", project, region, envName) + d.SetId(id) + err = resourceComposerEnvironmentRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } - return resourceComposerEnvironmentRead(d, meta) + return nil } diff --git a/google/services/composer/data_source_google_composer_environment_test.go b/google/services/composer/data_source_google_composer_environment_test.go index 94644554067..eff18ea0890 100644 --- a/google/services/composer/data_source_google_composer_environment_test.go +++ b/google/services/composer/data_source_google_composer_environment_test.go @@ -27,6 +27,8 @@ func TestAccDataSourceComposerEnvironment_basic(t *testing.T) { { Config: testAccDataSourceComposerEnvironment_basic(context), Check: resource.ComposeTestCheckFunc( + acctest.CheckDataSourceStateMatchesResourceState("data.google_composer_environment.test", + "google_composer_environment.test"), testAccCheckGoogleComposerEnvironmentMeta("data.google_composer_environment.test"), ), }, @@ -93,6 +95,9 @@ resource "google_composer_environment" "test" { image_version = "composer-1-airflow-2" } } + labels = { + my-label = "my-label-value" + } } // use a separate network to avoid conflicts with other tests running in parallel diff --git a/google/services/composer/resource_composer_environment.go b/google/services/composer/resource_composer_environment.go index 428be013ba0..5e6020b89fc 100644 --- a/google/services/composer/resource_composer_environment.go +++ b/google/services/composer/resource_composer_environment.go @@ -10,6 +10,7 @@ import ( "time" "github.com/hashicorp/go-version" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -147,6 +148,12 @@ func ResourceComposerEnvironment() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -800,10 +807,27 @@ func ResourceComposerEnvironment() *schema.Resource { }, }, "labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `User-defined labels for this environment. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?. Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?. No more than 64 labels can be associated with a given environment. Both keys and values must be <= 128 bytes in size. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, - Description: `User-defined labels for this environment. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: Label keys must be between 1 and 63 characters long and must conform to the following regular expression: [a-z]([-a-z0-9]*[a-z0-9])?. Label values must be between 0 and 63 characters long and must conform to the regular expression ([a-z]([-a-z0-9]*[a-z0-9])?)?. No more than 64 labels can be associated with a given environment. Both keys and values must be <= 128 bytes in size.`, }, }, UseJSONNumber: true, @@ -829,7 +853,7 @@ func resourceComposerEnvironmentCreate(d *schema.ResourceData, meta interface{}) env := &composer.Environment{ Name: envName.ResourceName(), - Labels: tpgresource.ExpandLabels(d), + Labels: tpgresource.ExpandEffectiveLabels(d), Config: transformedConfig, } @@ -907,8 +931,14 @@ func resourceComposerEnvironmentRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("config", flattenComposerEnvironmentConfig(res.Config)); err != nil { return fmt.Errorf("Error setting Environment: %s", err) } - if err := d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("Error setting Environment: %s", err) + if err := tpgresource.SetLabels(res.Labels, d, "labels"); err != nil { + return fmt.Errorf("Error setting Environment labels: %s", err) + } + if err := tpgresource.SetLabels(res.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("Error setting Environment effective_labels: %s", err) } return nil } @@ -1117,8 +1147,8 @@ func resourceComposerEnvironmentUpdate(d *schema.ResourceData, meta interface{}) } } - if d.HasChange("labels") { - patchEnv := &composer.Environment{Labels: tpgresource.ExpandLabels(d)} + if d.HasChange("effective_labels") { + patchEnv := &composer.Environment{Labels: tpgresource.ExpandEffectiveLabels(d)} err := resourceComposerEnvironmentPatchField("labels", userAgent, patchEnv, d, tfConfig) if err != nil { return err diff --git a/google/services/composer/resource_composer_environment_test.go b/google/services/composer/resource_composer_environment_test.go index ecefaaf1092..eadccde9fd2 100644 --- a/google/services/composer/resource_composer_environment_test.go +++ b/google/services/composer/resource_composer_environment_test.go @@ -98,9 +98,10 @@ func TestAccComposerEnvironment_update(t *testing.T) { Config: testAccComposerEnvironment_update(envName, network, subnetwork), }, { - ResourceName: "google_composer_environment.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_composer_environment.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, // This is a terrible clean-up step in 
order to get destroy to succeed, // due to dangling firewall rules left by the Composer Environment blocking network deletion. diff --git a/google/services/compute/data_source_compute_health_check.go b/google/services/compute/data_source_compute_health_check.go index 5b9347e4cff..17758d7a188 100644 --- a/google/services/compute/data_source_compute_health_check.go +++ b/google/services/compute/data_source_compute_health_check.go @@ -3,6 +3,8 @@ package compute import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -31,5 +33,14 @@ func dataSourceGoogleComputeHealthCheckRead(d *schema.ResourceData, meta interfa } d.SetId(id) - return resourceComputeHealthCheckRead(d, meta) + err = resourceComputeHealthCheckRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_compute_network_endpoint_group.go b/google/services/compute/data_source_compute_network_endpoint_group.go index 181a75af574..3c6a237f333 100644 --- a/google/services/compute/data_source_compute_network_endpoint_group.go +++ b/google/services/compute/data_source_compute_network_endpoint_group.go @@ -29,6 +29,7 @@ func DataSourceGoogleComputeNetworkEndpointGroup() *schema.Resource { func dataSourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) + id := "" if name, ok := d.GetOk("name"); ok { project, err := tpgresource.GetProject(d, config) if err != nil { @@ -38,7 +39,8 @@ func dataSourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta inte if err != nil { return err } - d.SetId(fmt.Sprintf("projects/%s/zones/%s/networkEndpointGroups/%s", project, zone, name.(string))) + id = fmt.Sprintf("projects/%s/zones/%s/networkEndpointGroups/%s", project, zone, name.(string)) + d.SetId(id) } else if selfLink, ok := d.GetOk("self_link"); ok { parsed, err := tpgresource.ParseNetworkEndpointGroupFieldValue(selfLink.(string), d, config) if err != nil { @@ -53,10 +55,20 @@ func dataSourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta inte if err := d.Set("project", parsed.Project); err != nil { return fmt.Errorf("Error setting project: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/zones/%s/networkEndpointGroups/%s", parsed.Project, parsed.Zone, parsed.Name)) + id = fmt.Sprintf("projects/%s/zones/%s/networkEndpointGroups/%s", parsed.Project, parsed.Zone, parsed.Name) + d.SetId(id) } else { return errors.New("Must provide either `self_link` or `zone/name`") } - return resourceComputeNetworkEndpointGroupRead(d, meta) + err := resourceComputeNetworkEndpointGroupRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_compute_network_peering.go b/google/services/compute/data_source_compute_network_peering.go index 174c5e04eb4..f0c7e0c4ffa 100644 --- a/google/services/compute/data_source_compute_network_peering.go +++ b/google/services/compute/data_source_compute_network_peering.go @@ -37,7 +37,17 @@ func dataSourceComputeNetworkPeeringRead(d *schema.ResourceData, meta interface{ if err != nil { return err } - d.SetId(fmt.Sprintf("%s/%s", networkFieldValue.Name, d.Get("name").(string))) + id := fmt.Sprintf("%s/%s", networkFieldValue.Name, 
d.Get("name").(string)) + d.SetId(id) - return resourceComputeNetworkPeeringRead(d, meta) + err = resourceComputeNetworkPeeringRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_address.go b/google/services/compute/data_source_google_compute_address.go index 6c36d08e5f8..c52650b804b 100644 --- a/google/services/compute/data_source_google_compute_address.go +++ b/google/services/compute/data_source_google_compute_address.go @@ -110,9 +110,11 @@ func dataSourceGoogleComputeAddressRead(d *schema.ResourceData, meta interface{} } name := d.Get("name").(string) + id := fmt.Sprintf("projects/%s/regions/%s/addresses/%s", project, region, name) + address, err := config.NewComputeClient(userAgent).Addresses.Get(project, region, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Address Not Found : %s", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Address Not Found : %s", name), id) } if err := d.Set("address", address.Address); err != nil { @@ -149,7 +151,7 @@ func dataSourceGoogleComputeAddressRead(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error setting region: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/addresses/%s", project, region, name)) + d.SetId(id) return nil } diff --git a/google/services/compute/data_source_google_compute_backend_bucket.go b/google/services/compute/data_source_google_compute_backend_bucket.go index 2c0acbb4b60..27a2a1ef9e6 100644 --- a/google/services/compute/data_source_google_compute_backend_bucket.go +++ b/google/services/compute/data_source_google_compute_backend_bucket.go @@ -35,7 +35,17 @@ func dataSourceComputeBackendBucketRead(d *schema.ResourceData, meta interface{} return err } - d.SetId(fmt.Sprintf("projects/%s/global/backendBuckets/%s", project, backendBucketName)) + id := fmt.Sprintf("projects/%s/global/backendBuckets/%s", project, backendBucketName) + d.SetId(id) - return resourceComputeBackendBucketRead(d, meta) + err = resourceComputeBackendBucketRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_backend_service.go b/google/services/compute/data_source_google_compute_backend_service.go index a906e270ea6..ea2e9946c06 100644 --- a/google/services/compute/data_source_google_compute_backend_service.go +++ b/google/services/compute/data_source_google_compute_backend_service.go @@ -35,7 +35,17 @@ func dataSourceComputeBackendServiceRead(d *schema.ResourceData, meta interface{ return err } - d.SetId(fmt.Sprintf("projects/%s/global/backendServices/%s", project, serviceName)) + id := fmt.Sprintf("projects/%s/global/backendServices/%s", project, serviceName) + d.SetId(id) - return resourceComputeBackendServiceRead(d, meta) + err = resourceComputeBackendServiceRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_default_service_account.go b/google/services/compute/data_source_google_compute_default_service_account.go index 077c67ee762..f26671fbb15 100644 --- a/google/services/compute/data_source_google_compute_default_service_account.go +++ b/google/services/compute/data_source_google_compute_default_service_account.go @@ -57,7 +57,7 @@ 
func dataSourceGoogleComputeDefaultServiceAccountRead(d *schema.ResourceData, me projectCompResource, err := config.NewComputeClient(userAgent).Projects.Get(project).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GCE default service account") + return transport_tpg.HandleDataSourceNotFoundError(err, d, "GCE default service account", fmt.Sprintf("%q GCE default service account", project)) } serviceAccountName, err := tpgresource.ServiceAccountFQN(projectCompResource.DefaultServiceAccount, d, config) @@ -67,7 +67,7 @@ func dataSourceGoogleComputeDefaultServiceAccountRead(d *schema.ResourceData, me sa, err := config.NewIamClient(userAgent).Projects.ServiceAccounts.Get(serviceAccountName).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName), serviceAccountName) } d.SetId(sa.Name) diff --git a/google/services/compute/data_source_google_compute_disk.go b/google/services/compute/data_source_google_compute_disk.go index a2d03cf26df..e0270524949 100644 --- a/google/services/compute/data_source_google_compute_disk.go +++ b/google/services/compute/data_source_google_compute_disk.go @@ -31,5 +31,18 @@ func dataSourceGoogleComputeDiskRead(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceComputeDiskRead(d, meta) + err = resourceComputeDiskRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_disk_test.go b/google/services/compute/data_source_google_compute_disk_test.go index 8bb7888dfcc..979c5a742f8 100644 --- a/google/services/compute/data_source_google_compute_disk_test.go +++ b/google/services/compute/data_source_google_compute_disk_test.go @@ -34,7 +34,10 @@ func TestAccDataSourceGoogleComputeDisk_basic(t *testing.T) { func testAccDataSourceGoogleComputeDisk_basic(context map[string]interface{}) string { return acctest.Nprintf(` resource "google_compute_disk" "foo" { - name = "tf-test-compute-disk-%{random_suffix}" + name = "tf-test-compute-disk-%{random_suffix}" + labels = { + my-label = "my-label-value" + } } data "google_compute_disk" "foo" { diff --git a/google/services/compute/data_source_google_compute_forwarding_rule.go b/google/services/compute/data_source_google_compute_forwarding_rule.go index f8d546310a4..67774619ef1 100644 --- a/google/services/compute/data_source_google_compute_forwarding_rule.go +++ b/google/services/compute/data_source_google_compute_forwarding_rule.go @@ -41,7 +41,21 @@ func dataSourceGoogleComputeForwardingRuleRead(d *schema.ResourceData, meta inte return err } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/forwardingRules/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/regions/%s/forwardingRules/%s", project, region, name) + d.SetId(id) - return resourceComputeForwardingRuleRead(d, meta) + err = resourceComputeForwardingRuleRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_forwarding_rule_test.go 
b/google/services/compute/data_source_google_compute_forwarding_rule_test.go index e76c60c2b22..5512f3ae126 100644 --- a/google/services/compute/data_source_google_compute_forwarding_rule_test.go +++ b/google/services/compute/data_source_google_compute_forwarding_rule_test.go @@ -42,6 +42,9 @@ resource "google_compute_forwarding_rule" "foobar-fr" { name = "%s" port_range = "80-81" target = google_compute_target_pool.foobar-tp.self_link + labels = { + my-label = "my-label-value" + } } data "google_compute_forwarding_rule" "my_forwarding_rule" { diff --git a/google/services/compute/data_source_google_compute_global_address.go b/google/services/compute/data_source_google_compute_global_address.go index 9762f076e90..1d9b42c43bd 100644 --- a/google/services/compute/data_source_google_compute_global_address.go +++ b/google/services/compute/data_source_google_compute_global_address.go @@ -91,9 +91,11 @@ func dataSourceGoogleComputeGlobalAddressRead(d *schema.ResourceData, meta inter return err } name := d.Get("name").(string) + id := fmt.Sprintf("projects/%s/global/addresses/%s", project, name) + address, err := config.NewComputeClient(userAgent).GlobalAddresses.Get(project, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Global Address Not Found : %s", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Global Address Not Found : %s", name), id) } if err := d.Set("address", address.Address); err != nil { @@ -126,6 +128,6 @@ func dataSourceGoogleComputeGlobalAddressRead(d *schema.ResourceData, meta inter if err := d.Set("project", project); err != nil { return fmt.Errorf("Error setting project: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/global/addresses/%s", project, name)) + d.SetId(id) return nil } diff --git a/google/services/compute/data_source_google_compute_ha_vpn_gateway.go b/google/services/compute/data_source_google_compute_ha_vpn_gateway.go index bfa88cee79a..01f80e88137 100644 --- a/google/services/compute/data_source_google_compute_ha_vpn_gateway.go +++ b/google/services/compute/data_source_google_compute_ha_vpn_gateway.go @@ -41,7 +41,17 @@ func dataSourceGoogleComputeHaVpnGatewayRead(d *schema.ResourceData, meta interf return err } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/vpnGateways/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/regions/%s/vpnGateways/%s", project, region, name) + d.SetId(id) - return resourceComputeHaVpnGatewayRead(d, meta) + err = resourceComputeHaVpnGatewayRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_instance.go b/google/services/compute/data_source_google_compute_instance.go index f50208fbb00..acab1b38540 100644 --- a/google/services/compute/data_source_google_compute_instance.go +++ b/google/services/compute/data_source_google_compute_instance.go @@ -35,9 +35,11 @@ func dataSourceGoogleComputeInstanceRead(d *schema.ResourceData, meta interface{ return err } + id := fmt.Sprintf("projects/%s/zones/%s/instances/%s", project, zone, name) + instance, err := config.NewComputeClient(userAgent).Instances.Get(project, zone, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Instance %s", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Instance %s", name), id) } md := flattenMetadataBeta(instance.Metadata) @@ -97,6 +99,10 @@ func 
dataSourceGoogleComputeInstanceRead(d *schema.ResourceData, meta interface{ return err } + if err := d.Set("terraform_labels", instance.Labels); err != nil { + return err + } + if instance.LabelFingerprint != "" { if err := d.Set("label_fingerprint", instance.LabelFingerprint); err != nil { return fmt.Errorf("Error setting label_fingerprint: %s", err) diff --git a/google/services/compute/data_source_google_compute_instance_group.go b/google/services/compute/data_source_google_compute_instance_group.go index 8dddeb27fcd..bc3fd4a6f78 100644 --- a/google/services/compute/data_source_google_compute_instance_group.go +++ b/google/services/compute/data_source_google_compute_instance_group.go @@ -86,6 +86,7 @@ func DataSourceGoogleComputeInstanceGroup() *schema.Resource { func dataSourceComputeInstanceGroupRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) + id := "" if name, ok := d.GetOk("name"); ok { zone, err := tpgresource.GetZone(d, config) if err != nil { @@ -95,7 +96,7 @@ func dataSourceComputeInstanceGroupRead(d *schema.ResourceData, meta interface{} if err != nil { return err } - d.SetId(fmt.Sprintf("projects/%s/zones/%s/instanceGroups/%s", project, zone, name.(string))) + id = fmt.Sprintf("projects/%s/zones/%s/instanceGroups/%s", project, zone, name.(string)) } else if selfLink, ok := d.GetOk("self_link"); ok { parsed, err := tpgresource.ParseInstanceGroupFieldValue(selfLink.(string), d, config) if err != nil { @@ -110,10 +111,20 @@ func dataSourceComputeInstanceGroupRead(d *schema.ResourceData, meta interface{} if err := d.Set("project", parsed.Project); err != nil { return fmt.Errorf("Error setting project: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/zones/%s/instanceGroups/%s", parsed.Project, parsed.Zone, parsed.Name)) + id = fmt.Sprintf("projects/%s/zones/%s/instanceGroups/%s", parsed.Project, parsed.Zone, parsed.Name) } else { return errors.New("Must provide either `self_link` or `zone/name`") } + d.SetId(id) - return resourceComputeInstanceGroupRead(d, meta) + err := resourceComputeInstanceGroupRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_instance_template.go b/google/services/compute/data_source_google_compute_instance_template.go index 5370c964842..762ecd6a68d 100644 --- a/google/services/compute/data_source_google_compute_instance_template.go +++ b/google/services/compute/data_source_google_compute_instance_template.go @@ -89,7 +89,10 @@ func datasourceComputeInstanceTemplateRead(d *schema.ResourceData, meta interfac func retrieveInstance(d *schema.ResourceData, meta interface{}, project, name string) error { d.SetId("projects/" + project + "/global/instanceTemplates/" + name) - return resourceComputeInstanceTemplateRead(d, meta) + if err := resourceComputeInstanceTemplateRead(d, meta); err != nil { + return err + } + return tpgresource.SetDataSourceLabels(d) } func retrieveInstanceFromUniqueId(d *schema.ResourceData, meta interface{}, project, self_link_unique string) error { @@ -97,7 +100,10 @@ func retrieveInstanceFromUniqueId(d *schema.ResourceData, meta interface{}, proj d.SetId(normalId) d.Set("self_link_unique", self_link_unique) - return resourceComputeInstanceTemplateRead(d, meta) + if err := resourceComputeInstanceTemplateRead(d, meta); err != nil { + return err + } + return tpgresource.SetDataSourceLabels(d) } // ByCreationTimestamp implements sort.Interface for 
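The instance-template and snapshot data sources above now call `tpgresource.SetDataSourceLabels(d)` after delegating to the resource read. A plausible reading, and only an assumption here, is that the helper promotes the full `effective_labels` map onto the data source's `labels` and `terraform_labels` outputs, since a data source should report every label on the object rather than the configured subset a resource read keeps. A sketch of that idea with plain maps (not the helper's real signature):

```go
package main

import "fmt"

// promoteEffectiveLabels is an illustrative stand-in for what a data source
// needs after reusing a resource read: expose the complete label map on both
// user-facing outputs. This is an assumption about SetDataSourceLabels, not a
// copy of it.
func promoteEffectiveLabels(effective map[string]string) (labels, terraformLabels map[string]string) {
	labels = make(map[string]string, len(effective))
	terraformLabels = make(map[string]string, len(effective))
	for k, v := range effective {
		labels[k] = v
		terraformLabels[k] = v
	}
	return labels, terraformLabels
}

func main() {
	effective := map[string]string{"my-label": "my-label-value", "goog-managed": "true"}
	labels, terraformLabels := promoteEffectiveLabels(effective)
	fmt.Println(labels, terraformLabels)
}
```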
[]*InstanceTemplate based on diff --git a/google/services/compute/data_source_google_compute_instance_template_test.go b/google/services/compute/data_source_google_compute_instance_template_test.go index 7d91dc5176d..0f16f8079cb 100644 --- a/google/services/compute/data_source_google_compute_instance_template_test.go +++ b/google/services/compute/data_source_google_compute_instance_template_test.go @@ -117,6 +117,9 @@ resource "google_compute_instance_template" "default" { network_interface { network = "default" } + labels = { + my-label = "my-label-value" + } } data "google_compute_instance_template" "default" { diff --git a/google/services/compute/data_source_google_compute_network.go b/google/services/compute/data_source_google_compute_network.go index bc317f9d06f..df05602056b 100644 --- a/google/services/compute/data_source_google_compute_network.go +++ b/google/services/compute/data_source_google_compute_network.go @@ -61,9 +61,12 @@ func dataSourceGoogleComputeNetworkRead(d *schema.ResourceData, meta interface{} return err } name := d.Get("name").(string) + + id := fmt.Sprintf("projects/%s/global/networks/%s", project, name) + network, err := config.NewComputeClient(userAgent).Networks.Get(project, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Network Not Found : %s", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Network Not Found : %s", name), id) } if err := d.Set("gateway_ipv4", network.GatewayIPv4); err != nil { return fmt.Errorf("Error setting gateway_ipv4: %s", err) @@ -77,6 +80,6 @@ func dataSourceGoogleComputeNetworkRead(d *schema.ResourceData, meta interface{} if err := d.Set("subnetworks_self_links", network.Subnetworks); err != nil { return fmt.Errorf("Error setting subnetworks_self_links: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/global/networks/%s", project, network.Name)) + d.SetId(id) return nil } diff --git a/google/services/compute/data_source_google_compute_region_instance_group.go b/google/services/compute/data_source_google_compute_region_instance_group.go index e957d114f87..a7f824bee63 100644 --- a/google/services/compute/data_source_google_compute_region_instance_group.go +++ b/google/services/compute/data_source_google_compute_region_instance_group.go @@ -101,11 +101,11 @@ func dataSourceComputeRegionInstanceGroupRead(d *schema.ResourceData, meta inter if err != nil { return err } - + id := fmt.Sprintf("projects/%s/regions/%s/instanceGroups/%s", project, region, name) instanceGroup, err := config.NewComputeClient(userAgent).RegionInstanceGroups.Get( project, region, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Region Instance Group %q", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Region Instance Group %q", name), id) } members, err := config.NewComputeClient(userAgent).RegionInstanceGroups.ListInstances( @@ -126,7 +126,7 @@ func dataSourceComputeRegionInstanceGroupRead(d *schema.ResourceData, meta inter return fmt.Errorf("Error setting instances: %s", err) } } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/instanceGroups/%s", project, region, name)) + d.SetId(id) if err := d.Set("self_link", instanceGroup.SelfLink); err != nil { return fmt.Errorf("Error setting self_link: %s", err) } diff --git a/google/services/compute/data_source_google_compute_region_instance_template.go b/google/services/compute/data_source_google_compute_region_instance_template.go index cce92006754..149532db8e5 100644 --- 
a/google/services/compute/data_source_google_compute_region_instance_template.go +++ b/google/services/compute/data_source_google_compute_region_instance_template.go @@ -121,5 +121,8 @@ func datasourceComputeRegionInstanceTemplateRead(d *schema.ResourceData, meta in func retrieveInstances(d *schema.ResourceData, meta interface{}, project, region, name string) error { d.SetId("projects/" + project + "/regions/" + region + "/instanceTemplates/" + name) - return resourceComputeRegionInstanceTemplateRead(d, meta) + if err := resourceComputeRegionInstanceTemplateRead(d, meta); err != nil { + return err + } + return tpgresource.SetDataSourceLabels(d) } diff --git a/google/services/compute/data_source_google_compute_region_network_endpoint_group.go b/google/services/compute/data_source_google_compute_region_network_endpoint_group.go index ef7597b2e1f..0cae9ad6156 100644 --- a/google/services/compute/data_source_google_compute_region_network_endpoint_group.go +++ b/google/services/compute/data_source_google_compute_region_network_endpoint_group.go @@ -28,6 +28,7 @@ func DataSourceGoogleComputeRegionNetworkEndpointGroup() *schema.Resource { func dataSourceComputeRegionNetworkEndpointGroupRead(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) + id := "" if name, ok := d.GetOk("name"); ok { project, err := tpgresource.GetProject(d, config) if err != nil { @@ -38,7 +39,7 @@ func dataSourceComputeRegionNetworkEndpointGroupRead(d *schema.ResourceData, met return err } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/networkEndpointGroups/%s", project, region, name.(string))) + id = fmt.Sprintf("projects/%s/regions/%s/networkEndpointGroups/%s", project, region, name.(string)) } else if selfLink, ok := d.GetOk("self_link"); ok { parsed, err := tpgresource.ParseNetworkEndpointGroupRegionalFieldValue(selfLink.(string), d, config) if err != nil { @@ -54,10 +55,19 @@ func dataSourceComputeRegionNetworkEndpointGroupRead(d *schema.ResourceData, met return fmt.Errorf("Error setting region: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/networkEndpointGroups/%s", parsed.Project, parsed.Region, parsed.Name)) + id = fmt.Sprintf("projects/%s/regions/%s/networkEndpointGroups/%s", parsed.Project, parsed.Region, parsed.Name) } else { return errors.New("Must provide either `self_link` or `region/name`") } + d.SetId(id) + err := resourceComputeRegionNetworkEndpointGroupRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } - return resourceComputeRegionNetworkEndpointGroupRead(d, meta) + return nil } diff --git a/google/services/compute/data_source_google_compute_region_ssl_certificate.go b/google/services/compute/data_source_google_compute_region_ssl_certificate.go index 7ad067d252d..4f4c3617c84 100644 --- a/google/services/compute/data_source_google_compute_region_ssl_certificate.go +++ b/google/services/compute/data_source_google_compute_region_ssl_certificate.go @@ -35,7 +35,16 @@ func dataSourceComputeRegionSslCertificateRead(d *schema.ResourceData, meta inte return err } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/sslCertificates/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/regions/%s/sslCertificates/%s", project, region, name) + d.SetId(id) - return resourceComputeRegionSslCertificateRead(d, meta) + err = resourceComputeRegionSslCertificateRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git 
a/google/services/compute/data_source_google_compute_resource_policy.go b/google/services/compute/data_source_google_compute_resource_policy.go index 89fb4ea20cc..dddbb1faff9 100644 --- a/google/services/compute/data_source_google_compute_resource_policy.go +++ b/google/services/compute/data_source_google_compute_resource_policy.go @@ -37,7 +37,17 @@ func dataSourceGoogleComputeResourcePolicyRead(d *schema.ResourceData, meta inte return err } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/resourcePolicies/%s", project, region, name)) + id := fmt.Sprintf("projects/%s/regions/%s/resourcePolicies/%s", project, region, name) + d.SetId(id) - return resourceComputeResourcePolicyRead(d, meta) + err = resourceComputeResourcePolicyRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_router.go b/google/services/compute/data_source_google_compute_router.go index 5b21f09e219..85a4107e797 100644 --- a/google/services/compute/data_source_google_compute_router.go +++ b/google/services/compute/data_source_google_compute_router.go @@ -3,6 +3,8 @@ package compute import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" ) @@ -24,5 +26,13 @@ func dataSourceComputeRouterRead(d *schema.ResourceData, meta interface{}) error routerName := d.Get("name").(string) d.SetId(routerName) - return resourceComputeRouterRead(d, meta) + err := resourceComputeRouterRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", routerName) + } + return nil } diff --git a/google/services/compute/data_source_google_compute_router_nat.go b/google/services/compute/data_source_google_compute_router_nat.go index 793abf2fcbb..1af79761e2c 100644 --- a/google/services/compute/data_source_google_compute_router_nat.go +++ b/google/services/compute/data_source_google_compute_router_nat.go @@ -33,5 +33,14 @@ func dataSourceGoogleComputeRouterNatRead(d *schema.ResourceData, meta interface } d.SetId(id) - return resourceComputeRouterNatRead(d, meta) + err = resourceComputeRouterNatRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_snapshot.go b/google/services/compute/data_source_google_compute_snapshot.go index 316f4d6b2e4..36aa06ca249 100644 --- a/google/services/compute/data_source_google_compute_snapshot.go +++ b/google/services/compute/data_source_google_compute_snapshot.go @@ -104,7 +104,10 @@ func dataSourceGoogleComputeSnapshotRead(d *schema.ResourceData, meta interface{ func retrieveSnapshot(d *schema.ResourceData, meta interface{}, project, name string) error { d.SetId("projects/" + project + "/global/snapshots/" + name) d.Set("name", name) - return resourceComputeSnapshotRead(d, meta) + if err := resourceComputeSnapshotRead(d, meta); err != nil { + return err + } + return tpgresource.SetDataSourceLabels(d) } // ByCreationTimestamp implements sort.Interface for []*Snapshot based on diff --git a/google/services/compute/data_source_google_compute_ssl_certificate.go b/google/services/compute/data_source_google_compute_ssl_certificate.go index 522f4e2c3b6..569b110dade 100644 --- a/google/services/compute/data_source_google_compute_ssl_certificate.go +++ 
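The snapshot and instance template data-source hunks above append one more step after the delegated resource Read: a call to the shared tpgresource.SetDataSourceLabels helper, so the data source's label fields reflect what the resource Read populated. A short sketch of that chaining, with a stand-in resource Read:

```go
// Sketch of the label-forwarding step: run the resource Read first, then let
// the shared SetDataSourceLabels helper align the data source's label fields.
// resourceExampleRead and the "examples" path are placeholders.
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-provider-google/google/tpgresource"
)

func resourceExampleRead(d *schema.ResourceData, meta interface{}) error { return nil }

func retrieveExample(d *schema.ResourceData, meta interface{}, project, name string) error {
	d.SetId("projects/" + project + "/global/examples/" + name)
	if err := resourceExampleRead(d, meta); err != nil {
		return err
	}
	return tpgresource.SetDataSourceLabels(d)
}
```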
b/google/services/compute/data_source_google_compute_ssl_certificate.go @@ -35,7 +35,17 @@ func dataSourceComputeSslCertificateRead(d *schema.ResourceData, meta interface{ } certificateName := d.Get("name").(string) - d.SetId(fmt.Sprintf("projects/%s/global/sslCertificates/%s", project, certificateName)) + id := fmt.Sprintf("projects/%s/global/sslCertificates/%s", project, certificateName) + d.SetId(id) - return resourceComputeSslCertificateRead(d, meta) + err = resourceComputeSslCertificateRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_ssl_policy.go b/google/services/compute/data_source_google_compute_ssl_policy.go index 5fd16673743..434b27be55a 100644 --- a/google/services/compute/data_source_google_compute_ssl_policy.go +++ b/google/services/compute/data_source_google_compute_ssl_policy.go @@ -35,7 +35,17 @@ func datasourceComputeSslPolicyRead(d *schema.ResourceData, meta interface{}) er } policyName := d.Get("name").(string) - d.SetId(fmt.Sprintf("projects/%s/global/sslPolicies/%s", project, policyName)) + id := fmt.Sprintf("projects/%s/global/sslPolicies/%s", project, policyName) + d.SetId(id) - return resourceComputeSslPolicyRead(d, meta) + err = resourceComputeSslPolicyRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_compute_subnetwork.go b/google/services/compute/data_source_google_compute_subnetwork.go index e94b5ecf1f4..a3ced3a06c2 100644 --- a/google/services/compute/data_source_google_compute_subnetwork.go +++ b/google/services/compute/data_source_google_compute_subnetwork.go @@ -87,10 +87,11 @@ func dataSourceGoogleComputeSubnetworkRead(d *schema.ResourceData, meta interfac if err != nil { return err } + id := fmt.Sprintf("projects/%s/regions/%s/subnetworks/%s", project, region, name) subnetwork, err := config.NewComputeClient(userAgent).Subnetworks.Get(project, region, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Subnetwork Not Found : %s", name)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Subnetwork Not Found : %s", name), id) } if err := d.Set("ip_cidr_range", subnetwork.IpCidrRange); err != nil { @@ -124,7 +125,7 @@ func dataSourceGoogleComputeSubnetworkRead(d *schema.ResourceData, meta interfac return fmt.Errorf("Error setting secondary_ip_range: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/subnetworks/%s", project, region, name)) + d.SetId(id) return nil } diff --git a/google/services/compute/data_source_google_compute_vpn_gateway.go b/google/services/compute/data_source_google_compute_vpn_gateway.go index 3fc6d855490..bf5ac233b54 100644 --- a/google/services/compute/data_source_google_compute_vpn_gateway.go +++ b/google/services/compute/data_source_google_compute_vpn_gateway.go @@ -69,12 +69,13 @@ func dataSourceGoogleComputeVpnGatewayRead(d *schema.ResourceData, meta interfac } name := d.Get("name").(string) + id := fmt.Sprintf("projects/%s/regions/%s/targetVpnGateways/%s", project, region, name) vpnGatewaysService := compute.NewTargetVpnGatewaysService(config.NewComputeClient(userAgent)) gateway, err := vpnGatewaysService.Get(project, region, name).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("VPN Gateway Not Found : %s", name)) + return 
transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("VPN Gateway Not Found : %s", name), id) } if err := d.Set("network", tpgresource.ConvertSelfLinkToV1(gateway.Network)); err != nil { return fmt.Errorf("Error setting network: %s", err) @@ -91,6 +92,6 @@ func dataSourceGoogleComputeVpnGatewayRead(d *schema.ResourceData, meta interfac if err := d.Set("project", project); err != nil { return fmt.Errorf("Error setting project: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/regions/%s/targetVpnGateways/%s", project, region, name)) + d.SetId(id) return nil } diff --git a/google/services/compute/data_source_google_global_compute_forwarding_rule.go b/google/services/compute/data_source_google_global_compute_forwarding_rule.go index 75b31c5da10..b60d1626b14 100644 --- a/google/services/compute/data_source_google_global_compute_forwarding_rule.go +++ b/google/services/compute/data_source_google_global_compute_forwarding_rule.go @@ -35,7 +35,21 @@ func dataSourceGoogleComputeGlobalForwardingRuleRead(d *schema.ResourceData, met return err } - d.SetId(fmt.Sprintf("projects/%s/global/forwardingRules/%s", project, name)) + id := fmt.Sprintf("projects/%s/global/forwardingRules/%s", project, name) + d.SetId(id) - return resourceComputeGlobalForwardingRuleRead(d, meta) + err = resourceComputeGlobalForwardingRuleRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + + return nil } diff --git a/google/services/compute/data_source_google_global_compute_forwarding_rule_test.go b/google/services/compute/data_source_google_global_compute_forwarding_rule_test.go index daef25a5164..1b244358733 100644 --- a/google/services/compute/data_source_google_global_compute_forwarding_rule_test.go +++ b/google/services/compute/data_source_google_global_compute_forwarding_rule_test.go @@ -34,6 +34,9 @@ resource "google_compute_global_forwarding_rule" "foobar-fr" { name = "%s" target = google_compute_target_http_proxy.default.id port_range = "80" + labels = { + my-label = "my-label-value" + } } resource "google_compute_target_http_proxy" "default" { diff --git a/google/services/compute/resource_compute_address.go b/google/services/compute/resource_compute_address.go index 6c892cf2275..f19d00a0956 100644 --- a/google/services/compute/resource_compute_address.go +++ b/google/services/compute/resource_compute_address.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceComputeAddress() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -479,10 +485,10 @@ func resourceComputeAddressDelete(d *schema.ResourceData, meta interface{}) erro func resourceComputeAddressImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/addresses/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/addresses/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + 
"^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_address_generated_test.go b/google/services/compute/resource_compute_address_generated_test.go index 98794e13a4a..fa4399835ed 100644 --- a/google/services/compute/resource_compute_address_generated_test.go +++ b/google/services/compute/resource_compute_address_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeAddress_addressBasicExample(t *testing.T) { ResourceName: "google_compute_address.ip_address", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) @@ -82,7 +82,7 @@ func TestAccComputeAddress_addressWithSubnetworkExample(t *testing.T) { ResourceName: "google_compute_address.internal_with_subnet_and_address", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) @@ -130,7 +130,7 @@ func TestAccComputeAddress_addressWithGceEndpointExample(t *testing.T) { ResourceName: "google_compute_address.internal_with_gce_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) @@ -165,7 +165,7 @@ func TestAccComputeAddress_addressWithSharedLoadbalancerVipExample(t *testing.T) ResourceName: "google_compute_address.internal_with_shared_loadbalancer_vip", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) @@ -200,7 +200,7 @@ func TestAccComputeAddress_instanceWithIpExample(t *testing.T) { ResourceName: "google_compute_address.static", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) @@ -257,7 +257,7 @@ func TestAccComputeAddress_computeAddressIpsecInterconnectExample(t *testing.T) ResourceName: "google_compute_address.ipsec-interconnect-address", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"subnetwork", "network", "region"}, + ImportStateVerifyIgnore: []string{"subnetwork", "network", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_address_test.go b/google/services/compute/resource_compute_address_test.go index e542fa61f5c..216e4b1ace9 100644 --- a/google/services/compute/resource_compute_address_test.go +++ b/google/services/compute/resource_compute_address_test.go @@ -20,11 +20,10 @@ func TestAccComputeAddress_networkTier(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccComputeAddress_networkTier(acctest.RandString(t, 10)), - }, - { - ResourceName: "google_compute_address.foobar", - ImportState: true, - ImportStateVerify: true, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_compute_address.foobar", "labels.%"), + resource.TestCheckNoResourceAttr("google_compute_address.foobar", "effective_labels.%"), + ), }, }, }) @@ -138,7 +137,6 @@ resource 
"google_compute_network" "default" { enable_ula_internal_ipv6 = true auto_create_subnetworks = false } - resource "google_compute_subnetwork" "foo" { name = "subnetwork-test-%s" ip_cidr_range = "10.0.0.0/16" @@ -147,7 +145,6 @@ resource "google_compute_subnetwork" "foo" { stack_type = "IPV4_IPV6" ipv6_access_type = "INTERNAL" } - resource "google_compute_address" "ipv6" { name = "tf-test-address-internal-ipv6-%s" subnetwork = google_compute_subnetwork.foo.self_link diff --git a/google/services/compute/resource_compute_attached_disk.go b/google/services/compute/resource_compute_attached_disk.go index 7e5b610c2cd..f10dd99efa3 100644 --- a/google/services/compute/resource_compute_attached_disk.go +++ b/google/services/compute/resource_compute_attached_disk.go @@ -11,6 +11,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -32,6 +33,11 @@ func ResourceComputeAttachedDisk() *schema.Resource { Delete: schema.DefaultTimeout(300 * time.Second), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "disk": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_autoscaler.go b/google/services/compute/resource_compute_autoscaler.go index 8ff939552aa..0123f737a9c 100644 --- a/google/services/compute/resource_compute_autoscaler.go +++ b/google/services/compute/resource_compute_autoscaler.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceComputeAutoscaler() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "autoscaling_policy": { Type: schema.TypeList, @@ -465,6 +471,14 @@ func resourceComputeAutoscalerRead(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error reading Autoscaler: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading Autoscaler: %s", err) + } + if err := d.Set("creation_timestamp", flattenComputeAutoscalerCreationTimestamp(res["creationTimestamp"], d, config)); err != nil { return fmt.Errorf("Error reading Autoscaler: %s", err) } @@ -480,9 +494,6 @@ func resourceComputeAutoscalerRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("target", flattenComputeAutoscalerTarget(res["target"], d, config)); err != nil { return fmt.Errorf("Error reading Autoscaler: %s", err) } - if err := d.Set("zone", flattenComputeAutoscalerZone(res["zone"], d, config)); err != nil { - return fmt.Errorf("Error reading Autoscaler: %s", err) - } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading Autoscaler: %s", err) } @@ -632,10 +643,10 @@ func resourceComputeAutoscalerDelete(d *schema.ResourceData, meta interface{}) e func resourceComputeAutoscalerImport(d 
*schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/autoscalers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/autoscalers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -980,13 +991,6 @@ func flattenComputeAutoscalerTarget(v interface{}, d *schema.ResourceData, confi return tpgresource.ConvertSelfLinkToV1(v.(string)) } -func flattenComputeAutoscalerZone(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - return tpgresource.ConvertSelfLinkToV1(v.(string)) -} - func expandComputeAutoscalerName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/compute/resource_compute_backend_bucket.go b/google/services/compute/resource_compute_backend_bucket.go index 87558efed3c..e773422827e 100644 --- a/google/services/compute/resource_compute_backend_bucket.go +++ b/google/services/compute/resource_compute_backend_bucket.go @@ -24,6 +24,7 @@ import ( "time" "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceComputeBackendBucket() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "bucket_name": { Type: schema.TypeString, @@ -647,9 +652,9 @@ func resourceComputeBackendBucketDelete(d *schema.ResourceData, meta interface{} func resourceComputeBackendBucketImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/backendBuckets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/backendBuckets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_backend_bucket_signed_url_key.go b/google/services/compute/resource_compute_backend_bucket_signed_url_key.go index e137af82079..8750cc9fc2f 100644 --- a/google/services/compute/resource_compute_backend_bucket_signed_url_key.go +++ b/google/services/compute/resource_compute_backend_bucket_signed_url_key.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -41,6 +42,10 @@ func ResourceComputeBackendBucketSignedUrlKey() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backend_bucket": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_backend_service.go b/google/services/compute/resource_compute_backend_service.go index 6f8fcd062d6..cec9d0f6b73 100644 --- a/google/services/compute/resource_compute_backend_service.go +++ 
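The import-format hunks here and throughout the section add `^`/`$` anchors to every ParseImportId pattern, so an import string must match a whole format rather than a fragment of it. A standalone illustration of the effect (not provider code):

```go
// Standalone illustration of why the import-id patterns gain ^...$ anchors:
// an unanchored pattern still matches the leading portion of a malformed,
// longer id, while the anchored pattern rejects it.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	unanchored := regexp.MustCompile(`projects/(?P<project>[^/]+)/zones/(?P<zone>[^/]+)/disks/(?P<name>[^/]+)`)
	anchored := regexp.MustCompile(`^projects/(?P<project>[^/]+)/zones/(?P<zone>[^/]+)/disks/(?P<name>[^/]+)$`)

	id := "projects/my-proj/zones/us-central1-a/disks/my-disk/unexpected-suffix"

	fmt.Println(unanchored.MatchString(id)) // true  - matches a prefix of the bad id
	fmt.Println(anchored.MatchString(id))   // false - whole string must match
}
```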
b/google/services/compute/resource_compute_backend_service.go @@ -26,6 +26,7 @@ import ( "time" "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -200,6 +201,9 @@ func ResourceComputeBackendService() *schema.Resource { }, SchemaVersion: 1, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "name": { @@ -1978,9 +1982,9 @@ func resourceComputeBackendServiceDelete(d *schema.ResourceData, meta interface{ func resourceComputeBackendServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/backendServices/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/backendServices/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_backend_service_signed_url_key.go b/google/services/compute/resource_compute_backend_service_signed_url_key.go index 332f95acc69..13536333d87 100644 --- a/google/services/compute/resource_compute_backend_service_signed_url_key.go +++ b/google/services/compute/resource_compute_backend_service_signed_url_key.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -41,6 +42,10 @@ func ResourceComputeBackendServiceSignedUrlKey() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backend_service": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_disk.go b/google/services/compute/resource_compute_disk.go index f36451f2f00..a41d33fd2e6 100644 --- a/google/services/compute/resource_compute_disk.go +++ b/google/services/compute/resource_compute_disk.go @@ -312,6 +312,8 @@ func ResourceComputeDisk() *schema.Resource { CustomizeDiff: customdiff.All( customdiff.ForceNewIfChange("size", IsDiskShrinkage), hyperDiskIopsUpdateDiffSupress, + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -441,10 +443,14 @@ For instance, the image 'centos-6-v20180104' includes its family name 'centos-6' These images can be referred by family name here.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this disk. A list of key->value pairs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this disk. A list of key->value pairs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "licenses": { Type: schema.TypeList, @@ -641,6 +647,12 @@ create the disk. 
Provide this when creating the disk.`, Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, @@ -683,6 +695,13 @@ that was later deleted and recreated under the same name, the source snapshot ID would identify the exact version of the snapshot that was used.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "users": { Type: schema.TypeList, Computed: true, @@ -742,12 +761,6 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandComputeDiskLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } nameProp, err := expandComputeDiskName(d.Get("name"), d, config) if err != nil { return err @@ -814,6 +827,12 @@ func resourceComputeDiskCreate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("licenses"); !tpgresource.IsEmptyValue(reflect.ValueOf(licensesProp)) && (ok || !reflect.DeepEqual(v, licensesProp)) { obj["licenses"] = licensesProp } + labelsProp, err := expandComputeDiskEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } zoneProp, err := expandComputeDiskZone(d.Get("zone"), d, config) if err != nil { return err @@ -1013,6 +1032,12 @@ func resourceComputeDiskRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("licenses", flattenComputeDiskLicenses(res["licenses"], d, config)); err != nil { return fmt.Errorf("Error reading Disk: %s", err) } + if err := d.Set("terraform_labels", flattenComputeDiskTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Disk: %s", err) + } + if err := d.Set("effective_labels", flattenComputeDiskEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Disk: %s", err) + } if err := d.Set("zone", flattenComputeDiskZone(res["zone"], d, config)); err != nil { return fmt.Errorf("Error reading Disk: %s", err) } @@ -1058,7 +1083,7 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error { d.Partial(true) - if d.HasChange("label_fingerprint") || d.HasChange("labels") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) labelFingerprintProp, err := expandComputeDiskLabelFingerprint(d.Get("label_fingerprint"), d, config) @@ -1067,10 +1092,10 @@ func resourceComputeDiskUpdate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := 
d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } - labelsProp, err := expandComputeDiskLabels(d.Get("labels"), d, config) + labelsProp, err := expandComputeDiskEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -1374,10 +1399,10 @@ func resourceComputeDiskDelete(d *schema.ResourceData, meta interface{}) error { func resourceComputeDiskImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/disks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/disks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1413,7 +1438,18 @@ func flattenComputeDiskLastDetachTimestamp(v interface{}, d *schema.ResourceData } func flattenComputeDiskLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeDiskName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1560,6 +1596,25 @@ func flattenComputeDiskLicenses(v interface{}, d *schema.ResourceData, config *t return tpgresource.ConvertAndMapStringArr(v.([]interface{}), tpgresource.ConvertSelfLinkToV1) } +func flattenComputeDiskTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeDiskEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenComputeDiskZone(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -1701,17 +1756,6 @@ func expandComputeDiskDescription(v interface{}, d tpgresource.TerraformResource return v, nil } -func expandComputeDiskLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandComputeDiskName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1814,6 +1858,17 @@ func expandComputeDiskLicenses(v interface{}, d tpgresource.TerraformResourceDat return req, nil } +func expandComputeDiskEffectiveLabels(v interface{}, d 
tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func expandComputeDiskZone(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { f, err := tpgresource.ParseGlobalFieldValue("zones", v.(string), "project", d, config, true) if err != nil { diff --git a/google/services/compute/resource_compute_disk_generated_test.go b/google/services/compute/resource_compute_disk_generated_test.go index 7bdd4b74d15..43c04b9ec25 100644 --- a/google/services/compute/resource_compute_disk_generated_test.go +++ b/google/services/compute/resource_compute_disk_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeDisk_diskBasicExample(t *testing.T) { ResourceName: "google_compute_disk.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "zone", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "zone", "snapshot", "labels", "terraform_labels"}, }, }, }) @@ -89,7 +89,7 @@ func TestAccComputeDisk_diskAsyncExample(t *testing.T) { ResourceName: "google_compute_disk.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "zone", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "zone", "snapshot", "labels", "terraform_labels"}, }, }, }) @@ -138,7 +138,7 @@ func TestAccComputeDisk_diskFeaturesExample(t *testing.T) { ResourceName: "google_compute_disk.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "zone", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "zone", "snapshot", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_disk_resource_policy_attachment.go b/google/services/compute/resource_compute_disk_resource_policy_attachment.go index bbfd5c9d26a..8d37a7e33c2 100644 --- a/google/services/compute/resource_compute_disk_resource_policy_attachment.go +++ b/google/services/compute/resource_compute_disk_resource_policy_attachment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,11 @@ func ResourceComputeDiskResourcePolicyAttachment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "disk": { Type: schema.TypeString, @@ -216,6 +222,14 @@ func resourceComputeDiskResourcePolicyAttachmentRead(d *schema.ResourceData, met return fmt.Errorf("Error reading DiskResourcePolicyAttachment: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading DiskResourcePolicyAttachment: %s", err) + } + if err := d.Set("name", flattenNestedComputeDiskResourcePolicyAttachmentName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading DiskResourcePolicyAttachment: %s", err) } @@ -304,10 +318,10 @@ func resourceComputeDiskResourcePolicyAttachmentDelete(d *schema.ResourceData, m func resourceComputeDiskResourcePolicyAttachmentImport(d *schema.ResourceData, meta interface{}) 
([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/disks/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/disks/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_disk_test.go b/google/services/compute/resource_compute_disk_test.go index cc428662293..614a4a27b56 100644 --- a/google/services/compute/resource_compute_disk_test.go +++ b/google/services/compute/resource_compute_disk_test.go @@ -337,17 +337,19 @@ func TestAccComputeDisk_update(t *testing.T) { Config: testAccComputeDisk_basic(diskName), }, { - ResourceName: "google_compute_disk.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_disk.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeDisk_updated(diskName), }, { - ResourceName: "google_compute_disk.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_disk.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -729,9 +731,10 @@ func TestAccComputeDisk_cloneDisk(t *testing.T) { ), }, { - ResourceName: "google_compute_disk.disk-clone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_disk.disk-clone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -750,17 +753,19 @@ func TestAccComputeDisk_featuresUpdated(t *testing.T) { Config: testAccComputeDisk_features(diskName), }, { - ResourceName: "google_compute_disk.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_disk.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeDisk_featuresUpdated(diskName), }, { - ResourceName: "google_compute_disk.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_disk.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_external_vpn_gateway.go b/google/services/compute/resource_compute_external_vpn_gateway.go index c0a64161e69..dfce30fbff1 100644 --- a/google/services/compute/resource_compute_external_vpn_gateway.go +++ b/google/services/compute/resource_compute_external_vpn_gateway.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceComputeExternalVpnGateway() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -96,10 +102,13 @@ it cannot be an IP address from Google Compute Engine.`, }, }, "labels": { - Type: schema.TypeMap, - 
Optional: true, - Description: `Labels for the external VPN gateway resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels for the external VPN gateway resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "redundancy_type": { Type: schema.TypeString, @@ -108,12 +117,25 @@ it cannot be an IP address from Google Compute Engine.`, ValidateFunc: verify.ValidateEnum([]string{"FOUR_IPS_REDUNDANCY", "SINGLE_IP_INTERNALLY_REDUNDANT", "TWO_IPS_REDUNDANCY", ""}), Description: `Indicates the redundancy type of this external VPN gateway Possible values: ["FOUR_IPS_REDUNDANCY", "SINGLE_IP_INTERNALLY_REDUNDANT", "TWO_IPS_REDUNDANCY"]`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, Description: `The fingerprint used for optimistic locking of this resource. Used internally during updates.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -143,12 +165,6 @@ func resourceComputeExternalVpnGatewayCreate(d *schema.ResourceData, meta interf } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandComputeExternalVpnGatewayLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeExternalVpnGatewayLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err @@ -173,6 +189,12 @@ func resourceComputeExternalVpnGatewayCreate(d *schema.ResourceData, meta interf } else if v, ok := d.GetOkExists("interface"); !tpgresource.IsEmptyValue(reflect.ValueOf(interfacesProp)) && (ok || !reflect.DeepEqual(v, interfacesProp)) { obj["interfaces"] = interfacesProp } + labelsProp, err := expandComputeExternalVpnGatewayEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/externalVpnGateways") if err != nil { @@ -286,6 +308,12 @@ func resourceComputeExternalVpnGatewayRead(d *schema.ResourceData, meta interfac if err := d.Set("interface", flattenComputeExternalVpnGatewayInterface(res["interfaces"], d, config)); err != nil { return fmt.Errorf("Error reading ExternalVpnGateway: %s", err) } + if err := d.Set("terraform_labels", 
flattenComputeExternalVpnGatewayTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ExternalVpnGateway: %s", err) + } + if err := d.Set("effective_labels", flattenComputeExternalVpnGatewayEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ExternalVpnGateway: %s", err) + } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading ExternalVpnGateway: %s", err) } @@ -310,21 +338,21 @@ func resourceComputeExternalVpnGatewayUpdate(d *schema.ResourceData, meta interf d.Partial(true) - if d.HasChange("labels") || d.HasChange("label_fingerprint") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) - labelsProp, err := expandComputeExternalVpnGatewayLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeExternalVpnGatewayLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := expandComputeExternalVpnGatewayEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/externalVpnGateways/{{name}}/setLabels") if err != nil { @@ -420,9 +448,9 @@ func resourceComputeExternalVpnGatewayDelete(d *schema.ResourceData, meta interf func resourceComputeExternalVpnGatewayImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/externalVpnGateways/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/externalVpnGateways/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -442,7 +470,18 @@ func flattenComputeExternalVpnGatewayDescription(v interface{}, d *schema.Resour } func flattenComputeExternalVpnGatewayLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeExternalVpnGatewayLabelFingerprint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -497,19 +536,27 @@ func flattenComputeExternalVpnGatewayInterfaceIpAddress(v interface{}, d *schema return v } -func expandComputeExternalVpnGatewayDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandComputeExternalVpnGatewayLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, 
error) { +func flattenComputeExternalVpnGatewayTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenComputeExternalVpnGatewayEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandComputeExternalVpnGatewayDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandComputeExternalVpnGatewayLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -560,3 +607,14 @@ func expandComputeExternalVpnGatewayInterfaceId(v interface{}, d tpgresource.Ter func expandComputeExternalVpnGatewayInterfaceIpAddress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandComputeExternalVpnGatewayEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/compute/resource_compute_external_vpn_gateway_generated_test.go b/google/services/compute/resource_compute_external_vpn_gateway_generated_test.go index ea597fa1bf8..43fb05e2b83 100644 --- a/google/services/compute/resource_compute_external_vpn_gateway_generated_test.go +++ b/google/services/compute/resource_compute_external_vpn_gateway_generated_test.go @@ -47,9 +47,10 @@ func TestAccComputeExternalVpnGateway_externalVpnGatewayExample(t *testing.T) { Config: testAccComputeExternalVpnGateway_externalVpnGatewayExample(context), }, { - ResourceName: "google_compute_external_vpn_gateway.external_gateway", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_external_vpn_gateway.external_gateway", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -177,9 +178,10 @@ func TestAccComputeExternalVpnGateway_onlyExternalVpnGatewayFullExample(t *testi Config: testAccComputeExternalVpnGateway_onlyExternalVpnGatewayFullExample(context), }, { - ResourceName: "google_compute_external_vpn_gateway.external_gateway", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_external_vpn_gateway.external_gateway", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -197,7 +199,6 @@ resource "google_compute_external_vpn_gateway" "external_gateway" { } labels = { key = "value" - otherkey = "" } } `, context) diff --git a/google/services/compute/resource_compute_external_vpn_gateway_test.go b/google/services/compute/resource_compute_external_vpn_gateway_test.go index 579c557c094..d03caecf39a 100644 --- a/google/services/compute/resource_compute_external_vpn_gateway_test.go +++ b/google/services/compute/resource_compute_external_vpn_gateway_test.go @@ 
-28,9 +28,10 @@ func TestAccComputeExternalVPNGateway_updateLabels(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeExternalVPNGateway_updateLabels(rnd, "test-updated", "test-updated"), @@ -40,9 +41,10 @@ func TestAccComputeExternalVPNGateway_updateLabels(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_firewall.go b/google/services/compute/resource_compute_firewall.go index 4dad3fdfb4d..5de7372794f 100644 --- a/google/services/compute/resource_compute_firewall.go +++ b/google/services/compute/resource_compute_firewall.go @@ -159,6 +159,7 @@ func ResourceComputeFirewall() *schema.Resource { CustomizeDiff: customdiff.All( resourceComputeFirewallEnableLoggingCustomizeDiff, resourceComputeFirewallSourceFieldsCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -879,9 +880,9 @@ func resourceComputeFirewallDelete(d *schema.ResourceData, meta interface{}) err func resourceComputeFirewallImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/firewalls/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/firewalls/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_firewall_policy.go b/google/services/compute/resource_compute_firewall_policy.go index 55d00b7dac3..f65174eb735 100644 --- a/google/services/compute/resource_compute_firewall_policy.go +++ b/google/services/compute/resource_compute_firewall_policy.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceComputeFirewallPolicy() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "parent": { diff --git a/google/services/compute/resource_compute_firewall_policy_association.go b/google/services/compute/resource_compute_firewall_policy_association.go index 9bd518717c9..4f7772e0751 100644 --- a/google/services/compute/resource_compute_firewall_policy_association.go +++ b/google/services/compute/resource_compute_firewall_policy_association.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,9 @@ func ResourceComputeFirewallPolicyAssociation() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: 
map[string]*schema.Schema{ "attachment_target": { diff --git a/google/services/compute/resource_compute_firewall_policy_rule.go b/google/services/compute/resource_compute_firewall_policy_rule.go index f6dd7c060fa..93ff6e04be6 100644 --- a/google/services/compute/resource_compute_firewall_policy_rule.go +++ b/google/services/compute/resource_compute_firewall_policy_rule.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceComputeFirewallPolicyRule() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "action": { diff --git a/google/services/compute/resource_compute_forwarding_rule.go b/google/services/compute/resource_compute_forwarding_rule.go index d918127458a..b8e13530b81 100644 --- a/google/services/compute/resource_compute_forwarding_rule.go +++ b/google/services/compute/resource_compute_forwarding_rule.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceComputeForwardingRule() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -215,10 +221,14 @@ This can only be set to true for load balancers that have their 'loadBalancingScheme' set to 'INTERNAL'.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this forwarding rule. A list of key->value pairs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this forwarding rule. A list of key->value pairs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "load_balancing_scheme": { Type: schema.TypeString, @@ -440,6 +450,12 @@ For Private Service Connect forwarding rules that forward traffic to managed ser Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, @@ -463,6 +479,13 @@ internally during updates.`, This field is only used for INTERNAL load balancing.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -564,12 +587,6 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("allow_global_access"); ok || !reflect.DeepEqual(v, allowGlobalAccessProp) { obj["allowGlobalAccess"] = allowGlobalAccessProp } - labelsProp, err := expandComputeForwardingRuleLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeForwardingRuleLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err @@ -624,6 +641,12 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("ip_version"); !tpgresource.IsEmptyValue(reflect.ValueOf(ipVersionProp)) && (ok || !reflect.DeepEqual(v, ipVersionProp)) { obj["ipVersion"] = ipVersionProp } + labelsProp, err := expandComputeForwardingRuleEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } regionProp, err := expandComputeForwardingRuleRegion(d.Get("region"), d, config) if err != nil { return err @@ -682,7 +705,10 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error waiting to create ForwardingRule: %s", err) } - if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + labels := d.Get("labels") + terraformLables := d.Get("terraform_labels") + // Labels cannot be set in a create. We'll have to set them here. err = resourceComputeForwardingRuleRead(d, meta) if err != nil { @@ -690,8 +716,8 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ } obj := make(map[string]interface{}) - // d.Get("labels") will have been overridden by the Read call. 
- labelsProp, err := expandComputeForwardingRuleLabels(v, d, config) + // d.Get("effective_labels") will have been overridden by the Read call. + labelsProp, err := expandComputeForwardingRuleEffectiveLabels(v, d, config) if err != nil { return err } @@ -723,6 +749,20 @@ func resourceComputeForwardingRuleCreate(d *schema.ResourceData, meta interface{ return err } + // Set back the labels field, as it is needed to decide the value of "labels" in the state in the read function. + if err := d.Set("labels", labels); err != nil { + return fmt.Errorf("Error setting back labels: %s", err) + } + + // Set back the terraform_labels field, as it is needed to decide the value of "terraform_labels" in the state in the read function. + if err := d.Set("terraform_labels", terraformLables); err != nil { + return fmt.Errorf("Error setting back terraform_labels: %s", err) + } + + // Set back the effective_labels field, as it is needed to decide the value of "effective_labels" in the state in the read function. + if err := d.Set("effective_labels", v); err != nil { + return fmt.Errorf("Error setting back effective_labels: %s", err) + } } log.Printf("[DEBUG] Finished creating ForwardingRule %q: %#v", d.Id(), res) @@ -852,6 +892,12 @@ func resourceComputeForwardingRuleRead(d *schema.ResourceData, meta interface{}) if err := d.Set("ip_version", flattenComputeForwardingRuleIpVersion(res["ipVersion"], d, config)); err != nil { return fmt.Errorf("Error reading ForwardingRule: %s", err) } + if err := d.Set("terraform_labels", flattenComputeForwardingRuleTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ForwardingRule: %s", err) + } + if err := d.Set("effective_labels", flattenComputeForwardingRuleEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ForwardingRule: %s", err) + } if err := d.Set("region", flattenComputeForwardingRuleRegion(res["region"], d, config)); err != nil { return fmt.Errorf("Error reading ForwardingRule: %s", err) } @@ -965,21 +1011,21 @@ func resourceComputeForwardingRuleUpdate(d *schema.ResourceData, meta interface{ return err } } - if d.HasChange("labels") || d.HasChange("label_fingerprint") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) - labelsProp, err := expandComputeForwardingRuleLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeForwardingRuleLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := expandComputeForwardingRuleEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/regions/{{region}}/forwardingRules/{{name}}/setLabels") if err != nil { @@ -1143,10 +1189,10 @@ func resourceComputeForwardingRuleDelete(d *schema.ResourceData, meta interface{ func 
resourceComputeForwardingRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/forwardingRules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/forwardingRules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1239,7 +1285,18 @@ func flattenComputeForwardingRuleAllowGlobalAccess(v interface{}, d *schema.Reso } func flattenComputeForwardingRuleLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeForwardingRuleLabelFingerprint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1305,6 +1362,25 @@ func flattenComputeForwardingRuleIpVersion(v interface{}, d *schema.ResourceData return v } +func flattenComputeForwardingRuleTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeForwardingRuleEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenComputeForwardingRuleRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -1430,17 +1506,6 @@ func expandComputeForwardingRuleAllowGlobalAccess(v interface{}, d tpgresource.T return v, nil } -func expandComputeForwardingRuleLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandComputeForwardingRuleLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1510,6 +1575,17 @@ func expandComputeForwardingRuleIpVersion(v interface{}, d tpgresource.Terraform return v, nil } +func expandComputeForwardingRuleEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func expandComputeForwardingRuleRegion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { f, err := tpgresource.ParseGlobalFieldValue("regions", v.(string), "project", d, config, true) if err != nil { diff --git a/google/services/compute/resource_compute_forwarding_rule_generated_test.go b/google/services/compute/resource_compute_forwarding_rule_generated_test.go index d74b20afffb..e9a20bdb168 100644 --- 
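The import patterns above also gain `^`/`$` anchors, so a longer string such as a full self-link can no longer satisfy the shorter relative-ID patterns by substring match. A small stdlib sketch of the difference; the named capture groups (`(?P<project>...)` and friends) are assumed from the provider's usual import-pattern shape, and the exact `ParseImportId` behaviour is not reproduced here:

```go
// Minimal demonstration of why the import patterns gained ^ and $ anchors.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	id := "https://www.googleapis.com/compute/v1/projects/my-proj/regions/us-central1/forwardingRules/my-rule"

	unanchored := regexp.MustCompile(`projects/(?P<project>[^/]+)/regions/(?P<region>[^/]+)/forwardingRules/(?P<name>[^/]+)`)
	anchored := regexp.MustCompile(`^projects/(?P<project>[^/]+)/regions/(?P<region>[^/]+)/forwardingRules/(?P<name>[^/]+)$`)

	// The unanchored pattern happily matches a substring of a full self-link...
	fmt.Println(unanchored.MatchString(id)) // true
	// ...while the anchored one only accepts the exact relative form.
	fmt.Println(anchored.MatchString(id))                                                             // false
	fmt.Println(anchored.MatchString("projects/my-proj/regions/us-central1/forwardingRules/my-rule")) // true
}
```

Without the anchors, the two- and one-segment fallback patterns could also fire on longer inputs and capture the wrong fields.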
a/google/services/compute/resource_compute_forwarding_rule_generated_test.go +++ b/google/services/compute/resource_compute_forwarding_rule_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeForwardingRule_forwardingRuleGlobalInternallbExample(t *testi ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "labels", "terraform_labels"}, }, }, }) @@ -113,7 +113,7 @@ func TestAccComputeForwardingRule_forwardingRuleBasicExample(t *testing.T) { ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "labels", "terraform_labels"}, }, }, }) @@ -152,7 +152,7 @@ func TestAccComputeForwardingRule_forwardingRuleInternallbExample(t *testing.T) ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "labels", "terraform_labels"}, }, }, }) @@ -222,7 +222,7 @@ func TestAccComputeForwardingRule_forwardingRuleVpcPscExample(t *testing.T) { ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "ip_address"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "ip_address", "labels", "terraform_labels"}, }, }, }) @@ -346,7 +346,7 @@ func TestAccComputeForwardingRule_forwardingRuleVpcPscNoAutomateDnsExample(t *te ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "ip_address"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "ip_address", "labels", "terraform_labels"}, }, }, }) @@ -466,7 +466,7 @@ func TestAccComputeForwardingRule_forwardingRuleRegionalSteeringExample(t *testi ResourceName: "google_compute_forwarding_rule.steering", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "labels", "terraform_labels"}, }, }, }) @@ -524,7 +524,7 @@ func TestAccComputeForwardingRule_forwardingRuleInternallbIpv6Example(t *testing ResourceName: "google_compute_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", 
"target"}, + ImportStateVerifyIgnore: []string{"backend_service", "network", "subnetwork", "no_automate_dns_zone", "region", "port_range", "target", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_forwarding_rule_test.go b/google/services/compute/resource_compute_forwarding_rule_test.go index 18778125083..19c2e7e3c1d 100644 --- a/google/services/compute/resource_compute_forwarding_rule_test.go +++ b/google/services/compute/resource_compute_forwarding_rule_test.go @@ -25,17 +25,19 @@ func TestAccComputeForwardingRule_update(t *testing.T) { Config: testAccComputeForwardingRule_basic(poolName, ruleName), }, { - ResourceName: "google_compute_forwarding_rule.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_forwarding_rule.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeForwardingRule_update(poolName, ruleName), }, { - ResourceName: "google_compute_forwarding_rule.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_forwarding_rule.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_global_address.go b/google/services/compute/resource_compute_global_address.go index b4c1f613e3e..40028dd8120 100644 --- a/google/services/compute/resource_compute_global_address.go +++ b/google/services/compute/resource_compute_global_address.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceComputeGlobalAddress() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -386,9 +392,9 @@ func resourceComputeGlobalAddressDelete(d *schema.ResourceData, meta interface{} func resourceComputeGlobalAddressImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/addresses/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/addresses/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_global_address_generated_test.go b/google/services/compute/resource_compute_global_address_generated_test.go index e9d1bcbc1bc..d6ad1d538ab 100644 --- a/google/services/compute/resource_compute_global_address_generated_test.go +++ b/google/services/compute/resource_compute_global_address_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeGlobalAddress_globalAddressBasicExample(t *testing.T) { ResourceName: "google_compute_global_address.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"network"}, + ImportStateVerifyIgnore: []string{"network", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_global_forwarding_rule.go b/google/services/compute/resource_compute_global_forwarding_rule.go index 8fc9232f050..10c3188d70c 100644 
--- a/google/services/compute/resource_compute_global_forwarding_rule.go +++ b/google/services/compute/resource_compute_global_forwarding_rule.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceComputeGlobalForwardingRule() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -165,10 +171,14 @@ you create the resource.`, Description: `The IP Version that will be used by this global forwarding rule. Possible values: ["IPV4", "IPV6"]`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this forwarding rule. A list of key->value pairs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this forwarding rule. A list of key->value pairs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "load_balancing_scheme": { Type: schema.TypeString, @@ -324,6 +334,12 @@ mode or when creating external forwarding rule with IPv6.`, Computed: true, Description: `[Output Only] The URL for the corresponding base Forwarding Rule. By base Forwarding Rule, we mean the Forwarding Rule that has the same IP address, protocol, and port settings with the current Forwarding Rule, but without sourceIPRanges specified. Always empty if the current Forwarding Rule does not have sourceIPRanges specified.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, @@ -340,6 +356,13 @@ internally during updates.`, Computed: true, Description: `The PSC connection status of the PSC Forwarding Rule. 
Possible values: 'STATUS_UNSPECIFIED', 'PENDING', 'ACCEPTED', 'REJECTED', 'CLOSED'`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -387,12 +410,6 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("ip_version"); !tpgresource.IsEmptyValue(reflect.ValueOf(ipVersionProp)) && (ok || !reflect.DeepEqual(v, ipVersionProp)) { obj["ipVersion"] = ipVersionProp } - labelsProp, err := expandComputeGlobalForwardingRuleLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeGlobalForwardingRuleLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err @@ -453,6 +470,12 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("no_automate_dns_zone"); ok || !reflect.DeepEqual(v, noAutomateDnsZoneProp) { obj["noAutomateDnsZone"] = noAutomateDnsZoneProp } + labelsProp, err := expandComputeGlobalForwardingRuleEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/forwardingRules") if err != nil { @@ -505,7 +528,10 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte return fmt.Errorf("Error waiting to create GlobalForwardingRule: %s", err) } - if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + labels := d.Get("labels") + terraformLables := d.Get("terraform_labels") + // Labels cannot be set in a create. We'll have to set them here. err = resourceComputeGlobalForwardingRuleRead(d, meta) if err != nil { @@ -513,8 +539,8 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte } obj := make(map[string]interface{}) - // d.Get("labels") will have been overridden by the Read call. - labelsProp, err := expandComputeGlobalForwardingRuleLabels(v, d, config) + // d.Get("effective_labels") will have been overridden by the Read call. + labelsProp, err := expandComputeGlobalForwardingRuleEffectiveLabels(v, d, config) if err != nil { return err } @@ -546,6 +572,20 @@ func resourceComputeGlobalForwardingRuleCreate(d *schema.ResourceData, meta inte return err } + // Set back the labels field, as it is needed to decide the value of "labels" in the state in the read function. + if err := d.Set("labels", labels); err != nil { + return fmt.Errorf("Error setting back labels: %s", err) + } + + // Set back the terraform_labels field, as it is needed to decide the value of "terraform_labels" in the state in the read function. 
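The create path above inserts the rule without labels, applies `effective_labels` through a follow-up `setLabels` call, and then writes `labels`, `terraform_labels`, and `effective_labels` back into state so the next read can distinguish configured keys from provider defaults and server-added labels. A map-based sketch of that ordering, using placeholder names rather than the generated provider flow:

```go
// Sketch of the create-time label ordering with plain maps.
package main

import "fmt"

func merge(base, overlay map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overlay {
		out[k] = v
	}
	return out
}

func main() {
	configured := map[string]string{"team": "net"}       // resource block labels
	providerDefaults := map[string]string{"env": "prod"} // provider default_labels

	terraformLabels := merge(providerDefaults, configured) // terraform_labels
	effectiveLabels := merge(terraformLabels, nil)         // effective_labels at plan time (a copy)

	apiLabels := map[string]string{}              // 1. insert: the API ignores labels at create
	apiLabels = merge(apiLabels, effectiveLabels) // 2. setLabels: apply effective_labels afterwards

	// 3. set labels / terraform_labels / effective_labels back into state,
	//    otherwise the read would treat every API label as user-configured.
	fmt.Println("labels:", configured)
	fmt.Println("terraform_labels:", terraformLabels)
	fmt.Println("effective_labels:", apiLabels)
}
```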
+ if err := d.Set("terraform_labels", terraformLables); err != nil { + return fmt.Errorf("Error setting back terraform_labels: %s", err) + } + + // Set back the effective_labels field, as it is needed to decide the value of "effective_labels" in the state in the read function. + if err := d.Set("effective_labels", v); err != nil { + return fmt.Errorf("Error setting back effective_labels: %s", err) + } } log.Printf("[DEBUG] Finished creating GlobalForwardingRule %q: %#v", d.Id(), res) @@ -645,6 +685,12 @@ func resourceComputeGlobalForwardingRuleRead(d *schema.ResourceData, meta interf if err := d.Set("base_forwarding_rule", flattenComputeGlobalForwardingRuleBaseForwardingRule(res["baseForwardingRule"], d, config)); err != nil { return fmt.Errorf("Error reading GlobalForwardingRule: %s", err) } + if err := d.Set("terraform_labels", flattenComputeGlobalForwardingRuleTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading GlobalForwardingRule: %s", err) + } + if err := d.Set("effective_labels", flattenComputeGlobalForwardingRuleEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading GlobalForwardingRule: %s", err) + } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading GlobalForwardingRule: %s", err) } @@ -669,21 +715,21 @@ func resourceComputeGlobalForwardingRuleUpdate(d *schema.ResourceData, meta inte d.Partial(true) - if d.HasChange("labels") || d.HasChange("label_fingerprint") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) - labelsProp, err := expandComputeGlobalForwardingRuleLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeGlobalForwardingRuleLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := expandComputeGlobalForwardingRuleEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/forwardingRules/{{name}}/setLabels") if err != nil { @@ -824,9 +870,9 @@ func resourceComputeGlobalForwardingRuleDelete(d *schema.ResourceData, meta inte func resourceComputeGlobalForwardingRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/forwardingRules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/forwardingRules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -867,7 +913,18 @@ func flattenComputeGlobalForwardingRuleIpVersion(v interface{}, d *schema.Resour } func flattenComputeGlobalForwardingRuleLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) 
interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeGlobalForwardingRuleLabelFingerprint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -962,6 +1019,25 @@ func flattenComputeGlobalForwardingRuleBaseForwardingRule(v interface{}, d *sche return v } +func flattenComputeGlobalForwardingRuleTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeGlobalForwardingRuleEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandComputeGlobalForwardingRuleDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -978,17 +1054,6 @@ func expandComputeGlobalForwardingRuleIpVersion(v interface{}, d tpgresource.Ter return v, nil } -func expandComputeGlobalForwardingRuleLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandComputeGlobalForwardingRuleLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1102,3 +1167,14 @@ func expandComputeGlobalForwardingRuleSourceIpRanges(v interface{}, d tpgresourc func expandComputeGlobalForwardingRuleNoAutomateDnsZone(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandComputeGlobalForwardingRuleEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/compute/resource_compute_global_forwarding_rule_generated_test.go b/google/services/compute/resource_compute_global_forwarding_rule_generated_test.go index fa3bd6bcae5..083ca2f5d47 100644 --- a/google/services/compute/resource_compute_global_forwarding_rule_generated_test.go +++ b/google/services/compute/resource_compute_global_forwarding_rule_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeGlobalForwardingRule_globalForwardingRuleHttpExample(t *testi ResourceName: "google_compute_global_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target"}, + ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target", "labels", "terraform_labels"}, }, }, }) @@ -127,7 +127,7 @@ func TestAccComputeGlobalForwardingRule_globalForwardingRuleExternalManagedExamp ResourceName: "google_compute_global_forwarding_rule.default", 
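Because an imported resource carries no record of which labels were configured versus defaulted, the split fields cannot round-trip through import, so the verification steps now skip them. A sketch of such a step in an assumed test package; `resource.TestStep` and its fields are the same ones used by the generated tests in this diff:

```go
package compute_test

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// importStep returns an import verification step that skips the split label
// fields; the resource address is hypothetical.
func importStep() resource.TestStep {
	return resource.TestStep{
		ResourceName:      "google_compute_global_forwarding_rule.default",
		ImportState:       true,
		ImportStateVerify: true,
		// labels and terraform_labels cannot be reconstructed from the API
		// alone, so they are excluded from the post-import state comparison.
		ImportStateVerifyIgnore: []string{"labels", "terraform_labels"},
	}
}
```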
ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target"}, + ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target", "labels", "terraform_labels"}, }, }, }) @@ -198,7 +198,7 @@ func TestAccComputeGlobalForwardingRule_globalForwardingRuleHybridExample(t *tes ResourceName: "google_compute_global_forwarding_rule.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target"}, + ImportStateVerifyIgnore: []string{"network", "subnetwork", "no_automate_dns_zone", "port_range", "target", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_global_network_endpoint.go b/google/services/compute/resource_compute_global_network_endpoint.go index d2cfa40c71e..33f4b94118e 100644 --- a/google/services/compute/resource_compute_global_network_endpoint.go +++ b/google/services/compute/resource_compute_global_network_endpoint.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceComputeGlobalNetworkEndpoint() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "global_network_endpoint_group": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_global_network_endpoint_group.go b/google/services/compute/resource_compute_global_network_endpoint_group.go index c344761368b..32f3dde3c57 100644 --- a/google/services/compute/resource_compute_global_network_endpoint_group.go +++ b/google/services/compute/resource_compute_global_network_endpoint_group.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeGlobalNetworkEndpointGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -297,9 +302,9 @@ func resourceComputeGlobalNetworkEndpointGroupDelete(d *schema.ResourceData, met func resourceComputeGlobalNetworkEndpointGroupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/networkEndpointGroups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/networkEndpointGroups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_ha_vpn_gateway.go b/google/services/compute/resource_compute_ha_vpn_gateway.go index b61d31026aa..a0c0bcc5bbe 100644 --- a/google/services/compute/resource_compute_ha_vpn_gateway.go +++ b/google/services/compute/resource_compute_ha_vpn_gateway.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" 
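Several resources above add `tpgresource.DefaultProviderProject` to `CustomizeDiff`, which appears to surface the provider-level project at plan time when the resource omits it. Its implementation is not included in this excerpt; the following is only a sketch of the idea, with an assumed provider meta type, and the real helper may differ:

```go
package tpgsketch

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// providerConfig stands in for the provider's real meta type; only the
// Project field is assumed here.
type providerConfig struct {
	Project string
}

// defaultProviderProject sketches a CustomizeDiff in the spirit of
// tpgresource.DefaultProviderProject: when the resource omits `project`,
// surface the provider-level default during planning so it shows in the diff.
func defaultProviderProject(_ context.Context, d *schema.ResourceDiff, meta interface{}) error {
	if _, ok := d.GetOk("project"); ok {
		return nil // explicitly configured on the resource
	}
	cfg, ok := meta.(*providerConfig)
	if !ok || cfg.Project == "" {
		return nil
	}
	return d.SetNew("project", cfg.Project)
}
```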
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeHaVpnGateway() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -359,10 +364,10 @@ func resourceComputeHaVpnGatewayDelete(d *schema.ResourceData, meta interface{}) func resourceComputeHaVpnGatewayImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/vpnGateways/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/vpnGateways/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_health_check.go b/google/services/compute/resource_compute_health_check.go index 96ccb4a2db2..8cfce6c534e 100644 --- a/google/services/compute/resource_compute_health_check.go +++ b/google/services/compute/resource_compute_health_check.go @@ -136,6 +136,7 @@ func ResourceComputeHealthCheck() *schema.Resource { CustomizeDiff: customdiff.All( healthCheckCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1100,9 +1101,9 @@ func resourceComputeHealthCheckDelete(d *schema.ResourceData, meta interface{}) func resourceComputeHealthCheckImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/healthChecks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/healthChecks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_http_health_check.go b/google/services/compute/resource_compute_http_health_check.go index 98ecb4dd95b..2310c0f9528 100644 --- a/google/services/compute/resource_compute_http_health_check.go +++ b/google/services/compute/resource_compute_http_health_check.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeHttpHealthCheck() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -495,9 +500,9 @@ func resourceComputeHttpHealthCheckDelete(d *schema.ResourceData, meta interface func resourceComputeHttpHealthCheckImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/httpHealthChecks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/httpHealthChecks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_https_health_check.go 
b/google/services/compute/resource_compute_https_health_check.go index e3eeda14a5c..ee21ba2ac43 100644 --- a/google/services/compute/resource_compute_https_health_check.go +++ b/google/services/compute/resource_compute_https_health_check.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeHttpsHealthCheck() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -495,9 +500,9 @@ func resourceComputeHttpsHealthCheckDelete(d *schema.ResourceData, meta interfac func resourceComputeHttpsHealthCheckImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/httpsHealthChecks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/httpsHealthChecks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_image.go b/google/services/compute/resource_compute_image.go index da26bd119f8..c96cd3b9140 100644 --- a/google/services/compute/resource_compute_image.go +++ b/google/services/compute/resource_compute_image.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceComputeImage() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -127,10 +133,13 @@ account is used.`, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this Image.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this Image. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "licenses": { Type: schema.TypeList, @@ -240,12 +249,25 @@ bytes).`, Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, Description: `The fingerprint used for optimistic locking of this resource. 
Used internally during updates.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -319,12 +341,6 @@ func resourceComputeImageCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("image_encryption_key"); !tpgresource.IsEmptyValue(reflect.ValueOf(imageEncryptionKeyProp)) && (ok || !reflect.DeepEqual(v, imageEncryptionKeyProp)) { obj["imageEncryptionKey"] = imageEncryptionKeyProp } - labelsProp, err := expandComputeImageLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeImageLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err @@ -367,6 +383,12 @@ func resourceComputeImageCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("source_snapshot"); !tpgresource.IsEmptyValue(reflect.ValueOf(sourceSnapshotProp)) && (ok || !reflect.DeepEqual(v, sourceSnapshotProp)) { obj["sourceSnapshot"] = sourceSnapshotProp } + labelsProp, err := expandComputeImageEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/images") if err != nil { @@ -507,6 +529,12 @@ func resourceComputeImageRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("source_snapshot", flattenComputeImageSourceSnapshot(res["sourceSnapshot"], d, config)); err != nil { return fmt.Errorf("Error reading Image: %s", err) } + if err := d.Set("terraform_labels", flattenComputeImageTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Image: %s", err) + } + if err := d.Set("effective_labels", flattenComputeImageEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Image: %s", err) + } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading Image: %s", err) } @@ -531,21 +559,21 @@ func resourceComputeImageUpdate(d *schema.ResourceData, meta interface{}) error d.Partial(true) - if d.HasChange("labels") || d.HasChange("label_fingerprint") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) - labelsProp, err := expandComputeImageLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeImageLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := 
expandComputeImageEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/images/{{name}}/setLabels") if err != nil { @@ -641,9 +669,9 @@ func resourceComputeImageDelete(d *schema.ResourceData, meta interface{}) error func resourceComputeImageImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/images/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/images/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -758,7 +786,18 @@ func flattenComputeImageImageEncryptionKeyKmsKeyServiceAccount(v interface{}, d } func flattenComputeImageLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeImageLabelFingerprint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -797,6 +836,25 @@ func flattenComputeImageSourceSnapshot(v interface{}, d *schema.ResourceData, co return tpgresource.ConvertSelfLinkToV1(v.(string)) } +func flattenComputeImageTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeImageEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandComputeImageDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -874,17 +932,6 @@ func expandComputeImageImageEncryptionKeyKmsKeyServiceAccount(v interface{}, d t return v, nil } -func expandComputeImageLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandComputeImageLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -977,3 +1024,14 @@ func expandComputeImageSourceSnapshot(v interface{}, d tpgresource.TerraformReso } return f.RelativeLink(), nil } + +func expandComputeImageEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/compute/resource_compute_image_generated_test.go 
b/google/services/compute/resource_compute_image_generated_test.go index 4a2eeb82868..2356a8ce1cf 100644 --- a/google/services/compute/resource_compute_image_generated_test.go +++ b/google/services/compute/resource_compute_image_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeImage_imageBasicExample(t *testing.T) { ResourceName: "google_compute_image.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot"}, + ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot", "labels", "terraform_labels"}, }, }, }) @@ -86,7 +86,7 @@ func TestAccComputeImage_imageGuestOsExample(t *testing.T) { ResourceName: "google_compute_image.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot"}, + ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot", "labels", "terraform_labels"}, }, }, }) @@ -131,7 +131,7 @@ func TestAccComputeImage_imageBasicStorageLocationExample(t *testing.T) { ResourceName: "google_compute_image.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot"}, + ImportStateVerifyIgnore: []string{"raw_disk", "source_disk", "source_image", "source_snapshot", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_image_test.go b/google/services/compute/resource_compute_image_test.go index 1359c1c921a..f4c959e1622 100644 --- a/google/services/compute/resource_compute_image_test.go +++ b/google/services/compute/resource_compute_image_test.go @@ -28,9 +28,10 @@ func TestAccComputeImage_withLicense(t *testing.T) { Config: testAccComputeImage_license("image-test-" + acctest.RandString(t, 10)), }, { - ResourceName: "google_compute_image.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_image.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -54,7 +55,6 @@ func TestAccComputeImage_update(t *testing.T) { testAccCheckComputeImageExists( t, "google_compute_image.foobar", &image), testAccCheckComputeImageContainsLabel(&image, "my-label", "my-label-value"), - testAccCheckComputeImageContainsLabel(&image, "empty-label", ""), ), }, { @@ -71,7 +71,7 @@ func TestAccComputeImage_update(t *testing.T) { ResourceName: "google_compute_image.foobar", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"raw_disk"}, + ImportStateVerifyIgnore: []string{"raw_disk", "labels", "terraform_labels"}, }, }, }) @@ -367,7 +367,6 @@ resource "google_compute_image" "foobar" { } labels = { my-label = "my-label-value" - empty-label = "" } } `, name) @@ -393,7 +392,6 @@ resource "google_compute_image" "foobar" { labels = { my-label = "my-label-value" - empty-label = "" } licenses = [ "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/licenses/debian-11-bullseye", diff --git a/google/services/compute/resource_compute_instance.go b/google/services/compute/resource_compute_instance.go index ca90169bfa2..96a8dc5a159 100644 --- a/google/services/compute/resource_compute_instance.go +++ b/google/services/compute/resource_compute_instance.go @@ -603,10 +603,27 @@ func ResourceComputeInstance() *schema.Resource { }, "labels": { + Type: schema.TypeMap, + Optional: true, + 
Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs assigned to the instance. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A set of key/value label pairs assigned to the instance.`, }, "metadata": { @@ -1007,6 +1024,8 @@ be from 0 to 999,999,999 inclusive.`, }, }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, customdiff.If( func(_ context.Context, d *schema.ResourceDiff, meta interface{}) bool { return d.HasChange("guest_accelerator") @@ -1015,6 +1034,7 @@ be from 0 to 999,999,999 inclusive.`, ), desiredStatusDiff, forceNewIfNetworkIPNotUpdatable, + tpgresource.SetLabelsDiff, ), UseJSONNumber: true, } @@ -1148,7 +1168,7 @@ func expandComputeInstance(project string, d *schema.ResourceData, config *trans NetworkPerformanceConfig: networkPerformanceConfig, Tags: resourceInstanceTags(d), Params: params, - Labels: tpgresource.ExpandLabels(d), + Labels: tpgresource.ExpandEffectiveLabels(d), ServiceAccounts: expandServiceAccounts(d.Get("service_account").([]interface{})), GuestAccelerators: accels, MinCpuPlatform: d.Get("min_cpu_platform").(string), @@ -1348,7 +1368,15 @@ func resourceComputeInstanceRead(d *schema.ResourceData, meta interface{}) error } } - if err := d.Set("labels", instance.Labels); err != nil { + if err := tpgresource.SetLabels(instance.Labels, d, "labels"); err != nil { + return err + } + + if err := tpgresource.SetLabels(instance.Labels, d, "terraform_labels"); err != nil { + return err + } + + if err := d.Set("effective_labels", instance.Labels); err != nil { return err } @@ -1617,8 +1645,8 @@ func resourceComputeInstanceUpdate(d *schema.ResourceData, meta interface{}) err } } - if d.HasChange("labels") { - labels := tpgresource.ExpandLabels(d) + if d.HasChange("effective_labels") { + labels := tpgresource.ExpandEffectiveLabels(d) labelFingerprint := d.Get("label_fingerprint").(string) req := compute.InstancesSetLabelsRequest{Labels: labels, LabelFingerprint: labelFingerprint} diff --git a/google/services/compute/resource_compute_instance_from_template.go b/google/services/compute/resource_compute_instance_from_template.go index c0fe8952329..bcf293fa016 100644 --- a/google/services/compute/resource_compute_instance_from_template.go +++ b/google/services/compute/resource_compute_instance_from_template.go @@ -8,6 +8,7 @@ import ( "log" "strings" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -28,8 +29,11 @@ func ResourceComputeInstanceFromTemplate() *schema.Resource { Timeouts: ResourceComputeInstance().Timeouts, - Schema: computeInstanceFromTemplateSchema(), - CustomizeDiff: ResourceComputeInstance().CustomizeDiff, + 
Schema: computeInstanceFromTemplateSchema(), + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ResourceComputeInstance().CustomizeDiff, + ), UseJSONNumber: true, } } diff --git a/google/services/compute/resource_compute_instance_group.go b/google/services/compute/resource_compute_instance_group.go index 80f09d825f9..52363a24a3f 100644 --- a/google/services/compute/resource_compute_instance_group.go +++ b/google/services/compute/resource_compute_instance_group.go @@ -13,6 +13,7 @@ import ( "google.golang.org/api/googleapi" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/compute/v1" @@ -34,6 +35,11 @@ func ResourceComputeInstanceGroup() *schema.Resource { Delete: schema.DefaultTimeout(6 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + SchemaVersion: 2, MigrateState: resourceComputeInstanceGroupMigrateState, diff --git a/google/services/compute/resource_compute_instance_group_manager.go b/google/services/compute/resource_compute_instance_group_manager.go index ab9675e571d..0b69fb84aaf 100644 --- a/google/services/compute/resource_compute_instance_group_manager.go +++ b/google/services/compute/resource_compute_instance_group_manager.go @@ -9,6 +9,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -33,6 +34,10 @@ func ResourceComputeInstanceGroupManager() *schema.Resource { Update: schema.DefaultTimeout(15 * time.Minute), Delete: schema.DefaultTimeout(15 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), Schema: map[string]*schema.Schema{ "base_instance_name": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_instance_group_named_port.go b/google/services/compute/resource_compute_instance_group_named_port.go index 8852fd0b5d2..6662c87841d 100644 --- a/google/services/compute/resource_compute_instance_group_named_port.go +++ b/google/services/compute/resource_compute_instance_group_named_port.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,11 @@ func ResourceComputeInstanceGroupNamedPort() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "group": { Type: schema.TypeString, @@ -228,6 +234,14 @@ func resourceComputeInstanceGroupNamedPortRead(d *schema.ResourceData, meta inte return fmt.Errorf("Error reading InstanceGroupNamedPort: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading InstanceGroupNamedPort: %s", err) + } + if err := d.Set("name", flattenNestedComputeInstanceGroupNamedPortName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading InstanceGroupNamedPort: %s", err) } @@ -306,10 +320,10 @@ func 
resourceComputeInstanceGroupNamedPortDelete(d *schema.ResourceData, meta in func resourceComputeInstanceGroupNamedPortImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/instanceGroups/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/instanceGroups/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_instance_group_named_port_generated_test.go b/google/services/compute/resource_compute_instance_group_named_port_generated_test.go index 4443e96b476..9ce4d241dce 100644 --- a/google/services/compute/resource_compute_instance_group_named_port_generated_test.go +++ b/google/services/compute/resource_compute_instance_group_named_port_generated_test.go @@ -35,7 +35,8 @@ func TestAccComputeInstanceGroupNamedPort_instanceGroupNamedPortGkeExample(t *te t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -98,6 +99,7 @@ resource "google_container_cluster" "my_cluster" { cluster_ipv4_cidr_block = "/19" services_ipv4_cidr_block = "/22" } + deletion_protection = "%{deletion_protection}" } `, context) } diff --git a/google/services/compute/resource_compute_instance_template.go b/google/services/compute/resource_compute_instance_template.go index c030a53e9a4..7dabb0d0210 100644 --- a/google/services/compute/resource_compute_instance_template.go +++ b/google/services/compute/resource_compute_instance_template.go @@ -54,9 +54,11 @@ func ResourceComputeInstanceTemplate() *schema.Resource { }, SchemaVersion: 1, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, resourceComputeInstanceTemplateSourceImageCustomizeDiff, resourceComputeInstanceTemplateScratchDiskCustomizeDiff, resourceComputeInstanceTemplateBootDiskCustomizeDiff, + tpgresource.SetLabelsDiff, ), MigrateState: resourceComputeInstanceTemplateMigrateState, @@ -853,12 +855,32 @@ be from 0 to 999,999,999 inclusive.`, }, "labels": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Description: `A set of key/value label pairs to assign to instances created from this template. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, - ForceNew: true, + Computed: true, + Set: schema.HashString, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, Set: schema.HashString, - Description: `A set of key/value label pairs to assign to instances created from this template,`, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "resource_policies": { @@ -1243,8 +1265,8 @@ func resourceComputeInstanceTemplateCreate(d *schema.ResourceData, meta interfac ReservationAffinity: reservationAffinity, } - if _, ok := d.GetOk("labels"); ok { - instanceProperties.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + instanceProperties.Labels = tpgresource.ExpandEffectiveLabels(d) } var itName string @@ -1567,10 +1589,16 @@ func resourceComputeInstanceTemplateRead(d *schema.ResourceData, meta interface{ } } if instanceTemplate.Properties.Labels != nil { - if err := d.Set("labels", instanceTemplate.Properties.Labels); err != nil { + if err := tpgresource.SetLabels(instanceTemplate.Properties.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } } + if err := tpgresource.SetLabels(instanceTemplate.Properties.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", instanceTemplate.Properties.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err = d.Set("self_link", instanceTemplate.SelfLink); err != nil { return fmt.Errorf("Error setting self_link: %s", err) } diff --git a/google/services/compute/resource_compute_instance_template_test.go b/google/services/compute/resource_compute_instance_template_test.go index b641f023b4b..9f5e0bb982a 100644 --- a/google/services/compute/resource_compute_instance_template_test.go +++ b/google/services/compute/resource_compute_instance_template_test.go @@ -44,9 +44,10 @@ func TestAccComputeInstanceTemplate_basic(t *testing.T) { ), }, { - ResourceName: "google_compute_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -70,9 +71,10 @@ func TestAccComputeInstanceTemplate_imageShorthand(t *testing.T) { ), }, { - ResourceName: "google_compute_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -565,9 +567,10 @@ func TestAccComputeInstanceTemplate_EncryptKMS(t *testing.T) { ), }, { - ResourceName: "google_compute_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ 
-874,9 +877,10 @@ func TestAccComputeInstanceTemplate_diskResourcePolicies(t *testing.T) { ), }, { - ResourceName: "google_compute_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -951,9 +955,10 @@ func TestAccComputeInstanceTemplate_managedEnvoy(t *testing.T) { ), }, { - ResourceName: "google_compute_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -1545,7 +1550,6 @@ resource "google_compute_image" "foobar" { } labels = { my-label = "my-label-value" - empty-label = "" } timeouts { create = "5m" diff --git a/google/services/compute/resource_compute_instance_test.go b/google/services/compute/resource_compute_instance_test.go index 4bfcd2953de..fac4304544e 100644 --- a/google/services/compute/resource_compute_instance_test.go +++ b/google/services/compute/resource_compute_instance_test.go @@ -136,7 +136,7 @@ func TestAccComputeInstance_basic1(t *testing.T) { testAccCheckComputeInstanceHasConfiguredDeletionProtection(&instance, false), ), }, - computeInstanceImportStep("us-central1-a", instanceName, []string{"metadata.baz", "metadata.foo", "desired_status", "current_status"}), + computeInstanceImportStep("us-central1-a", instanceName, []string{"metadata.baz", "metadata.foo", "desired_status", "current_status", "labels", "terraform_labels"}), }, }) } @@ -1423,7 +1423,7 @@ func TestAccComputeInstance_forceChangeMachineTypeManually(t *testing.T) { ), ExpectNonEmptyPlan: true, }, - computeInstanceImportStep("us-central1-a", instanceName, []string{"metadata.baz", "metadata.foo", "desired_status", "current_status"}), + computeInstanceImportStep("us-central1-a", instanceName, []string{"metadata.baz", "metadata.foo", "desired_status", "current_status", "labels", "terraform_labels"}), }, }) } @@ -5905,7 +5905,7 @@ resource "google_compute_node_group" "nodes" { name = "%s" zone = "us-central1-a" - size = 1 + initial_size = 1 node_template = google_compute_node_template.nodetmpl.self_link } `, instance, nodeTemplate, nodeGroup) @@ -5974,7 +5974,7 @@ resource "google_compute_node_group" "nodes" { name = "%s" zone = "us-central1-a" - size = 1 + initial_size = 1 node_template = google_compute_node_template.nodetmpl.self_link } `, instance, nodeTemplate, nodeGroup) @@ -6043,7 +6043,7 @@ resource "google_compute_node_group" "nodes" { name = "%s" zone = "us-central1-a" - size = 1 + initial_size = 1 node_template = google_compute_node_template.nodetmpl.self_link } `, instance, nodeTemplate, nodeGroup) @@ -6106,7 +6106,7 @@ resource "google_compute_node_group" "nodes" { name = "%s" zone = "us-central1-a" - size = 1 + initial_size = 1 node_template = google_compute_node_template.nodetmpl.self_link } `, instance, nodeTemplate, nodeGroup) diff --git a/google/services/compute/resource_compute_interconnect_attachment.go b/google/services/compute/resource_compute_interconnect_attachment.go index 54477f31ea3..a598554f228 100644 --- a/google/services/compute/resource_compute_interconnect_attachment.go +++ b/google/services/compute/resource_compute_interconnect_attachment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" 
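// ---------------------------------------------------------------------------
// Editorial aside (not part of the diff): the recurring change in this patch is
// composing provider-level defaults (project, zone, region) into each resource's
// CustomizeDiff via customdiff.All. A minimal, self-contained sketch of that
// pattern follows; providerConfig and defaultProviderProject are illustrative
// stand-ins, not the provider's actual tpgresource helpers.
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// providerConfig stands in for whatever configuration object the provider
// passes as meta; only the default project matters for this sketch.
type providerConfig struct{ Project string }

// defaultProviderProject copies the provider-level project into the planned
// diff when the resource configuration omits it, so the value shows up at
// plan time rather than only after apply.
func defaultProviderProject(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	cfg, ok := meta.(*providerConfig)
	if !ok || cfg.Project == "" {
		return nil
	}
	if v, ok := d.GetOk("project"); !ok || v.(string) == "" {
		return d.SetNew("project", cfg.Project)
	}
	return nil
}

// exampleResource shows how several defaulting functions can be chained;
// customdiff.All simply runs each CustomizeDiffFunc in order.
func exampleResource() *schema.Resource {
	return &schema.Resource{
		CustomizeDiff: customdiff.All(
			defaultProviderProject,
			// ...additional CustomizeDiffFuncs, e.g. a zone or region default...
		),
		Schema: map[string]*schema.Schema{
			"project": {Type: schema.TypeString, Optional: true, Computed: true},
		},
	}
}
// ---------------------------------------------------------------------------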
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -67,6 +68,10 @@ func ResourceComputeInterconnectAttachment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -706,10 +711,10 @@ func resourceComputeInterconnectAttachmentDelete(d *schema.ResourceData, meta in func resourceComputeInterconnectAttachmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/interconnectAttachments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/interconnectAttachments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_managed_ssl_certificate.go b/google/services/compute/resource_compute_managed_ssl_certificate.go index f9e8256a444..586927cea66 100644 --- a/google/services/compute/resource_compute_managed_ssl_certificate.go +++ b/google/services/compute/resource_compute_managed_ssl_certificate.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeManagedSslCertificate() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "description": { Type: schema.TypeString, @@ -353,9 +358,9 @@ func resourceComputeManagedSslCertificateDelete(d *schema.ResourceData, meta int func resourceComputeManagedSslCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/sslCertificates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/sslCertificates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_network.go b/google/services/compute/resource_compute_network.go index f3f2efcf7b6..7b37692115e 100644 --- a/google/services/compute/resource_compute_network.go +++ b/google/services/compute/resource_compute_network.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -49,6 +50,10 @@ func ResourceComputeNetwork() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -520,9 +525,9 @@ func resourceComputeNetworkDelete(d *schema.ResourceData, meta interface{}) erro func resourceComputeNetworkImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := 
tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/networks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/networks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_network_endpoint.go b/google/services/compute/resource_compute_network_endpoint.go index f1c6abae7d2..07fab331bd4 100644 --- a/google/services/compute/resource_compute_network_endpoint.go +++ b/google/services/compute/resource_compute_network_endpoint.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,11 @@ func ResourceComputeNetworkEndpoint() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "ip_address": { Type: schema.TypeString, @@ -253,6 +259,14 @@ func resourceComputeNetworkEndpointRead(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error reading NetworkEndpoint: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading NetworkEndpoint: %s", err) + } + if err := d.Set("instance", flattenNestedComputeNetworkEndpointInstance(res["instance"], d, config)); err != nil { return fmt.Errorf("Error reading NetworkEndpoint: %s", err) } diff --git a/google/services/compute/resource_compute_network_endpoint_group.go b/google/services/compute/resource_compute_network_endpoint_group.go index 1d8b9ca1f32..2576a585d43 100644 --- a/google/services/compute/resource_compute_network_endpoint_group.go +++ b/google/services/compute/resource_compute_network_endpoint_group.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceComputeNetworkEndpointGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -277,6 +283,14 @@ func resourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta interf return fmt.Errorf("Error reading NetworkEndpointGroup: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading NetworkEndpointGroup: %s", err) + } + if err := d.Set("name", flattenComputeNetworkEndpointGroupName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading NetworkEndpointGroup: %s", err) } @@ -298,9 +312,6 @@ func resourceComputeNetworkEndpointGroupRead(d *schema.ResourceData, meta interf if err := d.Set("default_port", flattenComputeNetworkEndpointGroupDefaultPort(res["defaultPort"], d, config)); err != nil { return fmt.Errorf("Error reading NetworkEndpointGroup: %s", err) } - if err := d.Set("zone", flattenComputeNetworkEndpointGroupZone(res["zone"], d, config)); err != nil { - return fmt.Errorf("Error reading NetworkEndpointGroup: %s", 
err) - } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading NetworkEndpointGroup: %s", err) } @@ -364,10 +375,10 @@ func resourceComputeNetworkEndpointGroupDelete(d *schema.ResourceData, meta inte func resourceComputeNetworkEndpointGroupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -442,13 +453,6 @@ func flattenComputeNetworkEndpointGroupDefaultPort(v interface{}, d *schema.Reso return v // let terraform core handle it otherwise } -func flattenComputeNetworkEndpointGroupZone(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - return tpgresource.ConvertSelfLinkToV1(v.(string)) -} - func expandComputeNetworkEndpointGroupName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/compute/resource_compute_network_endpoints.go b/google/services/compute/resource_compute_network_endpoints.go index f49eda68e4d..7f9789ef24c 100644 --- a/google/services/compute/resource_compute_network_endpoints.go +++ b/google/services/compute/resource_compute_network_endpoints.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -146,6 +147,11 @@ func ResourceComputeNetworkEndpoints() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "network_endpoint_group": { Type: schema.TypeString, @@ -349,6 +355,14 @@ func resourceComputeNetworkEndpointsRead(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error reading NetworkEndpoints: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading NetworkEndpoints: %s", err) + } + if err := d.Set("network_endpoints", flattenComputeNetworkEndpointsNetworkEndpoints(res["networkEndpoints"], d, config)); err != nil { return fmt.Errorf("Error reading NetworkEndpoints: %s", err) } @@ -595,10 +609,10 @@ func resourceComputeNetworkEndpointsDelete(d *schema.ResourceData, meta interfac func resourceComputeNetworkEndpointsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_network_firewall_policy.go 
b/google/services/compute/resource_compute_network_firewall_policy.go index b1b98974894..c96d6c7f07e 100644 --- a/google/services/compute/resource_compute_network_firewall_policy.go +++ b/google/services/compute/resource_compute_network_firewall_policy.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceComputeNetworkFirewallPolicy() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "name": { diff --git a/google/services/compute/resource_compute_network_firewall_policy_association.go b/google/services/compute/resource_compute_network_firewall_policy_association.go index 41a81c8420d..083d3e6a996 100644 --- a/google/services/compute/resource_compute_network_firewall_policy_association.go +++ b/google/services/compute/resource_compute_network_firewall_policy_association.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,10 @@ func ResourceComputeNetworkFirewallPolicyAssociation() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "attachment_target": { diff --git a/google/services/compute/resource_compute_network_firewall_policy_rule.go b/google/services/compute/resource_compute_network_firewall_policy_rule.go index a4b4ead6785..da22f393841 100644 --- a/google/services/compute/resource_compute_network_firewall_policy_rule.go +++ b/google/services/compute/resource_compute_network_firewall_policy_rule.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceComputeNetworkFirewallPolicyRule() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "action": { diff --git a/google/services/compute/resource_compute_network_peering_routes_config.go b/google/services/compute/resource_compute_network_peering_routes_config.go index 5b140770573..6715285321d 100644 --- a/google/services/compute/resource_compute_network_peering_routes_config.go +++ b/google/services/compute/resource_compute_network_peering_routes_config.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeNetworkPeeringRoutesConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: 
customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "export_custom_routes": { Type: schema.TypeBool, @@ -335,9 +340,9 @@ func resourceComputeNetworkPeeringRoutesConfigDelete(d *schema.ResourceData, met func resourceComputeNetworkPeeringRoutesConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/networks/(?P[^/]+)/networkPeerings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/global/networks/(?P[^/]+)/networkPeerings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_network_peering_routes_config_generated_test.go b/google/services/compute/resource_compute_network_peering_routes_config_generated_test.go index cdfc5b18a7f..82ac6686f96 100644 --- a/google/services/compute/resource_compute_network_peering_routes_config_generated_test.go +++ b/google/services/compute/resource_compute_network_peering_routes_config_generated_test.go @@ -90,7 +90,8 @@ func TestAccComputeNetworkPeeringRoutesConfig_networkPeeringRoutesConfigGkeExamp t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -163,6 +164,7 @@ resource "google_container_cluster" "private_cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = "%{deletion_protection}" } `, context) } diff --git a/google/services/compute/resource_compute_node_group.go b/google/services/compute/resource_compute_node_group.go index e795b4a34f5..3d45a858c9a 100644 --- a/google/services/compute/resource_compute_node_group.go +++ b/google/services/compute/resource_compute_node_group.go @@ -18,12 +18,15 @@ package compute import ( + "errors" "fmt" "log" "reflect" "regexp" + "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +51,10 @@ func ResourceComputeNodeGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "node_template": { Type: schema.TypeString, @@ -59,9 +66,10 @@ func ResourceComputeNodeGroup() *schema.Resource { Type: schema.TypeList, Computed: true, Optional: true, - ForceNew: true, Description: `If you use sole-tenant nodes for your workloads, you can use the node -group autoscaler to automatically manage the sizes of your node groups.`, +group autoscaler to automatically manage the sizes of your node groups. + +One of 'initial_size' or 'autoscaling_policy' must be configured on resource creation.`, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -69,7 +77,6 @@ group autoscaler to automatically manage the sizes of your node groups.`, Type: schema.TypeInt, Computed: true, Optional: true, - ForceNew: true, Description: `Maximum size of the node group. 
Set to a value less than or equal to 100 and greater than or equal to min-nodes.`, }, @@ -77,7 +84,6 @@ to 100 and greater than or equal to min-nodes.`, Type: schema.TypeString, Computed: true, Optional: true, - ForceNew: true, ValidateFunc: verify.ValidateEnum([]string{"OFF", "ON", "ONLY_SCALE_OUT"}), Description: `The autoscaling mode. Set to one of the following: - OFF: Disables the autoscaler. @@ -90,7 +96,6 @@ to 100 and greater than or equal to min-nodes.`, Type: schema.TypeInt, Computed: true, Optional: true, - ForceNew: true, Description: `Minimum size of the node group. Must be less than or equal to max-nodes. The default value is 0.`, }, @@ -100,27 +105,22 @@ than or equal to max-nodes. The default value is 0.`, "description": { Type: schema.TypeString, Optional: true, - ForceNew: true, Description: `An optional textual description of the resource.`, }, "initial_size": { - Type: schema.TypeInt, - Optional: true, - ForceNew: true, - Description: `The initial number of nodes in the node group. One of 'initial_size' or 'size' must be specified.`, - ExactlyOneOf: []string{"size", "initial_size"}, + Type: schema.TypeInt, + Optional: true, + Description: `The initial number of nodes in the node group. One of 'initial_size' or 'autoscaling_policy' must be configured on resource creation.`, }, "maintenance_policy": { Type: schema.TypeString, Optional: true, - ForceNew: true, Description: `Specifies how to handle instances when a node in the group undergoes maintenance. Set to one of: DEFAULT, RESTART_IN_PLACE, or MIGRATE_WITHIN_NODE_GROUP. The default value is DEFAULT.`, Default: "DEFAULT", }, "maintenance_window": { Type: schema.TypeList, Optional: true, - ForceNew: true, Description: `contains properties for the timeframe of maintenance`, MaxItems: 1, Elem: &schema.Resource{ @@ -128,7 +128,6 @@ than or equal to max-nodes. The default value is 0.`, "start_time": { Type: schema.TypeString, Required: true, - ForceNew: true, Description: `instances.start time of the window. This must be in UTC format that resolves to one of 00:00, 04:00, 08:00, 12:00, 16:00, or 20:00. For example, both 13:00-5 and 08:00 are valid.`, }, }, @@ -137,14 +136,12 @@ than or equal to max-nodes. The default value is 0.`, "name": { Type: schema.TypeString, Optional: true, - ForceNew: true, Description: `Name of the resource.`, }, "share_settings": { Type: schema.TypeList, Computed: true, Optional: true, - ForceNew: true, Description: `Share settings for the node group.`, MaxItems: 1, Elem: &schema.Resource{ @@ -152,26 +149,22 @@ than or equal to max-nodes. The default value is 0.`, "share_type": { Type: schema.TypeString, Required: true, - ForceNew: true, ValidateFunc: verify.ValidateEnum([]string{"ORGANIZATION", "SPECIFIC_PROJECTS", "LOCAL"}), Description: `Node group sharing type. Possible values: ["ORGANIZATION", "SPECIFIC_PROJECTS", "LOCAL"]`, }, "project_map": { Type: schema.TypeSet, Optional: true, - ForceNew: true, Description: `A map of project id and project config. This is only valid when shareType's value is SPECIFIC_PROJECTS.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "id": { Type: schema.TypeString, Required: true, - ForceNew: true, }, "project_id": { Type: schema.TypeString, Required: true, - ForceNew: true, Description: `The project id/number should be the same as the key of this project config in the project map.`, }, }, @@ -180,19 +173,10 @@ than or equal to max-nodes. 
The default value is 0.`, }, }, }, - "size": { - Type: schema.TypeInt, - Computed: true, - Optional: true, - ForceNew: true, - Description: `The total number of nodes in the node group. One of 'initial_size' or 'size' must be specified.`, - ExactlyOneOf: []string{"size", "initial_size"}, - }, "zone": { Type: schema.TypeString, Computed: true, Optional: true, - ForceNew: true, DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, Description: `Zone where this node group is located`, }, @@ -201,6 +185,11 @@ than or equal to max-nodes. The default value is 0.`, Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "size": { + Type: schema.TypeInt, + Computed: true, + Description: `The total number of nodes in the node group.`, + }, "project": { Type: schema.TypeString, Optional: true, @@ -242,12 +231,6 @@ func resourceComputeNodeGroupCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("node_template"); !tpgresource.IsEmptyValue(reflect.ValueOf(nodeTemplateProp)) && (ok || !reflect.DeepEqual(v, nodeTemplateProp)) { obj["nodeTemplate"] = nodeTemplateProp } - sizeProp, err := expandComputeNodeGroupSize(d.Get("size"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("size"); ok || !reflect.DeepEqual(v, sizeProp) { - obj["size"] = sizeProp - } maintenancePolicyProp, err := expandComputeNodeGroupMaintenancePolicy(d.Get("maintenance_policy"), d, config) if err != nil { return err @@ -299,10 +282,14 @@ func resourceComputeNodeGroupCreate(d *schema.ResourceData, meta interface{}) er } var sizeParam string - if v, ok := d.GetOkExists("size"); ok { - sizeParam = fmt.Sprintf("%v", v) - } else if v, ok := d.GetOkExists("initial_size"); ok { + if v, ok := d.GetOkExists("initial_size"); ok { sizeParam = fmt.Sprintf("%v", v) + } else { + if _, ok := d.GetOkExists("autoscaling_policy"); ok { + sizeParam = fmt.Sprintf("%v", d.Get("autoscaling_policy.min_nodes")) + } else { + return errors.New("An initial_size or autoscaling_policy must be configured on node group creation.") + } } url = regexp.MustCompile("PRE_CREATE_REPLACE_ME").ReplaceAllLiteralString(url, sizeParam) @@ -433,6 +420,123 @@ func resourceComputeNodeGroupUpdate(d *schema.ResourceData, meta interface{}) er } billingProject = project + obj := make(map[string]interface{}) + descriptionProp, err := expandComputeNodeGroupDescription(d.Get("description"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { + obj["description"] = descriptionProp + } + nameProp, err := expandComputeNodeGroupName(d.Get("name"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, nameProp)) { + obj["name"] = nameProp + } + maintenancePolicyProp, err := expandComputeNodeGroupMaintenancePolicy(d.Get("maintenance_policy"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("maintenance_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, maintenancePolicyProp)) { + obj["maintenancePolicy"] = maintenancePolicyProp + } + maintenanceWindowProp, err := expandComputeNodeGroupMaintenanceWindow(d.Get("maintenance_window"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("maintenance_window"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || 
!reflect.DeepEqual(v, maintenanceWindowProp)) { + obj["maintenanceWindow"] = maintenanceWindowProp + } + autoscalingPolicyProp, err := expandComputeNodeGroupAutoscalingPolicy(d.Get("autoscaling_policy"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("autoscaling_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, autoscalingPolicyProp)) { + obj["autoscalingPolicy"] = autoscalingPolicyProp + } + shareSettingsProp, err := expandComputeNodeGroupShareSettings(d.Get("share_settings"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("share_settings"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, shareSettingsProp)) { + obj["shareSettings"] = shareSettingsProp + } + zoneProp, err := expandComputeNodeGroupZone(d.Get("zone"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("zone"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, zoneProp)) { + obj["zone"] = zoneProp + } + + url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/zones/{{zone}}/nodeGroups/{{name}}") + if err != nil { + return err + } + + log.Printf("[DEBUG] Updating NodeGroup %q: %#v", d.Id(), obj) + updateMask := []string{} + + if d.HasChange("description") { + updateMask = append(updateMask, "description") + } + + if d.HasChange("name") { + updateMask = append(updateMask, "name") + } + + if d.HasChange("maintenance_policy") { + updateMask = append(updateMask, "maintenancePolicy") + } + + if d.HasChange("maintenance_window") { + updateMask = append(updateMask, "maintenanceWindow") + } + + if d.HasChange("autoscaling_policy") { + updateMask = append(updateMask, "autoscalingPolicy") + } + + if d.HasChange("share_settings") { + updateMask = append(updateMask, "shareSettings") + } + + if d.HasChange("zone") { + updateMask = append(updateMask, "zone") + } + // updateMask is a URL parameter but not present in the schema, so ReplaceVars + // won't set it + url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) + if err != nil { + return err + } + + // err == nil indicates that the billing_project value was found + if bp, err := tpgresource.GetBillingProject(d, config); err == nil { + billingProject = bp + } + + // if updateMask is empty we are not updating anything so skip the post + if len(updateMask) > 0 { + res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ + Config: config, + Method: "PATCH", + Project: billingProject, + RawURL: url, + UserAgent: userAgent, + Body: obj, + Timeout: d.Timeout(schema.TimeoutUpdate), + }) + + if err != nil { + return fmt.Errorf("Error updating NodeGroup %q: %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Finished updating NodeGroup %q: %#v", d.Id(), res) + } + + err = ComputeOperationWaitTime( + config, res, project, "Updating NodeGroup", userAgent, + d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return err + } + } d.Partial(true) if d.HasChange("node_template") { @@ -539,10 +643,10 @@ func resourceComputeNodeGroupDelete(d *schema.ResourceData, meta interface{}) er func resourceComputeNodeGroupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/nodeGroups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + 
"^projects/(?P[^/]+)/zones/(?P[^/]+)/nodeGroups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -730,10 +834,6 @@ func expandComputeNodeGroupNodeTemplate(v interface{}, d tpgresource.TerraformRe return f.RelativeLink(), nil } -func expandComputeNodeGroupSize(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - func expandComputeNodeGroupMaintenancePolicy(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/compute/resource_compute_node_group_generated_test.go b/google/services/compute/resource_compute_node_group_generated_test.go index 585042c8c47..12e2e5f5ca6 100644 --- a/google/services/compute/resource_compute_node_group_generated_test.go +++ b/google/services/compute/resource_compute_node_group_generated_test.go @@ -69,7 +69,7 @@ resource "google_compute_node_group" "nodes" { zone = "us-central1-f" description = "example google_compute_node_group for Terraform Google Provider" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id } `, context) @@ -172,7 +172,7 @@ resource "google_compute_node_group" "nodes" { zone = "us-central1-f" description = "example google_compute_node_group for Terraform Google Provider" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id share_settings { diff --git a/google/services/compute/resource_compute_node_group_test.go b/google/services/compute/resource_compute_node_group_test.go index a9d4708c128..9951667495a 100644 --- a/google/services/compute/resource_compute_node_group_test.go +++ b/google/services/compute/resource_compute_node_group_test.go @@ -9,12 +9,14 @@ import ( "strings" "time" + "regexp" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-google/google/acctest" ) -func TestAccComputeNodeGroup_updateNodeTemplate(t *testing.T) { +func TestAccComputeNodeGroup_update(t *testing.T) { t.Parallel() groupName := fmt.Sprintf("group--%d", acctest.RandInt(t)) @@ -27,26 +29,47 @@ func TestAccComputeNodeGroup_updateNodeTemplate(t *testing.T) { CheckDestroy: testAccCheckComputeNodeGroupDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccComputeNodeGroup_updateNodeTemplate(groupName, tmplPrefix, "tmpl1"), + Config: testAccComputeNodeGroup_update(groupName, tmplPrefix, "tmpl1"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNodeGroupCreationTimeBefore(&timeCreated), ), }, { - ResourceName: "google_compute_node_group.nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_node_group.nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"initial_size"}, }, { - Config: testAccComputeNodeGroup_updateNodeTemplate(groupName, tmplPrefix, "tmpl2"), + Config: testAccComputeNodeGroup_update2(groupName, tmplPrefix, "tmpl2"), Check: resource.ComposeTestCheckFunc( testAccCheckComputeNodeGroupCreationTimeBefore(&timeCreated), ), }, { - ResourceName: "google_compute_node_group.nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_node_group.nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"initial_size"}, + }, + }, + }) +} + +func 
TestAccComputeNodeGroup_fail(t *testing.T) { + t.Parallel() + + groupName := fmt.Sprintf("group--%d", acctest.RandInt(t)) + tmplPrefix := fmt.Sprintf("tmpl--%d", acctest.RandInt(t)) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckComputeNodeGroupDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccComputeNodeGroup_fail(groupName, tmplPrefix, "tmpl1"), + ExpectError: regexp.MustCompile("An initial_size or autoscaling_policy must be configured on node group creation."), }, }, }) @@ -87,7 +110,7 @@ func testAccCheckComputeNodeGroupCreationTimeBefore(prevTimeCreated *time.Time) } } -func testAccComputeNodeGroup_updateNodeTemplate(groupName, tmplPrefix, tmplToUse string) string { +func testAccComputeNodeGroup_update(groupName, tmplPrefix, tmplToUse string) string { return fmt.Sprintf(` resource "google_compute_node_template" "tmpl1" { name = "%s-first" @@ -106,8 +129,58 @@ resource "google_compute_node_group" "nodes" { zone = "us-central1-a" description = "example google_compute_node_group for Terraform Google Provider" - size = 0 + initial_size = 1 node_template = google_compute_node_template.%s.self_link } + `, tmplPrefix, tmplPrefix, groupName, tmplToUse) } + +func testAccComputeNodeGroup_update2(groupName, tmplPrefix, tmplToUse string) string { + return fmt.Sprintf(` +resource "google_compute_node_template" "tmpl1" { + name = "%s-first" + region = "us-central1" + node_type = "n1-node-96-624" +} + +resource "google_compute_node_template" "tmpl2" { + name = "%s-second" + region = "us-central1" + node_type = "n1-node-96-624" +} + +resource "google_compute_node_group" "nodes" { + name = "%s" + zone = "us-central1-a" + description = "example google_compute_node_group for Terraform Google Provider" + + autoscaling_policy { + mode = "ONLY_SCALE_OUT" + min_nodes = 1 + max_nodes = 10 + } + node_template = google_compute_node_template.%s.self_link +} + +`, tmplPrefix, tmplPrefix, groupName, tmplToUse) +} + +func testAccComputeNodeGroup_fail(groupName, tmplPrefix, tmplToUse string) string { + return fmt.Sprintf(` +resource "google_compute_node_template" "tmpl1" { + name = "%s-first" + region = "us-central1" + node_type = "n1-node-96-624" +} + +resource "google_compute_node_group" "nodes" { + name = "%s" + zone = "us-central1-a" + description = "example google_compute_node_group for Terraform Google Provider" + + node_template = google_compute_node_template.%s.self_link +} + +`, tmplPrefix, groupName, tmplToUse) +} diff --git a/google/services/compute/resource_compute_node_template.go b/google/services/compute/resource_compute_node_template.go index 6a7dd66802a..5bbe68b6124 100644 --- a/google/services/compute/resource_compute_node_template.go +++ b/google/services/compute/resource_compute_node_template.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeNodeTemplate() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "cpu_overcommit_type": { Type: schema.TypeString, @@ -417,10 +422,10 @@ func resourceComputeNodeTemplateDelete(d *schema.ResourceData, meta interface{}) func 
resourceComputeNodeTemplateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/nodeTemplates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/nodeTemplates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_packet_mirroring.go b/google/services/compute/resource_compute_packet_mirroring.go index 1dd0a2e6702..d5a82b52eba 100644 --- a/google/services/compute/resource_compute_packet_mirroring.go +++ b/google/services/compute/resource_compute_packet_mirroring.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceComputePacketMirroring() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "collector_ilb": { Type: schema.TypeList, @@ -537,10 +542,10 @@ func resourceComputePacketMirroringDelete(d *schema.ResourceData, meta interface func resourceComputePacketMirroringImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/packetMirrorings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/packetMirrorings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_per_instance_config.go b/google/services/compute/resource_compute_per_instance_config.go index 14f4ca9e855..27830da4687 100644 --- a/google/services/compute/resource_compute_per_instance_config.go +++ b/google/services/compute/resource_compute_per_instance_config.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceComputePerInstanceConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderZone, + ), + Schema: map[string]*schema.Schema{ "instance_group_manager": { Type: schema.TypeString, @@ -86,6 +92,7 @@ func ResourceComputePerInstanceConfig() *schema.Resource { }, "zone": { Type: schema.TypeString, + Computed: true, Optional: true, ForceNew: true, DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, @@ -323,6 +330,14 @@ func resourceComputePerInstanceConfigRead(d *schema.ResourceData, meta interface return fmt.Errorf("Error reading PerInstanceConfig: %s", err) } + zone, err := tpgresource.GetZone(d, config) + if err != nil { + return err + } + if err := d.Set("zone", zone); err != nil { + return fmt.Errorf("Error reading PerInstanceConfig: %s", err) + } + if err := d.Set("name", 
flattenNestedComputePerInstanceConfigName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading PerInstanceConfig: %s", err) } @@ -565,10 +580,10 @@ func resourceComputePerInstanceConfigDelete(d *schema.ResourceData, meta interfa func resourceComputePerInstanceConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/instanceGroupManagers/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/instanceGroupManagers/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_project_default_network_tier.go b/google/services/compute/resource_compute_project_default_network_tier.go index f59bd43d1c6..e172c1e0fdf 100644 --- a/google/services/compute/resource_compute_project_default_network_tier.go +++ b/google/services/compute/resource_compute_project_default_network_tier.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/compute/v1" @@ -31,6 +32,10 @@ func ResourceComputeProjectDefaultNetworkTier() *schema.Resource { Create: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + SchemaVersion: 0, Schema: map[string]*schema.Schema{ diff --git a/google/services/compute/resource_compute_project_metadata.go b/google/services/compute/resource_compute_project_metadata.go index 577d5dee93f..2f9dbfeeb55 100644 --- a/google/services/compute/resource_compute_project_metadata.go +++ b/google/services/compute/resource_compute_project_metadata.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/compute/v1" @@ -30,6 +31,10 @@ func ResourceComputeProjectMetadata() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + SchemaVersion: 0, Schema: map[string]*schema.Schema{ diff --git a/google/services/compute/resource_compute_project_metadata_item.go b/google/services/compute/resource_compute_project_metadata_item.go index 04ce3253c11..d85ee38d42e 100644 --- a/google/services/compute/resource_compute_project_metadata_item.go +++ b/google/services/compute/resource_compute_project_metadata_item.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/compute/v1" @@ -32,6 +33,10 @@ func ResourceComputeProjectMetadataItem() *schema.Resource { State: schema.ImportStatePassthrough, }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: 
map[string]*schema.Schema{ "key": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_public_advertised_prefix.go b/google/services/compute/resource_compute_public_advertised_prefix.go index 6b7b53fe75b..bd6aaa7f04f 100644 --- a/google/services/compute/resource_compute_public_advertised_prefix.go +++ b/google/services/compute/resource_compute_public_advertised_prefix.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceComputePublicAdvertisedPrefix() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dns_verification_ip": { Type: schema.TypeString, @@ -291,9 +296,9 @@ func resourceComputePublicAdvertisedPrefixDelete(d *schema.ResourceData, meta in func resourceComputePublicAdvertisedPrefixImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/publicAdvertisedPrefixes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/publicAdvertisedPrefixes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_public_delegated_prefix.go b/google/services/compute/resource_compute_public_delegated_prefix.go index 0f217ff5d0c..2f01c092400 100644 --- a/google/services/compute/resource_compute_public_delegated_prefix.go +++ b/google/services/compute/resource_compute_public_delegated_prefix.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceComputePublicDelegatedPrefix() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "ip_cidr_range": { Type: schema.TypeString, @@ -313,10 +318,10 @@ func resourceComputePublicDelegatedPrefixDelete(d *schema.ResourceData, meta int func resourceComputePublicDelegatedPrefixImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/publicDelegatedPrefixes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/publicDelegatedPrefixes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_autoscaler.go b/google/services/compute/resource_compute_region_autoscaler.go index b5b0d2ba246..7e9ee486f9d 100644 --- a/google/services/compute/resource_compute_region_autoscaler.go +++ b/google/services/compute/resource_compute_region_autoscaler.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
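// ---------------------------------------------------------------------------
// Editorial aside (not part of the diff): the import-format changes throughout
// this patch anchor every pattern with ^ and $. A minimal standard-library
// sketch of why that matters; the resource paths are illustrative.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	unanchored := regexp.MustCompile(`projects/[^/]+/global/networks/[^/]+`)
	anchored := regexp.MustCompile(`^projects/[^/]+/global/networks/[^/]+$`)

	// An ID with extra trailing segments should not be accepted as a bare network ID.
	id := "projects/my-proj/global/networks/my-net/networkPeerings/my-peering"

	fmt.Println(unanchored.MatchString(id)) // true: matches a substring and silently ignores the suffix
	fmt.Println(anchored.MatchString(id))   // false: the whole ID must match the expected format
}
// ---------------------------------------------------------------------------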
"github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceComputeRegionAutoscaler() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "autoscaling_policy": { Type: schema.TypeList, @@ -465,6 +471,14 @@ func resourceComputeRegionAutoscalerRead(d *schema.ResourceData, meta interface{ return fmt.Errorf("Error reading RegionAutoscaler: %s", err) } + region, err := tpgresource.GetRegion(d, config) + if err != nil { + return err + } + if err := d.Set("region", region); err != nil { + return fmt.Errorf("Error reading RegionAutoscaler: %s", err) + } + if err := d.Set("creation_timestamp", flattenComputeRegionAutoscalerCreationTimestamp(res["creationTimestamp"], d, config)); err != nil { return fmt.Errorf("Error reading RegionAutoscaler: %s", err) } @@ -480,9 +494,6 @@ func resourceComputeRegionAutoscalerRead(d *schema.ResourceData, meta interface{ if err := d.Set("target", flattenComputeRegionAutoscalerTarget(res["target"], d, config)); err != nil { return fmt.Errorf("Error reading RegionAutoscaler: %s", err) } - if err := d.Set("region", flattenComputeRegionAutoscalerRegion(res["region"], d, config)); err != nil { - return fmt.Errorf("Error reading RegionAutoscaler: %s", err) - } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading RegionAutoscaler: %s", err) } @@ -632,10 +643,10 @@ func resourceComputeRegionAutoscalerDelete(d *schema.ResourceData, meta interfac func resourceComputeRegionAutoscalerImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/autoscalers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/autoscalers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -977,13 +988,6 @@ func flattenComputeRegionAutoscalerTarget(v interface{}, d *schema.ResourceData, return v } -func flattenComputeRegionAutoscalerRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - return tpgresource.ConvertSelfLinkToV1(v.(string)) -} - func expandComputeRegionAutoscalerName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/compute/resource_compute_region_backend_service.go b/google/services/compute/resource_compute_region_backend_service.go index dc2e95e0fa5..da60e124cee 100644 --- a/google/services/compute/resource_compute_region_backend_service.go +++ b/google/services/compute/resource_compute_region_backend_service.go @@ -144,6 +144,7 @@ func ResourceComputeRegionBackendService() *schema.Resource { MigrateState: tpgresource.MigrateStateNoop, CustomizeDiff: customdiff.All( customDiffRegionBackendService, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1654,10 +1655,10 @@ func resourceComputeRegionBackendServiceDelete(d *schema.ResourceData, meta inte func resourceComputeRegionBackendServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := 
meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/backendServices/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/backendServices/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_commitment.go b/google/services/compute/resource_compute_region_commitment.go index 5f2bb08b95e..f7493d3adf6 100644 --- a/google/services/compute/resource_compute_region_commitment.go +++ b/google/services/compute/resource_compute_region_commitment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeRegionCommitment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -440,10 +445,10 @@ func resourceComputeRegionCommitmentDelete(d *schema.ResourceData, meta interfac func resourceComputeRegionCommitmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/commitments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/commitments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_disk.go b/google/services/compute/resource_compute_region_disk.go index 5e4e0f8007a..0415186a800 100644 --- a/google/services/compute/resource_compute_region_disk.go +++ b/google/services/compute/resource_compute_region_disk.go @@ -53,6 +53,8 @@ func ResourceComputeRegionDisk() *schema.Resource { CustomizeDiff: customdiff.All( customdiff.ForceNewIfChange("size", IsDiskShrinkage), hyperDiskIopsUpdateDiffSupress, + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -158,10 +160,14 @@ Applicable only for bootable disks.`, // Default schema.HashSchema is used. }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this disk. A list of key->value pairs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this disk. A list of key->value pairs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "licenses": { Type: schema.TypeList, @@ -275,6 +281,12 @@ create the disk. 
Provide this when creating the disk.`, Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, @@ -308,6 +320,13 @@ that was later deleted and recreated under the same name, the source snapshot ID would identify the exact version of the snapshot that was used.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "users": { Type: schema.TypeList, Computed: true, @@ -367,12 +386,6 @@ func resourceComputeRegionDiskCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandComputeRegionDiskLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } nameProp, err := expandComputeRegionDiskName(d.Get("name"), d, config) if err != nil { return err @@ -427,6 +440,12 @@ func resourceComputeRegionDiskCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("licenses"); !tpgresource.IsEmptyValue(reflect.ValueOf(licensesProp)) && (ok || !reflect.DeepEqual(v, licensesProp)) { obj["licenses"] = licensesProp } + labelsProp, err := expandComputeRegionDiskEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } regionProp, err := expandComputeRegionDiskRegion(d.Get("region"), d, config) if err != nil { return err @@ -614,6 +633,12 @@ func resourceComputeRegionDiskRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("licenses", flattenComputeRegionDiskLicenses(res["licenses"], d, config)); err != nil { return fmt.Errorf("Error reading RegionDisk: %s", err) } + if err := d.Set("terraform_labels", flattenComputeRegionDiskTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading RegionDisk: %s", err) + } + if err := d.Set("effective_labels", flattenComputeRegionDiskEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading RegionDisk: %s", err) + } if err := d.Set("region", flattenComputeRegionDiskRegion(res["region"], d, config)); err != nil { return fmt.Errorf("Error reading RegionDisk: %s", err) } @@ -653,7 +678,7 @@ func resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e d.Partial(true) - if d.HasChange("label_fingerprint") || d.HasChange("labels") { + if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) labelFingerprintProp, err := expandComputeRegionDiskLabelFingerprint(d.Get("label_fingerprint"), d, config) @@ -662,10 +687,10 @@ func 
resourceComputeRegionDiskUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } - labelsProp, err := expandComputeRegionDiskLabels(d.Get("labels"), d, config) + labelsProp, err := expandComputeRegionDiskEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -865,10 +890,10 @@ func resourceComputeRegionDiskDelete(d *schema.ResourceData, meta interface{}) e func resourceComputeRegionDiskImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/disks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/disks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -904,7 +929,18 @@ func flattenComputeRegionDiskLastDetachTimestamp(v interface{}, d *schema.Resour } func flattenComputeRegionDiskLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeRegionDiskName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1020,6 +1056,25 @@ func flattenComputeRegionDiskLicenses(v interface{}, d *schema.ResourceData, con return tpgresource.ConvertAndMapStringArr(v.([]interface{}), tpgresource.ConvertSelfLinkToV1) } +func flattenComputeRegionDiskTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeRegionDiskEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenComputeRegionDiskRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -1098,17 +1153,6 @@ func expandComputeRegionDiskDescription(v interface{}, d tpgresource.TerraformRe return v, nil } -func expandComputeRegionDiskLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandComputeRegionDiskName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1215,6 +1259,17 @@ func 
expandComputeRegionDiskLicenses(v interface{}, d tpgresource.TerraformResou return req, nil } +func expandComputeRegionDiskEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func expandComputeRegionDiskRegion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { f, err := tpgresource.ParseGlobalFieldValue("regions", v.(string), "project", d, config, true) if err != nil { diff --git a/google/services/compute/resource_compute_region_disk_generated_test.go b/google/services/compute/resource_compute_region_disk_generated_test.go index 1b3934443d1..8e3761d766c 100644 --- a/google/services/compute/resource_compute_region_disk_generated_test.go +++ b/google/services/compute/resource_compute_region_disk_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeRegionDisk_regionDiskBasicExample(t *testing.T) { ResourceName: "google_compute_region_disk.regiondisk", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "region", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "region", "snapshot", "labels", "terraform_labels"}, }, }, }) @@ -102,7 +102,7 @@ func TestAccComputeRegionDisk_regionDiskAsyncExample(t *testing.T) { ResourceName: "google_compute_region_disk.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "region", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "region", "snapshot", "labels", "terraform_labels"}, }, }, }) @@ -153,7 +153,7 @@ func TestAccComputeRegionDisk_regionDiskFeaturesExample(t *testing.T) { ResourceName: "google_compute_region_disk.regiondisk", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"type", "region", "snapshot"}, + ImportStateVerifyIgnore: []string{"type", "region", "snapshot", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_region_disk_resource_policy_attachment.go b/google/services/compute/resource_compute_region_disk_resource_policy_attachment.go index 8a2fa6420c5..ccca1ea9fab 100644 --- a/google/services/compute/resource_compute_region_disk_resource_policy_attachment.go +++ b/google/services/compute/resource_compute_region_disk_resource_policy_attachment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceComputeRegionDiskResourcePolicyAttachment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "disk": { Type: schema.TypeString, @@ -295,10 +300,10 @@ func resourceComputeRegionDiskResourcePolicyAttachmentDelete(d *schema.ResourceD func resourceComputeRegionDiskResourcePolicyAttachmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/disks/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + 
"^projects/(?P[^/]+)/regions/(?P[^/]+)/disks/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_disk_test.go b/google/services/compute/resource_compute_region_disk_test.go index 78bd218f4ea..85d0e76c86f 100644 --- a/google/services/compute/resource_compute_region_disk_test.go +++ b/google/services/compute/resource_compute_region_disk_test.go @@ -37,9 +37,10 @@ func TestAccComputeRegionDisk_basic(t *testing.T) { ), }, { - ResourceName: "google_compute_region_disk.regiondisk", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_disk.regiondisk", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeRegionDisk_basic(diskName, "name"), @@ -49,9 +50,10 @@ func TestAccComputeRegionDisk_basic(t *testing.T) { ), }, { - ResourceName: "google_compute_region_disk.regiondisk", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_disk.regiondisk", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -77,9 +79,10 @@ func TestAccComputeRegionDisk_basicUpdate(t *testing.T) { ), }, { - ResourceName: "google_compute_region_disk.regiondisk", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_disk.regiondisk", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccComputeRegionDisk_basicUpdated(diskName, "self_link"), @@ -93,9 +96,10 @@ func TestAccComputeRegionDisk_basicUpdate(t *testing.T) { ), }, { - ResourceName: "google_compute_region_disk.regiondisk", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_disk.regiondisk", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_region_health_check.go b/google/services/compute/resource_compute_region_health_check.go index a41b7eb4a40..c89f942000d 100644 --- a/google/services/compute/resource_compute_region_health_check.go +++ b/google/services/compute/resource_compute_region_health_check.go @@ -50,6 +50,7 @@ func ResourceComputeRegionHealthCheck() *schema.Resource { CustomizeDiff: customdiff.All( healthCheckCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1040,10 +1041,10 @@ func resourceComputeRegionHealthCheckDelete(d *schema.ResourceData, meta interfa func resourceComputeRegionHealthCheckImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/healthChecks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/healthChecks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_instance_group_manager.go b/google/services/compute/resource_compute_region_instance_group_manager.go index 39b204f4359..6dff02c2900 100644 --- 
a/google/services/compute/resource_compute_region_instance_group_manager.go +++ b/google/services/compute/resource_compute_region_instance_group_manager.go @@ -9,6 +9,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -34,6 +35,11 @@ func ResourceComputeRegionInstanceGroupManager() *schema.Resource { Delete: schema.DefaultTimeout(15 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "base_instance_name": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_region_instance_template.go b/google/services/compute/resource_compute_region_instance_template.go index caa77753284..64e06c5814a 100644 --- a/google/services/compute/resource_compute_region_instance_template.go +++ b/google/services/compute/resource_compute_region_instance_template.go @@ -30,9 +30,12 @@ func ResourceComputeRegionInstanceTemplate() *schema.Resource { }, SchemaVersion: 1, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, resourceComputeInstanceTemplateSourceImageCustomizeDiff, resourceComputeInstanceTemplateScratchDiskCustomizeDiff, resourceComputeInstanceTemplateBootDiskCustomizeDiff, + tpgresource.SetLabelsDiff, ), Timeouts: &schema.ResourceTimeout{ @@ -823,12 +826,32 @@ be from 0 to 999,999,999 inclusive.`, }, "labels": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + Description: `A set of key/value label pairs to assign to instances created from this template, + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, - ForceNew: true, + Computed: true, + Set: schema.HashString, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, Set: schema.HashString, - Description: `A set of key/value label pairs to assign to instances created from this template,`, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "resource_policies": { @@ -958,8 +981,8 @@ func resourceComputeRegionInstanceTemplateCreate(d *schema.ResourceData, meta in ReservationAffinity: reservationAffinity, } - if _, ok := d.GetOk("labels"); ok { - instanceProperties.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + instanceProperties.Labels = tpgresource.ExpandEffectiveLabels(d) } var itName string @@ -1085,10 +1108,16 @@ func resourceComputeRegionInstanceTemplateRead(d *schema.ResourceData, meta inte } } if instanceProperties.Labels != nil { - if err := d.Set("labels", instanceProperties.Labels); err != nil { + if err := tpgresource.SetLabels(instanceProperties.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } } + if err := tpgresource.SetLabels(instanceProperties.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", instanceProperties.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err = d.Set("self_link", instanceTemplate["selfLink"]); err != nil { return fmt.Errorf("Error setting self_link: %s", err) } diff --git a/google/services/compute/resource_compute_region_instance_template_test.go b/google/services/compute/resource_compute_region_instance_template_test.go index dbfee11163a..d28f5071502 100644 --- a/google/services/compute/resource_compute_region_instance_template_test.go +++ b/google/services/compute/resource_compute_region_instance_template_test.go @@ -43,9 +43,10 @@ func TestAccComputeRegionInstanceTemplate_basic(t *testing.T) { ), }, { - ResourceName: "google_compute_region_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -69,9 +70,10 @@ func TestAccComputeRegionInstanceTemplate_imageShorthand(t *testing.T) { ), }, { - ResourceName: "google_compute_region_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -827,9 +829,10 @@ func TestAccComputeRegionInstanceTemplate_diskResourcePolicies(t *testing.T) { ), }, { - ResourceName: "google_compute_region_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: 
[]string{"labels", "terraform_labels"}, }, }, }) @@ -904,9 +907,10 @@ func TestAccComputeRegionInstanceTemplate_managedEnvoy(t *testing.T) { ), }, { - ResourceName: "google_compute_region_instance_template.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_compute_region_instance_template.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -1532,7 +1536,6 @@ resource "google_compute_image" "foobar" { } labels = { my-label = "my-label-value" - empty-label = "" } timeouts { create = "5m" diff --git a/google/services/compute/resource_compute_region_network_endpoint_group.go b/google/services/compute/resource_compute_region_network_endpoint_group.go index 73d12c8df83..12f31800ac5 100644 --- a/google/services/compute/resource_compute_region_network_endpoint_group.go +++ b/google/services/compute/resource_compute_region_network_endpoint_group.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeRegionNetworkEndpointGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -498,10 +503,10 @@ func resourceComputeRegionNetworkEndpointGroupDelete(d *schema.ResourceData, met func resourceComputeRegionNetworkEndpointGroupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/networkEndpointGroups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_network_firewall_policy.go b/google/services/compute/resource_compute_region_network_firewall_policy.go index 4a597ffa2aa..eab7ec8d6f9 100644 --- a/google/services/compute/resource_compute_region_network_firewall_policy.go +++ b/google/services/compute/resource_compute_region_network_firewall_policy.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceComputeRegionNetworkFirewallPolicy() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "name": { diff --git a/google/services/compute/resource_compute_region_network_firewall_policy_association.go b/google/services/compute/resource_compute_region_network_firewall_policy_association.go index 7c18fb6ecbc..b5b728befb1 100644 --- a/google/services/compute/resource_compute_region_network_firewall_policy_association.go +++ b/google/services/compute/resource_compute_region_network_firewall_policy_association.go @@ -24,6 +24,7 @@ 
import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,10 @@ func ResourceComputeRegionNetworkFirewallPolicyAssociation() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "attachment_target": { diff --git a/google/services/compute/resource_compute_region_network_firewall_policy_rule.go b/google/services/compute/resource_compute_region_network_firewall_policy_rule.go index 036329f79dc..37f769e9abc 100644 --- a/google/services/compute/resource_compute_region_network_firewall_policy_rule.go +++ b/google/services/compute/resource_compute_region_network_firewall_policy_rule.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceComputeRegionNetworkFirewallPolicyRule() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), Schema: map[string]*schema.Schema{ "action": { diff --git a/google/services/compute/resource_compute_region_per_instance_config.go b/google/services/compute/resource_compute_region_per_instance_config.go index 1b6149f681d..e9f72a1ab2a 100644 --- a/google/services/compute/resource_compute_region_per_instance_config.go +++ b/google/services/compute/resource_compute_region_per_instance_config.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceComputeRegionPerInstanceConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -575,10 +581,10 @@ func resourceComputeRegionPerInstanceConfigDelete(d *schema.ResourceData, meta i func resourceComputeRegionPerInstanceConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/instanceGroupManagers/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/instanceGroupManagers/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_ssl_certificate.go b/google/services/compute/resource_compute_region_ssl_certificate.go index 90fed441b92..08b7e8db1a7 100644 --- a/google/services/compute/resource_compute_region_ssl_certificate.go +++ 
b/google/services/compute/resource_compute_region_ssl_certificate.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -46,6 +47,10 @@ func ResourceComputeRegionSslCertificate() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "certificate": { Type: schema.TypeString, @@ -361,10 +366,10 @@ func resourceComputeRegionSslCertificateDelete(d *schema.ResourceData, meta inte func resourceComputeRegionSslCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/sslCertificates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/sslCertificates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_ssl_policy.go b/google/services/compute/resource_compute_region_ssl_policy.go index 2a4e939709b..1fa12acb6f1 100644 --- a/google/services/compute/resource_compute_region_ssl_policy.go +++ b/google/services/compute/resource_compute_region_ssl_policy.go @@ -68,6 +68,7 @@ func ResourceComputeRegionSslPolicy() *schema.Resource { CustomizeDiff: customdiff.All( regionSslPolicyCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -484,10 +485,10 @@ func resourceComputeRegionSslPolicyDelete(d *schema.ResourceData, meta interface func resourceComputeRegionSslPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/sslPolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/sslPolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_target_http_proxy.go b/google/services/compute/resource_compute_region_target_http_proxy.go index 0c1657ff0e3..c1565466d64 100644 --- a/google/services/compute/resource_compute_region_target_http_proxy.go +++ b/google/services/compute/resource_compute_region_target_http_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeRegionTargetHttpProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -379,10 +384,10 @@ func resourceComputeRegionTargetHttpProxyDelete(d *schema.ResourceData, meta int func resourceComputeRegionTargetHttpProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := 
tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/targetHttpProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/targetHttpProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_target_https_proxy.go b/google/services/compute/resource_compute_region_target_https_proxy.go index dcbf9ab5d35..35595b984b7 100644 --- a/google/services/compute/resource_compute_region_target_https_proxy.go +++ b/google/services/compute/resource_compute_region_target_https_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeRegionTargetHttpsProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -459,10 +464,10 @@ func resourceComputeRegionTargetHttpsProxyDelete(d *schema.ResourceData, meta in func resourceComputeRegionTargetHttpsProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/targetHttpsProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/targetHttpsProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_target_tcp_proxy.go b/google/services/compute/resource_compute_region_target_tcp_proxy.go index e259ced6ca3..4e8e535065b 100644 --- a/google/services/compute/resource_compute_region_target_tcp_proxy.go +++ b/google/services/compute/resource_compute_region_target_tcp_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeRegionTargetTcpProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backend_service": { Type: schema.TypeString, @@ -348,10 +353,10 @@ func resourceComputeRegionTargetTcpProxyDelete(d *schema.ResourceData, meta inte func resourceComputeRegionTargetTcpProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/targetTcpProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/targetTcpProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_region_url_map.go 
b/google/services/compute/resource_compute_region_url_map.go index 2237ee3071c..d1e94017051 100644 --- a/google/services/compute/resource_compute_region_url_map.go +++ b/google/services/compute/resource_compute_region_url_map.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -48,6 +49,10 @@ func ResourceComputeRegionUrlMap() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -2563,10 +2568,10 @@ func resourceComputeRegionUrlMapDelete(d *schema.ResourceData, meta interface{}) func resourceComputeRegionUrlMapImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/urlMaps/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/urlMaps/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_reservation.go b/google/services/compute/resource_compute_reservation.go index a3f3f2c556c..b3eb9a2a714 100644 --- a/google/services/compute/resource_compute_reservation.go +++ b/google/services/compute/resource_compute_reservation.go @@ -26,6 +26,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -51,6 +52,10 @@ func ResourceComputeReservation() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -634,10 +639,10 @@ func resourceComputeReservationDelete(d *schema.ResourceData, meta interface{}) func resourceComputeReservationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/reservations/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/reservations/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_resource_policy.go b/google/services/compute/resource_compute_resource_policy.go index d910151fa5a..6cfd731ec31 100644 --- a/google/services/compute/resource_compute_resource_policy.go +++ b/google/services/compute/resource_compute_resource_policy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,11 @@ func ResourceComputeResourcePolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + 
tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -548,6 +554,14 @@ func resourceComputeResourcePolicyRead(d *schema.ResourceData, meta interface{}) return fmt.Errorf("Error reading ResourcePolicy: %s", err) } + region, err := tpgresource.GetRegion(d, config) + if err != nil { + return err + } + if err := d.Set("region", region); err != nil { + return fmt.Errorf("Error reading ResourcePolicy: %s", err) + } + if err := d.Set("name", flattenComputeResourcePolicyName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading ResourcePolicy: %s", err) } @@ -566,9 +580,6 @@ func resourceComputeResourcePolicyRead(d *schema.ResourceData, meta interface{}) if err := d.Set("disk_consistency_group_policy", flattenComputeResourcePolicyDiskConsistencyGroupPolicy(res["diskConsistencyGroupPolicy"], d, config)); err != nil { return fmt.Errorf("Error reading ResourcePolicy: %s", err) } - if err := d.Set("region", flattenComputeResourcePolicyRegion(res["region"], d, config)); err != nil { - return fmt.Errorf("Error reading ResourcePolicy: %s", err) - } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading ResourcePolicy: %s", err) } @@ -632,10 +643,10 @@ func resourceComputeResourcePolicyDelete(d *schema.ResourceData, meta interface{ func resourceComputeResourcePolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/resourcePolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/resourcePolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1009,13 +1020,6 @@ func flattenComputeResourcePolicyDiskConsistencyGroupPolicy(v interface{}, d *sc return []interface{}{transformed} } -func flattenComputeResourcePolicyRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - return tpgresource.ConvertSelfLinkToV1(v.(string)) -} - func expandComputeResourcePolicyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/compute/resource_compute_route.go b/google/services/compute/resource_compute_route.go index 991ce0f1127..a9b807e51d1 100644 --- a/google/services/compute/resource_compute_route.go +++ b/google/services/compute/resource_compute_route.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceComputeRoute() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dest_range": { Type: schema.TypeString, @@ -493,9 +498,9 @@ func resourceComputeRouteDelete(d *schema.ResourceData, meta interface{}) error func resourceComputeRouteImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - 
"projects/(?P[^/]+)/global/routes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/routes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_router.go b/google/services/compute/resource_compute_router.go index 158858592c5..2c566df60db 100644 --- a/google/services/compute/resource_compute_router.go +++ b/google/services/compute/resource_compute_router.go @@ -69,6 +69,7 @@ func ResourceComputeRouter() *schema.Resource { CustomizeDiff: customdiff.All( resourceComputeRouterCustomDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -522,10 +523,10 @@ func resourceComputeRouterDelete(d *schema.ResourceData, meta interface{}) error func resourceComputeRouterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_router_interface.go b/google/services/compute/resource_compute_router_interface.go index 07bd2977277..a2b504821d8 100644 --- a/google/services/compute/resource_compute_router_interface.go +++ b/google/services/compute/resource_compute_router_interface.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/googleapi" @@ -32,6 +33,11 @@ func ResourceComputeRouterInterface() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_router_nat.go b/google/services/compute/resource_compute_router_nat.go index afb41353282..099eec71de7 100644 --- a/google/services/compute/resource_compute_router_nat.go +++ b/google/services/compute/resource_compute_router_nat.go @@ -185,6 +185,7 @@ func ResourceComputeRouterNat() *schema.Resource { CustomizeDiff: customdiff.All( resourceComputeRouterNatDrainNatIpsCustomDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -243,10 +244,10 @@ Mutually exclusive with enableEndpointIndependentMapping.`, }, "enable_endpoint_independent_mapping": { Type: schema.TypeBool, + Computed: true, Optional: true, - Description: `Specifies if endpoint independent mapping is enabled. This is enabled by default. For more information -see the [official documentation](https://cloud.google.com/nat/docs/overview#specs-rfcs).`, - Default: true, + Description: `Enable endpoint independent mapping. 
+For more information see the [official documentation](https://cloud.google.com/nat/docs/overview#specs-rfcs).`, }, "icmp_idle_timeout_sec": { Type: schema.TypeInt, @@ -991,10 +992,10 @@ func resourceComputeRouterNatDelete(d *schema.ResourceData, meta interface{}) er func resourceComputeRouterNatImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_router_peer.go b/google/services/compute/resource_compute_router_peer.go index 3e9c36aa2ae..dc26bf2ca86 100644 --- a/google/services/compute/resource_compute_router_peer.go +++ b/google/services/compute/resource_compute_router_peer.go @@ -26,6 +26,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -64,6 +65,10 @@ func ResourceComputeRouterBgpPeer() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "interface": { Type: schema.TypeString, @@ -780,10 +785,10 @@ func resourceComputeRouterBgpPeerDelete(d *schema.ResourceData, meta interface{} func resourceComputeRouterBgpPeerImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/routers/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_security_policy.go b/google/services/compute/resource_compute_security_policy.go index 8871952f2d7..887454211aa 100644 --- a/google/services/compute/resource_compute_security_policy.go +++ b/google/services/compute/resource_compute_security_policy.go @@ -10,6 +10,7 @@ import ( "time" "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -29,7 +30,10 @@ func ResourceComputeSecurityPolicy() *schema.Resource { Importer: &schema.ResourceImporter{ State: resourceSecurityPolicyStateImporter, }, - CustomizeDiff: rulesCustomizeDiff, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + rulesCustomizeDiff, + ), Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(8 * time.Minute), @@ -212,7 +216,6 @@ func ResourceComputeSecurityPolicy() *schema.Resource { "enforce_on_key": { Type: schema.TypeString, Optional: true, - Default: "ALL", Description: `Determines the key to enforce the rateLimitThreshold on`, 
ValidateFunc: validation.StringInSlice([]string{"ALL", "IP", "HTTP_HEADER", "XFF_IP", "HTTP_COOKIE", "HTTP_PATH", "SNI", "REGION_CODE", ""}, false), }, diff --git a/google/services/compute/resource_compute_service_attachment.go b/google/services/compute/resource_compute_service_attachment.go index 11ba0808028..76b574f1d9e 100644 --- a/google/services/compute/resource_compute_service_attachment.go +++ b/google/services/compute/resource_compute_service_attachment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,11 @@ func ResourceComputeServiceAttachment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "connection_preference": { Type: schema.TypeString, @@ -89,25 +95,12 @@ except the last character, which cannot be a dash.`, this service attachment.`, }, "consumer_accept_lists": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, Description: `An array of projects that are allowed to connect to this service attachment.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "connection_limit": { - Type: schema.TypeInt, - Required: true, - Description: `The number of consumer forwarding rules the consumer project can -create.`, - }, - "project_id_or_num": { - Type: schema.TypeString, - Required: true, - Description: `A project that is allowed to connect to this service attachment.`, - }, - }, - }, + Elem: computeServiceAttachmentConsumerAcceptListsSchema(), + // Default schema.HashSchema is used. }, "consumer_reject_lists": { Type: schema.TypeList, @@ -137,14 +130,12 @@ supported is 1.`, }, "reconcile_connections": { Type: schema.TypeBool, + Computed: true, Optional: true, Description: `This flag determines whether a consumer accept/reject list change can reconcile the statuses of existing ACCEPTED or REJECTED PSC endpoints. If false, connection policy update will only affect existing PENDING PSC endpoints. Existing ACCEPTED/REJECTED endpoints will remain untouched regardless how the connection policy is modified . -If true, update will affect both PENDING and ACCEPTED/REJECTED PSC endpoints. For example, an ACCEPTED PSC endpoint will be moved to REJECTED if its project is added to the reject list. - -For newly created service attachment, this boolean defaults to true.`, - Default: true, +If true, update will affect both PENDING and ACCEPTED/REJECTED PSC endpoints. 
For example, an ACCEPTED PSC endpoint will be moved to REJECTED if its project is added to the reject list.`, }, "region": { Type: schema.TypeString, @@ -196,6 +187,24 @@ updates of this resource.`, } } +func computeServiceAttachmentConsumerAcceptListsSchema() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "connection_limit": { + Type: schema.TypeInt, + Required: true, + Description: `The number of consumer forwarding rules the consumer project can +create.`, + }, + "project_id_or_num": { + Type: schema.TypeString, + Required: true, + Description: `A project that is allowed to connect to this service attachment.`, + }, + }, + } +} + func resourceComputeServiceAttachmentCreate(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) @@ -371,6 +380,14 @@ func resourceComputeServiceAttachmentRead(d *schema.ResourceData, meta interface return fmt.Errorf("Error reading ServiceAttachment: %s", err) } + region, err := tpgresource.GetRegion(d, config) + if err != nil { + return err + } + if err := d.Set("region", region); err != nil { + return fmt.Errorf("Error reading ServiceAttachment: %s", err) + } + if err := d.Set("name", flattenComputeServiceAttachmentName(res["name"], d, config)); err != nil { return fmt.Errorf("Error reading ServiceAttachment: %s", err) } @@ -407,9 +424,6 @@ func resourceComputeServiceAttachmentRead(d *schema.ResourceData, meta interface if err := d.Set("reconcile_connections", flattenComputeServiceAttachmentReconcileConnections(res["reconcileConnections"], d, config)); err != nil { return fmt.Errorf("Error reading ServiceAttachment: %s", err) } - if err := d.Set("region", flattenComputeServiceAttachmentRegion(res["region"], d, config)); err != nil { - return fmt.Errorf("Error reading ServiceAttachment: %s", err) - } if err := d.Set("self_link", tpgresource.ConvertSelfLinkToV1(res["selfLink"].(string))); err != nil { return fmt.Errorf("Error reading ServiceAttachment: %s", err) } @@ -582,10 +596,10 @@ func resourceComputeServiceAttachmentDelete(d *schema.ResourceData, meta interfa func resourceComputeServiceAttachmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/serviceAttachments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/serviceAttachments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -674,14 +688,14 @@ func flattenComputeServiceAttachmentConsumerAcceptLists(v interface{}, d *schema return v } l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) + transformed := schema.NewSet(schema.HashResource(computeServiceAttachmentConsumerAcceptListsSchema()), []interface{}{}) for _, raw := range l { original := raw.(map[string]interface{}) if len(original) < 1 { // Do not include empty json objects coming back from the api continue } - transformed = append(transformed, map[string]interface{}{ + transformed.Add(map[string]interface{}{ "project_id_or_num": flattenComputeServiceAttachmentConsumerAcceptListsProjectIdOrNum(original["projectIdOrNum"], d, config), "connection_limit": flattenComputeServiceAttachmentConsumerAcceptListsConnectionLimit(original["connectionLimit"], d, config), }) 
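The `consumer_accept_lists` change above moves the block from a `TypeList` to a `TypeSet` hashed with `schema.HashResource`, so server-side reordering of accepted projects no longer shows up as a diff. Below is a minimal standalone sketch of that behaviour, assuming only the two-field element schema shown in the hunk; the `acceptListElem` helper and the sample project IDs are illustrative, not provider code.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// acceptListElem mirrors the element schema used for consumer_accept_lists in
// the diff above (illustrative copy, not the provider's helper).
func acceptListElem() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"project_id_or_num": {Type: schema.TypeString, Required: true},
			"connection_limit":  {Type: schema.TypeInt, Required: true},
		},
	}
}

func main() {
	hash := schema.HashResource(acceptListElem())

	// Two API responses that differ only in ordering hash to the same set,
	// which is why the field no longer diffs when the server reorders entries.
	a := schema.NewSet(hash, []interface{}{
		map[string]interface{}{"project_id_or_num": "proj-a", "connection_limit": 4},
		map[string]interface{}{"project_id_or_num": "proj-b", "connection_limit": 2},
	})
	b := schema.NewSet(hash, []interface{}{
		map[string]interface{}{"project_id_or_num": "proj-b", "connection_limit": 2},
		map[string]interface{}{"project_id_or_num": "proj-a", "connection_limit": 4},
	})

	fmt.Println(a.Equal(b)) // true
}
```

Because each element is keyed by a hash of the whole object, changing either `project_id_or_num` or `connection_limit` replaces the element rather than updating it in place, which matches the set semantics the flattener and expander above now rely on.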
@@ -713,13 +727,6 @@ func flattenComputeServiceAttachmentReconcileConnections(v interface{}, d *schem return v } -func flattenComputeServiceAttachmentRegion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - return tpgresource.ConvertSelfLinkToV1(v.(string)) -} - func expandComputeServiceAttachmentName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -773,6 +780,7 @@ func expandComputeServiceAttachmentConsumerRejectLists(v interface{}, d tpgresou } func expandComputeServiceAttachmentConsumerAcceptLists(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + v = v.(*schema.Set).List() l := v.([]interface{}) req := make([]interface{}, 0, len(l)) for _, raw := range l { diff --git a/google/services/compute/resource_compute_shared_vpc_service_project.go b/google/services/compute/resource_compute_shared_vpc_service_project.go index e9e9a8139ec..03ee10617b8 100644 --- a/google/services/compute/resource_compute_shared_vpc_service_project.go +++ b/google/services/compute/resource_compute_shared_vpc_service_project.go @@ -11,6 +11,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "google.golang.org/api/googleapi" @@ -33,6 +34,10 @@ func ResourceComputeSharedVpcServiceProject() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "host_project": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_snapshot.go b/google/services/compute/resource_compute_snapshot.go index bcecc857397..3e1cf47b2c0 100644 --- a/google/services/compute/resource_compute_snapshot.go +++ b/google/services/compute/resource_compute_snapshot.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceComputeSnapshot() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -86,10 +92,13 @@ resource, this field is visible only if it has a non-empty value.`, Description: `An optional description of this resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this Snapshot.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this Snapshot. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "snapshot_encryption_key": { Type: schema.TypeList, @@ -197,6 +206,12 @@ RFC 4648 base64 to either encrypt or decrypt this resource.`, Computed: true, Description: `Size of the snapshot, specified in GB.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "label_fingerprint": { Type: schema.TypeString, Computed: true, @@ -227,6 +242,13 @@ snapshot using a customer-supplied encryption key.`, storage, this number is expected to change with snapshot creation/deletion.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -274,18 +296,18 @@ func resourceComputeSnapshotCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("storage_locations"); !tpgresource.IsEmptyValue(reflect.ValueOf(storageLocationsProp)) && (ok || !reflect.DeepEqual(v, storageLocationsProp)) { obj["storageLocations"] = storageLocationsProp } - labelsProp, err := expandComputeSnapshotLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeSnapshotLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelFingerprintProp)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := expandComputeSnapshotEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } sourceDiskProp, err := expandComputeSnapshotSourceDisk(d.Get("source_disk"), d, config) if err != nil { return err @@ -451,6 +473,12 @@ func resourceComputeSnapshotRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("label_fingerprint", flattenComputeSnapshotLabelFingerprint(res["labelFingerprint"], d, config)); err != nil { return fmt.Errorf("Error reading Snapshot: %s", err) } + if err := d.Set("terraform_labels", flattenComputeSnapshotTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Snapshot: %s", err) + } + if err := d.Set("effective_labels", flattenComputeSnapshotEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Snapshot: %s", err) + } if err := d.Set("source_disk", flattenComputeSnapshotSourceDisk(res["sourceDisk"], d, config)); err != nil { return fmt.Errorf("Error reading Snapshot: %s", err) } @@ -481,21 +509,21 @@ func resourceComputeSnapshotUpdate(d *schema.ResourceData, meta interface{}) err d.Partial(true) - if d.HasChange("labels") || d.HasChange("label_fingerprint") { + 
if d.HasChange("label_fingerprint") || d.HasChange("effective_labels") { obj := make(map[string]interface{}) - labelsProp, err := expandComputeSnapshotLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } labelFingerprintProp, err := expandComputeSnapshotLabelFingerprint(d.Get("label_fingerprint"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("label_fingerprint"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelFingerprintProp)) { obj["labelFingerprint"] = labelFingerprintProp } + labelsProp, err := expandComputeSnapshotEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ComputeBasePath}}projects/{{project}}/global/snapshots/{{name}}/setLabels") if err != nil { @@ -591,9 +619,9 @@ func resourceComputeSnapshotDelete(d *schema.ResourceData, meta interface{}) err func resourceComputeSnapshotImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/snapshots/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/snapshots/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -687,13 +715,43 @@ func flattenComputeSnapshotLicenses(v interface{}, d *schema.ResourceData, confi } func flattenComputeSnapshotLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenComputeSnapshotLabelFingerprint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } +func flattenComputeSnapshotTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenComputeSnapshotEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenComputeSnapshotSourceDisk(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v @@ -752,7 +810,11 @@ func expandComputeSnapshotStorageLocations(v interface{}, d tpgresource.Terrafor return v, nil } -func expandComputeSnapshotLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandComputeSnapshotLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandComputeSnapshotEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config 
*transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -763,10 +825,6 @@ func expandComputeSnapshotLabels(v interface{}, d tpgresource.TerraformResourceD return m, nil } -func expandComputeSnapshotLabelFingerprint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - func expandComputeSnapshotSourceDisk(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { f, err := tpgresource.ParseZonalFieldValue("disks", v.(string), "project", "zone", d, config, true) if err != nil { diff --git a/google/services/compute/resource_compute_snapshot_generated_test.go b/google/services/compute/resource_compute_snapshot_generated_test.go index 932c5202ce0..48a3c14be0e 100644 --- a/google/services/compute/resource_compute_snapshot_generated_test.go +++ b/google/services/compute/resource_compute_snapshot_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeSnapshot_snapshotBasicExample(t *testing.T) { ResourceName: "google_compute_snapshot.snapshot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"source_disk", "zone", "source_disk_encryption_key"}, + ImportStateVerifyIgnore: []string{"source_disk", "zone", "source_disk_encryption_key", "labels", "terraform_labels"}, }, }, }) @@ -101,7 +101,7 @@ func TestAccComputeSnapshot_snapshotChainnameExample(t *testing.T) { ResourceName: "google_compute_snapshot.snapshot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"source_disk", "zone", "source_disk_encryption_key"}, + ImportStateVerifyIgnore: []string{"source_disk", "zone", "source_disk_encryption_key", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_ssl_certificate.go b/google/services/compute/resource_compute_ssl_certificate.go index 1b1af789e54..c7c989a9868 100644 --- a/google/services/compute/resource_compute_ssl_certificate.go +++ b/google/services/compute/resource_compute_ssl_certificate.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -46,6 +47,10 @@ func ResourceComputeSslCertificate() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "certificate": { Type: schema.TypeString, @@ -343,9 +348,9 @@ func resourceComputeSslCertificateDelete(d *schema.ResourceData, meta interface{ func resourceComputeSslCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/sslCertificates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/sslCertificates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_ssl_policy.go b/google/services/compute/resource_compute_ssl_policy.go index ca49d2bbbdf..78810be92e8 100644 --- a/google/services/compute/resource_compute_ssl_policy.go +++ b/google/services/compute/resource_compute_ssl_policy.go @@ -72,6 +72,7 @@ func ResourceComputeSslPolicy() *schema.Resource { CustomizeDiff: customdiff.All( 
sslPolicyCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -467,9 +468,9 @@ func resourceComputeSslPolicyDelete(d *schema.ResourceData, meta interface{}) er func resourceComputeSslPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/sslPolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/sslPolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_subnetwork.go b/google/services/compute/resource_compute_subnetwork.go index 6f2c5200264..1a5c670f736 100644 --- a/google/services/compute/resource_compute_subnetwork.go +++ b/google/services/compute/resource_compute_subnetwork.go @@ -74,6 +74,7 @@ func ResourceComputeSubnetwork() *schema.Resource { CustomizeDiff: customdiff.All( resourceComputeSubnetworkSecondaryIpRangeSetStyleDiff, customdiff.ForceNewIfChange("ip_cidr_range", IsShrinkageIpCidr), + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1043,10 +1044,10 @@ func resourceComputeSubnetworkDelete(d *schema.ResourceData, meta interface{}) e func resourceComputeSubnetworkImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/subnetworks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/subnetworks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_grpc_proxy.go b/google/services/compute/resource_compute_target_grpc_proxy.go index 70925ed1c0a..0a61efb591e 100644 --- a/google/services/compute/resource_compute_target_grpc_proxy.go +++ b/google/services/compute/resource_compute_target_grpc_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeTargetGrpcProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -409,9 +414,9 @@ func resourceComputeTargetGrpcProxyDelete(d *schema.ResourceData, meta interface func resourceComputeTargetGrpcProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/targetGrpcProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/targetGrpcProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_http_proxy.go b/google/services/compute/resource_compute_target_http_proxy.go index 329f2b93760..f264198afbb 100644 --- a/google/services/compute/resource_compute_target_http_proxy.go +++ b/google/services/compute/resource_compute_target_http_proxy.go @@ 
-23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeTargetHttpProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -398,9 +403,9 @@ func resourceComputeTargetHttpProxyDelete(d *schema.ResourceData, meta interface func resourceComputeTargetHttpProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/targetHttpProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/targetHttpProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_https_proxy.go b/google/services/compute/resource_compute_target_https_proxy.go index 1ba46376ea7..d1a40467336 100644 --- a/google/services/compute/resource_compute_target_https_proxy.go +++ b/google/services/compute/resource_compute_target_https_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceComputeTargetHttpsProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -662,9 +667,9 @@ func resourceComputeTargetHttpsProxyDelete(d *schema.ResourceData, meta interfac func resourceComputeTargetHttpsProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/targetHttpsProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/targetHttpsProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_instance.go b/google/services/compute/resource_compute_target_instance.go index d6be8a5fede..4b3741d9c7c 100644 --- a/google/services/compute/resource_compute_target_instance.go +++ b/google/services/compute/resource_compute_target_instance.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceComputeTargetInstance() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "instance": { Type: schema.TypeString, @@ -328,10 +333,10 @@ func resourceComputeTargetInstanceDelete(d *schema.ResourceData, meta interface{ func resourceComputeTargetInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := 
tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/zones/(?P[^/]+)/targetInstances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/zones/(?P[^/]+)/targetInstances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_pool.go b/google/services/compute/resource_compute_target_pool.go index 0700a844a87..9fb589ebaf7 100644 --- a/google/services/compute/resource_compute_target_pool.go +++ b/google/services/compute/resource_compute_target_pool.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/googleapi" @@ -36,6 +37,11 @@ func ResourceComputeTargetPool() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, diff --git a/google/services/compute/resource_compute_target_ssl_proxy.go b/google/services/compute/resource_compute_target_ssl_proxy.go index e4c27e3c130..897486ee153 100644 --- a/google/services/compute/resource_compute_target_ssl_proxy.go +++ b/google/services/compute/resource_compute_target_ssl_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceComputeTargetSslProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backend_service": { Type: schema.TypeString, @@ -601,9 +606,9 @@ func resourceComputeTargetSslProxyDelete(d *schema.ResourceData, meta interface{ func resourceComputeTargetSslProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/targetSslProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/targetSslProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_target_tcp_proxy.go b/google/services/compute/resource_compute_target_tcp_proxy.go index e9288dc409a..10d1a3fac6d 100644 --- a/google/services/compute/resource_compute_target_tcp_proxy.go +++ b/google/services/compute/resource_compute_target_tcp_proxy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceComputeTargetTcpProxy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backend_service": { Type: schema.TypeString, @@ -437,9 +442,9 @@ func 
resourceComputeTargetTcpProxyDelete(d *schema.ResourceData, meta interface{ func resourceComputeTargetTcpProxyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/targetTcpProxies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/targetTcpProxies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_url_map.go b/google/services/compute/resource_compute_url_map.go index fb10faa6659..34e4d5a8d83 100644 --- a/google/services/compute/resource_compute_url_map.go +++ b/google/services/compute/resource_compute_url_map.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -49,6 +50,10 @@ func ResourceComputeUrlMap() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -3173,9 +3178,9 @@ func resourceComputeUrlMapDelete(d *schema.ResourceData, meta interface{}) error func resourceComputeUrlMapImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/global/urlMaps/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/global/urlMaps/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_vpn_gateway.go b/google/services/compute/resource_compute_vpn_gateway.go index 2da48a6b844..b5433dad4c1 100644 --- a/google/services/compute/resource_compute_vpn_gateway.go +++ b/google/services/compute/resource_compute_vpn_gateway.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceComputeVpnGateway() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -311,10 +316,10 @@ func resourceComputeVpnGatewayDelete(d *schema.ResourceData, meta interface{}) e func resourceComputeVpnGatewayImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/targetVpnGateways/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/targetVpnGateways/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_vpn_tunnel.go b/google/services/compute/resource_compute_vpn_tunnel.go index 0aa8191fb31..32f77999509 100644 --- a/google/services/compute/resource_compute_vpn_tunnel.go +++ 
b/google/services/compute/resource_compute_vpn_tunnel.go @@ -26,6 +26,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -148,6 +149,11 @@ func ResourceComputeVpnTunnel() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -633,10 +639,10 @@ func resourceComputeVpnTunnelDelete(d *schema.ResourceData, meta interface{}) er func resourceComputeVpnTunnelImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/regions/(?P[^/]+)/vpnTunnels/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/regions/(?P[^/]+)/vpnTunnels/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/compute/resource_compute_vpn_tunnel_generated_test.go b/google/services/compute/resource_compute_vpn_tunnel_generated_test.go index 1ddcb2e5244..33e1774a3a7 100644 --- a/google/services/compute/resource_compute_vpn_tunnel_generated_test.go +++ b/google/services/compute/resource_compute_vpn_tunnel_generated_test.go @@ -49,7 +49,7 @@ func TestAccComputeVpnTunnel_vpnTunnelBasicExample(t *testing.T) { ResourceName: "google_compute_vpn_tunnel.tunnel1", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"target_vpn_gateway", "vpn_gateway", "peer_external_gateway", "peer_gcp_gateway", "router", "shared_secret", "region"}, + ImportStateVerifyIgnore: []string{"target_vpn_gateway", "vpn_gateway", "peer_external_gateway", "peer_gcp_gateway", "router", "shared_secret", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/compute/resource_compute_vpn_tunnel_test.go b/google/services/compute/resource_compute_vpn_tunnel_test.go index e79a0228c4a..e6cfa97a4e5 100644 --- a/google/services/compute/resource_compute_vpn_tunnel_test.go +++ b/google/services/compute/resource_compute_vpn_tunnel_test.go @@ -33,7 +33,6 @@ func TestAccComputeVpnTunnel_regionFromGateway(t *testing.T) { ResourceName: "google_compute_vpn_tunnel.foobar", ImportState: true, ImportStateVerify: true, - ImportStateIdPrefix: fmt.Sprintf("%s/%s/", envvar.GetTestProjectFromEnv(), region), ImportStateVerifyIgnore: []string{"shared_secret", "detailed_status"}, }, }, diff --git a/google/services/compute/resource_usage_export_bucket.go b/google/services/compute/resource_usage_export_bucket.go index 48436da9fda..70bcc01f21c 100644 --- a/google/services/compute/resource_usage_export_bucket.go +++ b/google/services/compute/resource_usage_export_bucket.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/compute/v1" @@ -29,6 +30,10 @@ func ResourceProjectUsageBucket() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + 
tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "bucket_name": { Type: schema.TypeString, diff --git a/google/services/container/data_source_google_container_cluster_test.go b/google/services/container/data_source_google_container_cluster_test.go index 2ec530b4b5c..f5574761435 100644 --- a/google/services/container/data_source_google_container_cluster_test.go +++ b/google/services/container/data_source_google_container_cluster_test.go @@ -28,6 +28,7 @@ func TestAccContainerClusterDatasource_zonal(t *testing.T) { "enable_autopilot": {}, "enable_tpu": {}, "pod_security_policy_config.#": {}, + "deletion_protection": {}, }, ), ), @@ -54,6 +55,7 @@ func TestAccContainerClusterDatasource_regional(t *testing.T) { "enable_autopilot": {}, "enable_tpu": {}, "pod_security_policy_config.#": {}, + "deletion_protection": {}, }, ), ), @@ -68,6 +70,7 @@ resource "google_container_cluster" "kubes" { name = "tf-test-cluster-%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } data "google_container_cluster" "kubes" { @@ -83,6 +86,7 @@ resource "google_container_cluster" "kubes" { name = "tf-test-cluster-%s" location = "us-central1" initial_node_count = 1 + deletion_protection = false } data "google_container_cluster" "kubes" { diff --git a/google/services/container/node_config.go b/google/services/container/node_config.go index 47af0e251cc..19587937115 100644 --- a/google/services/container/node_config.go +++ b/google/services/container/node_config.go @@ -25,8 +25,8 @@ func schemaLoggingVariant() *schema.Schema { return &schema.Schema{ Type: schema.TypeString, Optional: true, + Computed: true, Description: `Type of logging agent that is used as the default value for node pools in the cluster. Valid values include DEFAULT and MAX_THROUGHPUT.`, - Default: "DEFAULT", ValidateFunc: validation.StringInSlice([]string{"DEFAULT", "MAX_THROUGHPUT"}, false), } } @@ -380,14 +380,9 @@ func schemaNodeConfig() *schema.Schema { }, "taint": { - Type: schema.TypeList, - Optional: true, - // Computed=true because GKE Sandbox will automatically add taints to nodes that can/cannot run sandboxed pods. - Computed: true, - ForceNew: true, - // Legacy config mode allows explicitly defining an empty taint. 
- // See https://www.terraform.io/docs/configuration/attr-as-blocks.html - ConfigMode: schema.SchemaConfigModeAttr, + Type: schema.TypeList, + Optional: true, + ForceNew: true, Description: `List of Kubernetes taints to be applied to each node.`, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -414,6 +409,31 @@ func schemaNodeConfig() *schema.Schema { }, }, + "effective_taints": { + Type: schema.TypeList, + Computed: true, + Description: `List of kubernetes taints applied to each node.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Computed: true, + Description: `Key for taint.`, + }, + "value": { + Type: schema.TypeString, + Computed: true, + Description: `Value for taint.`, + }, + "effect": { + Type: schema.TypeString, + Computed: true, + Description: `Effect for taint.`, + }, + }, + }, + }, + "workload_metadata_config": { Computed: true, Type: schema.TypeList, @@ -828,8 +848,10 @@ func expandNodeConfig(v interface{}) *container.NodeConfig { Value: data["value"].(string), Effect: data["effect"].(string), } + nodeTaints = append(nodeTaints, taint) } + nc.Taints = nodeTaints } @@ -994,13 +1016,24 @@ func flattenNodeConfigDefaults(c *container.NodeConfigDefaults) []map[string]int return result } -func flattenNodeConfig(c *container.NodeConfig) []map[string]interface{} { +// v == old state of `node_config` +func flattenNodeConfig(c *container.NodeConfig, v interface{}) []map[string]interface{} { config := make([]map[string]interface{}, 0, 1) if c == nil { return config } + // default to no prior taint state if there are any issues + oldTaints := []interface{}{} + oldNodeConfigSchemaContainer := v.([]interface{}) + if len(oldNodeConfigSchemaContainer) != 0 { + oldNodeConfigSchema := oldNodeConfigSchemaContainer[0].(map[string]interface{}) + if vt, ok := oldNodeConfigSchema["taint"]; ok && len(vt.([]interface{})) > 0 { + oldTaints = vt.([]interface{}) + } + } + config = append(config, map[string]interface{}{ "machine_type": c.MachineType, "disk_size_gb": c.DiskSizeGb, @@ -1023,7 +1056,8 @@ func flattenNodeConfig(c *container.NodeConfig) []map[string]interface{} { "spot": c.Spot, "min_cpu_platform": c.MinCpuPlatform, "shielded_instance_config": flattenShieldedInstanceConfig(c.ShieldedInstanceConfig), - "taint": flattenTaints(c.Taints), + "taint": flattenTaints(c.Taints, oldTaints), + "effective_taints": flattenEffectiveTaints(c.Taints), "workload_metadata_config": flattenWorkloadMetadataConfig(c.WorkloadMetadataConfig), "confidential_nodes": flattenConfidentialNodes(c.ConfidentialNodes), "boot_disk_kms_key": c.BootDiskKmsKey, @@ -1151,7 +1185,31 @@ func flattenGKEReservationAffinity(c *container.ReservationAffinity) []map[strin return result } -func flattenTaints(c []*container.NodeTaint) []map[string]interface{} { +// flattenTaints records the set of taints already present in state. +func flattenTaints(c []*container.NodeTaint, oldTaints []interface{}) []map[string]interface{} { + taintKeys := map[string]struct{}{} + for _, raw := range oldTaints { + data := raw.(map[string]interface{}) + taintKey := data["key"].(string) + taintKeys[taintKey] = struct{}{} + } + + result := []map[string]interface{}{} + for _, taint := range c { + if _, ok := taintKeys[taint.Key]; ok { + result = append(result, map[string]interface{}{ + "key": taint.Key, + "value": taint.Value, + "effect": taint.Effect, + }) + } + } + + return result +} + +// flattenEffectiveTaints records the complete set of taints returned from GKE. 
+func flattenEffectiveTaints(c []*container.NodeTaint) []map[string]interface{} { result := []map[string]interface{}{} for _, taint := range c { result = append(result, map[string]interface{}{ @@ -1160,6 +1218,7 @@ func flattenTaints(c []*container.NodeTaint) []map[string]interface{} { "effect": taint.Effect, }) } + return result } diff --git a/google/services/container/resource_container_cluster.go b/google/services/container/resource_container_cluster.go index 7f5c2e874b0..8f4a946ac12 100644 --- a/google/services/container/resource_container_cluster.go +++ b/google/services/container/resource_container_cluster.go @@ -192,8 +192,15 @@ func ResourceContainerCluster() *schema.Resource { Delete: schema.DefaultTimeout(40 * time.Minute), }, - SchemaVersion: 1, + SchemaVersion: 2, MigrateState: resourceContainerClusterMigrateState, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceContainerClusterResourceV1().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceContainerClusterUpgradeV1, + Version: 1, + }, + }, Importer: &schema.ResourceImporter{ State: resourceContainerClusterStateImporter, @@ -249,6 +256,13 @@ func ResourceContainerCluster() *schema.Resource { Description: `The list of zones in which the cluster's nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster's zone.`, }, + "deletion_protection": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Whether or not to allow Terraform to destroy the instance. Defaults to true. Unless this field is set to false in Terraform state, a terraform destroy or terraform apply that would delete the cluster will fail.`, + }, + "addons_config": { Type: schema.TypeList, Optional: true, @@ -713,21 +727,12 @@ func ResourceContainerCluster() *schema.Resource { Description: ` Description of the cluster.`, }, - "enable_binary_authorization": { - Type: schema.TypeBool, - Optional: true, - Default: false, - Deprecated: "Deprecated in favor of binary_authorization.", - Description: `Enable Binary Authorization for this cluster. If enabled, all container images will be validated by Google Binary Authorization.`, - ConflictsWith: []string{"enable_autopilot", "binary_authorization"}, - }, "binary_authorization": { Type: schema.TypeList, Optional: true, DiffSuppressFunc: BinaryAuthorizationDiffSuppress, MaxItems: 1, Description: "Configuration options for the Binary Authorization feature.", - ConflictsWith: []string{"enable_binary_authorization"}, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { @@ -1219,11 +1224,10 @@ func ResourceContainerCluster() *schema.Resource { }, "provider": { Type: schema.TypeString, - Default: "PROVIDER_UNSPECIFIED", Optional: true, ValidateFunc: validation.StringInSlice([]string{"PROVIDER_UNSPECIFIED", "CALICO"}, false), DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("PROVIDER_UNSPECIFIED"), - Description: `The selected network policy provider. 
Defaults to PROVIDER_UNSPECIFIED.`, + Description: `The selected network policy provider.`, }, }, }, @@ -1918,7 +1922,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er EnableKubernetesAlpha: d.Get("enable_kubernetes_alpha").(bool), IpAllocationPolicy: ipAllocationBlock, Autoscaling: expandClusterAutoscaling(d.Get("cluster_autoscaling"), d), - BinaryAuthorization: expandBinaryAuthorization(d.Get("binary_authorization"), d.Get("enable_binary_authorization").(bool)), + BinaryAuthorization: expandBinaryAuthorization(d.Get("binary_authorization")), Autopilot: &container.Autopilot{ Enabled: d.Get("enable_autopilot").(bool), WorkloadPolicyConfig: workloadPolicyConfig, @@ -2119,29 +2123,26 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er if err := d.Set("operation", op.Name); err != nil { return fmt.Errorf("Error setting operation: %s", err) } + return nil default: // leaving default case to ensure this is non blocking } + // Try a GET on the cluster so we can see the state in debug logs. This will help classify error states. clusterGetCall := config.NewContainerClient(userAgent).Projects.Locations.Clusters.Get(containerClusterFullName(project, location, clusterName)) if config.UserProjectOverride { clusterGetCall.Header().Add("X-Goog-User-Project", project) } + _, getErr := clusterGetCall.Do() if getErr != nil { log.Printf("[WARN] Cluster %s was created in an error state and not found", clusterName) d.SetId("") } - if deleteErr := cleanFailedContainerCluster(d, meta); deleteErr != nil { - log.Printf("[WARN] Unable to clean up cluster from failed creation: %s", deleteErr) - // Leave ID set as the cluster likely still exists and should not be removed from state yet. - } else { - log.Printf("[WARN] Verified failed creation of cluster %s was cleaned up", d.Id()) - d.SetId("") - } - // The resource didn't actually create + // Don't clear cluster id, this will taint the resource + log.Printf("[WARN] GKE cluster %s was created in an error state, and has been marked as tainted", clusterName) return waitErr } @@ -2286,14 +2287,8 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro d.SetId("") } - if deleteErr := cleanFailedContainerCluster(d, meta); deleteErr != nil { - log.Printf("[WARN] Unable to clean up cluster from failed creation: %s", deleteErr) - // Leave ID set as the cluster likely still exists and should not be removed from state yet. 
- } else { - log.Printf("[WARN] Verified failed creation of cluster %s was cleaned up", d.Id()) - d.SetId("") - } - // The resource didn't actually create + // Don't clear cluster id, this will taint the resource + log.Printf("[WARN] GKE cluster %s was created in an error state, and has been marked as tainted", clusterName) return waitErr } } @@ -2380,17 +2375,8 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("cluster_autoscaling", flattenClusterAutoscaling(cluster.Autoscaling)); err != nil { return err } - binauthz_enabled := d.Get("binary_authorization.0.enabled").(bool) - legacy_binauthz_enabled := d.Get("enable_binary_authorization").(bool) - if !binauthz_enabled { - if err := d.Set("enable_binary_authorization", cluster.BinaryAuthorization != nil && cluster.BinaryAuthorization.Enabled); err != nil { - return fmt.Errorf("Error setting enable_binary_authorization: %s", err) - } - } - if !legacy_binauthz_enabled { - if err := d.Set("binary_authorization", flattenBinaryAuthorization(cluster.BinaryAuthorization)); err != nil { - return err - } + if err := d.Set("binary_authorization", flattenBinaryAuthorization(cluster.BinaryAuthorization)); err != nil { + return err } if autopilot := cluster.Autopilot; autopilot != nil { if err := d.Set("enable_autopilot", autopilot.Enabled); err != nil { @@ -2448,7 +2434,7 @@ func resourceContainerClusterRead(d *schema.ResourceData, meta interface{}) erro return fmt.Errorf("Error setting default_max_pods_per_node: %s", err) } } - if err := d.Set("node_config", flattenNodeConfig(cluster.NodeConfig)); err != nil { + if err := d.Set("node_config", flattenNodeConfig(cluster.NodeConfig, d.Get("node_config"))); err != nil { return err } if err := d.Set("project", project); err != nil { @@ -2713,7 +2699,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er if d.HasChange("binary_authorization") { req := &container.UpdateClusterRequest{ Update: &container.ClusterUpdate{ - DesiredBinaryAuthorization: expandBinaryAuthorization(d.Get("binary_authorization"), d.Get("enable_binary_authorization").(bool)), + DesiredBinaryAuthorization: expandBinaryAuthorization(d.Get("binary_authorization")), }, } @@ -3663,6 +3649,9 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er } func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) error { + if d.Get("deletion_protection").(bool) { + return fmt.Errorf("Cannot destroy cluster because deletion_protection is set to true. Set it to false to proceed with instance deletion.") + } config := meta.(*transport_tpg.Config) userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) if err != nil { @@ -3733,52 +3722,6 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er return nil } -// cleanFailedContainerCluster deletes clusters that failed but were -// created in an error state. Similar to resourceContainerClusterDelete -// but implemented in separate function as it doesn't try to lock already -// locked cluster state, does different error handling, and doesn't do retries. 
-func cleanFailedContainerCluster(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return err - } - - location, err := tpgresource.GetLocation(d, config) - if err != nil { - return err - } - - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - clusterName := d.Get("name").(string) - fullName := containerClusterFullName(project, location, clusterName) - - log.Printf("[DEBUG] Cleaning up failed GKE cluster %s", d.Get("name").(string)) - clusterDeleteCall := config.NewContainerClient(userAgent).Projects.Locations.Clusters.Delete(fullName) - if config.UserProjectOverride { - clusterDeleteCall.Header().Add("X-Goog-User-Project", project) - } - op, err := clusterDeleteCall.Do() - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Container Cluster %q", d.Get("name").(string))) - } - - // Wait until it's deleted - waitErr := ContainerOperationWait(config, op, project, location, "deleting GKE cluster", userAgent, d.Timeout(schema.TimeoutDelete)) - if waitErr != nil { - return waitErr - } - - log.Printf("[INFO] GKE cluster %s has been deleted", d.Id()) - d.SetId("") - return nil -} - var containerClusterRestingStates = RestingStates{ "RUNNING": ReadyState, "DEGRADED": ErrorState, @@ -4318,11 +4261,11 @@ func expandNotificationConfig(configured interface{}) *container.NotificationCon } } -func expandBinaryAuthorization(configured interface{}, legacy_enabled bool) *container.BinaryAuthorization { +func expandBinaryAuthorization(configured interface{}) *container.BinaryAuthorization { l := configured.([]interface{}) if len(l) == 0 || l[0] == nil { return &container.BinaryAuthorization{ - Enabled: legacy_enabled, + Enabled: false, ForceSendFields: []string{"Enabled"}, } } @@ -5467,6 +5410,11 @@ func resourceContainerClusterStateImporter(d *schema.ResourceData, meta interfac if err := d.Set("location", location); err != nil { return nil, fmt.Errorf("Error setting location: %s", err) } + + if err := d.Set("deletion_protection", true); err != nil { + return nil, fmt.Errorf("Error setting deletion_protection: %s", err) + } + if _, err := containerClusterAwaitRestingState(config, project, location, clusterName, userAgent, d.Timeout(schema.TimeoutCreate)); err != nil { return nil, err } diff --git a/google/services/container/resource_container_cluster_migrate.go b/google/services/container/resource_container_cluster_migrate.go index eda75ba9a04..7166a1281e0 100644 --- a/google/services/container/resource_container_cluster_migrate.go +++ b/google/services/container/resource_container_cluster_migrate.go @@ -23,6 +23,9 @@ func resourceContainerClusterMigrateState( case 0: log.Println("[INFO] Found Container Cluster State v0; migrating to v1") return migrateClusterStateV0toV1(is) + case 1: + log.Println("[INFO] Found Container Cluster State v1 in legacy migration function; returning as non-op") + return is, nil default: return is, fmt.Errorf("Unexpected schema version: %d", v) } diff --git a/google/services/container/resource_container_cluster_migratev1.go b/google/services/container/resource_container_cluster_migratev1.go new file mode 100644 index 00000000000..915a71eec2f --- /dev/null +++ b/google/services/container/resource_container_cluster_migratev1.go @@ -0,0 +1,1627 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package container + +import ( + "context" + "fmt" + "log" + "regexp" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-google/google/tpgresource" + "github.com/hashicorp/terraform-provider-google/google/verify" +) + +func ResourceContainerClusterUpgradeV1(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + log.Printf("[DEBUG] Applying container cluster migration to schema version V2.") + + rawState["deletion_protection"] = true + return rawState, nil +} + +func resourceContainerClusterResourceV1() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the cluster, unique within the project and location.`, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if len(value) > 40 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 40 characters", k)) + } + if !regexp.MustCompile("^[a-z0-9-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q can only contain lowercase letters, numbers and hyphens", k)) + } + if !regexp.MustCompile("^[a-z]").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must start with a letter", k)) + } + if !regexp.MustCompile("[a-z0-9]$").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must end with a number or a letter", k)) + } + return + }, + }, + + "operation": { + Type: schema.TypeString, + Computed: true, + }, + + "location": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The location (region or zone) in which the cluster master will be created, as well as the default node location. If you specify a zone (such as us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region, and with default node locations in those zones as well.`, + }, + + "node_locations": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The list of zones in which the cluster's nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster's zone.`, + }, + + "addons_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The configuration for addons supported by GKE.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "http_load_balancing": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster. 
It is enabled by default; set disabled = true to disable.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "horizontal_pod_autoscaling": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Horizontal Pod Autoscaling addon, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods. It ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service. It is enabled by default; set disabled = true to disable.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "network_policy_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `Whether we should enable the network policy addon for the master. This must be enabled in order to enable network policy for the nodes. To enable this, you must also define a network_policy block, otherwise nothing will happen. It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; set disabled = false to enable.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "gcp_filestore_csi_driver_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Filestore CSI driver addon, which allows the usage of filestore instance as volumes. Defaults to disabled; set enabled = true to enable.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "cloudrun_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the CloudRun addon. It is disabled by default. Set disabled = false to enable.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + }, + "load_balancer_type": { + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"LOAD_BALANCER_TYPE_INTERNAL"}, false), + Optional: true, + }, + }, + }, + }, + "dns_cache_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the NodeLocal DNSCache addon. It is disabled by default. Set enabled = true to enable.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "gce_persistent_disk_csi_driver_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. Set enabled = true to enable. 
The Compute Engine persistent disk CSI Driver is enabled by default on newly created clusters for the following versions: Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "gke_backup_agent_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the Backup for GKE Agent addon. It is disabled by default. Set enabled = true to enable.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "gcs_fuse_csi_driver_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The status of the GCS Fuse CSI driver addon, which allows the usage of gcs bucket as volumes. Defaults to disabled; set enabled = true to enable.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "config_connector_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + AtLeastOneOf: addonsConfigKeys, + MaxItems: 1, + Description: `The of the Config Connector addon.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + }, + }, + }, + + "cluster_autoscaling": { + Type: schema.TypeList, + MaxItems: 1, + // This field is Optional + Computed because we automatically set the + // enabled value to false if the block is not returned in API responses. + Optional: true, + Computed: true, + Description: `Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster's workload. See the guide to using Node Auto-Provisioning for more details.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + ConflictsWith: []string{"enable_autopilot"}, + Description: `Whether node auto-provisioning is enabled. Resource limits for cpu and memory must be defined to enable node auto-provisioning.`, + }, + "resource_limits": { + Type: schema.TypeList, + Optional: true, + ConflictsWith: []string{"enable_autopilot"}, + DiffSuppressFunc: suppressDiffForAutopilot, + Description: `Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "resource_type": { + Type: schema.TypeString, + Required: true, + Description: `The type of the resource. For example, cpu and memory. 
See the guide to using Node Auto-Provisioning for a list of types.`, + }, + "minimum": { + Type: schema.TypeInt, + Optional: true, + Description: `Minimum amount of the resource in the cluster.`, + }, + "maximum": { + Type: schema.TypeInt, + Optional: true, + Description: `Maximum amount of the resource in the cluster.`, + }, + }, + }, + }, + "auto_provisioning_defaults": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Contains defaults for a node pool created by NAP.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "oauth_scopes": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + DiffSuppressFunc: containerClusterAddedScopesSuppress, + Description: `Scopes that are used by NAP when creating node pools.`, + }, + "service_account": { + Type: schema.TypeString, + Optional: true, + Default: "default", + Description: `The Google Cloud Platform Service Account to be used by the node VMs.`, + }, + "disk_size": { + Type: schema.TypeInt, + Optional: true, + Default: 100, + Description: `Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB.`, + DiffSuppressFunc: suppressDiffForAutopilot, + ValidateFunc: validation.IntAtLeast(10), + }, + "disk_type": { + Type: schema.TypeString, + Optional: true, + Default: "pd-standard", + Description: `Type of the disk attached to each node.`, + DiffSuppressFunc: suppressDiffForAutopilot, + ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd", "pd-balanced"}, false), + }, + "image_type": { + Type: schema.TypeString, + Optional: true, + Default: "COS_CONTAINERD", + Description: `The default image type used by NAP once a new node pool is being created.`, + DiffSuppressFunc: suppressDiffForAutopilot, + ValidateFunc: validation.StringInSlice([]string{"COS_CONTAINERD", "COS", "UBUNTU_CONTAINERD", "UBUNTU"}, false), + }, + "min_cpu_platform": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("automatic"), + Description: `Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. 
Applicable values are the friendly names of CPU platforms, such as Intel Haswell.`, + }, + "boot_disk_kms_key": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool.`, + }, + "shielded_instance_config": { + Type: schema.TypeList, + Optional: true, + Description: `Shielded Instance options.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_secure_boot": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Defines whether the instance has Secure Boot enabled.`, + AtLeastOneOf: []string{ + "cluster_autoscaling.0.auto_provisioning_defaults.0.shielded_instance_config.0.enable_secure_boot", + "cluster_autoscaling.0.auto_provisioning_defaults.0.shielded_instance_config.0.enable_integrity_monitoring", + }, + }, + "enable_integrity_monitoring": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Defines whether the instance has integrity monitoring enabled.`, + AtLeastOneOf: []string{ + "cluster_autoscaling.0.auto_provisioning_defaults.0.shielded_instance_config.0.enable_secure_boot", + "cluster_autoscaling.0.auto_provisioning_defaults.0.shielded_instance_config.0.enable_integrity_monitoring", + }, + }, + }, + }, + }, + "management": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `NodeManagement configuration for this NodePool.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "auto_upgrade": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `Specifies whether node auto-upgrade is enabled for the node pool. If enabled, node auto-upgrade helps keep the nodes in your node pool up to date with the latest release version of Kubernetes.`, + }, + "auto_repair": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `Specifies whether the node auto-repair is enabled for the node pool. 
If enabled, the nodes in this node pool will be monitored and, if they fail health checks too many times, an automatic repair action will be triggered.`, + }, + "upgrade_options": { + Type: schema.TypeList, + Computed: true, + Description: `Specifies the Auto Upgrade knobs for the node pool.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "auto_upgrade_start_time": { + Type: schema.TypeString, + Computed: true, + Description: `This field is set when upgrades are about to commence with the approximate start time for the upgrades, in RFC3339 text format.`, + }, + "description": { + Type: schema.TypeString, + Computed: true, + Description: `This field is set when upgrades are about to commence with the description of the upgrade.`, + }, + }, + }, + }, + }, + }, + }, + "upgrade_settings": { + Type: schema.TypeList, + Optional: true, + Description: `Specifies the upgrade settings for NAP created node pools.`, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "max_surge": { + Type: schema.TypeInt, + Optional: true, + Description: `The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.`, + }, + "max_unavailable": { + Type: schema.TypeInt, + Optional: true, + Description: `The maximum number of nodes that can be simultaneously unavailable during the upgrade process.`, + }, + "strategy": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `Update strategy of the node pool.`, + ValidateFunc: validation.StringInSlice([]string{"NODE_POOL_UPDATE_STRATEGY_UNSPECIFIED", "BLUE_GREEN", "SURGE"}, false), + }, + "blue_green_settings": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Settings for blue-green upgrade strategy.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "node_pool_soak_duration": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `Time needed after draining entire blue pool. After this period, blue pool will be cleaned up. + + A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".`, + }, + "standard_rollout_policy": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Standard policy for the blue-green upgrade.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "batch_percentage": { + Type: schema.TypeFloat, + Optional: true, + Computed: true, + ValidateFunc: validation.FloatBetween(0.0, 1.0), + ExactlyOneOf: []string{ + "cluster_autoscaling.0.auto_provisioning_defaults.0.upgrade_settings.0.blue_green_settings.0.standard_rollout_policy.0.batch_percentage", + "cluster_autoscaling.0.auto_provisioning_defaults.0.upgrade_settings.0.blue_green_settings.0.standard_rollout_policy.0.batch_node_count", + }, + Description: `Percentage of the blue pool nodes to drain in a batch. 
The range of this field should be (0.0, 1.0].`, + }, + "batch_node_count": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ExactlyOneOf: []string{ + "cluster_autoscaling.0.auto_provisioning_defaults.0.upgrade_settings.0.blue_green_settings.0.standard_rollout_policy.0.batch_percentage", + "cluster_autoscaling.0.auto_provisioning_defaults.0.upgrade_settings.0.blue_green_settings.0.standard_rollout_policy.0.batch_node_count", + }, + Description: `Number of blue nodes to drain in a batch.`, + }, + "batch_soak_duration": { + Type: schema.TypeString, + Optional: true, + Default: "0s", + Description: `Soak time after each batch gets drained. + + A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + + "cluster_ipv4_cidr": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.OrEmpty(verify.ValidateRFC1918Network(8, 32)), + ConflictsWith: []string{"ip_allocation_policy"}, + Description: `The IP address range of the Kubernetes pods in this cluster in CIDR notation (e.g. 10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8. This field will only work for routes-based clusters, where ip_allocation_policy is not defined.`, + }, + + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: ` Description of the cluster.`, + }, + + "binary_authorization": { + Type: schema.TypeList, + Optional: true, + DiffSuppressFunc: BinaryAuthorizationDiffSuppress, + MaxItems: 1, + Description: "Configuration options for the Binary Authorization feature.", + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Deprecated: "Deprecated in favor of evaluation_mode.", + Description: "Enable Binary Authorization for this cluster.", + ConflictsWith: []string{"enable_autopilot", "binary_authorization.0.evaluation_mode"}, + }, + "evaluation_mode": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"DISABLED", "PROJECT_SINGLETON_POLICY_ENFORCE"}, false), + Description: "Mode of operation for Binary Authorization policy evaluation.", + ConflictsWith: []string{"binary_authorization.0.enabled"}, + }, + }, + }, + }, + + "enable_kubernetes_alpha": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Default: false, + Description: `Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days.`, + }, + + "enable_k8s_beta_apis": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Configuration for Kubernetes Beta APIs.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled_apis": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Enabled Kubernetes Beta APIs.`, + }, + }, + }, + }, + + "enable_tpu": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Whether to enable Cloud TPU resources in this cluster.`, + }, + + "enable_legacy_abac": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether the ABAC authorizer is enabled for this cluster. 
When enabled, identities in the system, including service accounts, nodes, and controllers, will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to false.`, + }, + + "enable_shielded_nodes": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Enable Shielded Nodes features on all nodes in this cluster. Defaults to true.`, + ConflictsWith: []string{"enable_autopilot"}, + }, + + "enable_autopilot": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Enable Autopilot for this cluster.`, + // ConflictsWith: many fields, see https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#comparison. The conflict is only set one-way, on other fields w/ this field. + }, + + "allow_net_admin": { + Type: schema.TypeBool, + Optional: true, + Description: `Enable NET_ADMIN for this cluster.`, + }, + + "authenticator_groups_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration for the Google Groups for GKE feature.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "security_group": { + Type: schema.TypeString, + Required: true, + Description: `The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com.`, + }, + }, + }, + }, + + "initial_node_count": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The number of nodes to create in this cluster's default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you're using google_container_node_pool objects with no default node pool, you'll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.`, + }, + + "logging_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Logging configuration for the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_components": { + Type: schema.TypeList, + Required: true, + Description: `GKE components exposing logs. Valid values include SYSTEM_COMPONENTS, APISERVER, CONTROLLER_MANAGER, SCHEDULER, and WORKLOADS.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"SYSTEM_COMPONENTS", "APISERVER", "CONTROLLER_MANAGER", "SCHEDULER", "WORKLOADS"}, false), + }, + }, + }, + }, + }, + + "logging_service": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{"logging.googleapis.com", "logging.googleapis.com/kubernetes", "none"}, false), + Description: `The logging service that the cluster should write logs to. Available options include logging.googleapis.com(Legacy Stackdriver), logging.googleapis.com/kubernetes(Stackdriver Kubernetes Engine Logging), and none. Defaults to logging.googleapis.com/kubernetes.`, + }, + + "maintenance_policy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The maintenance policy to use for the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "daily_maintenance_window": { + Type: schema.TypeList, + Optional: true, + ExactlyOneOf: []string{ + "maintenance_policy.0.daily_maintenance_window", + "maintenance_policy.0.recurring_window", + }, + MaxItems: 1, + Description: `Time window specified for daily maintenance operations. 
Specify start_time in RFC3339 format "HH:MM”, where HH : [00-23] and MM : [00-59] GMT.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start_time": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateRFC3339Time, + DiffSuppressFunc: tpgresource.Rfc3339TimeDiffSuppress, + }, + "duration": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "recurring_window": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ExactlyOneOf: []string{ + "maintenance_policy.0.daily_maintenance_window", + "maintenance_policy.0.recurring_window", + }, + Description: `Time window for recurring maintenance operations.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start_time": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateRFC3339Date, + }, + "end_time": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateRFC3339Date, + }, + "recurrence": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: rfc5545RecurrenceDiffSuppress, + }, + }, + }, + }, + "maintenance_exclusion": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 20, + Description: `Exceptions to maintenance window. Non-emergency maintenance should not occur in these windows.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "exclusion_name": { + Type: schema.TypeString, + Required: true, + }, + "start_time": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateRFC3339Date, + }, + "end_time": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidateRFC3339Date, + }, + "exclusion_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Maintenance exclusion related options.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "scope": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"NO_UPGRADES", "NO_MINOR_UPGRADES", "NO_MINOR_OR_NODE_UPGRADES"}, false), + Description: `The scope of automatic upgrades to restrict in the exclusion window.`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + + "security_posture_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Computed: true, + Description: `Defines the config needed to enable/disable features for the Security Posture API`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mode": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{"DISABLED", "BASIC", "MODE_UNSPECIFIED"}, false), + Description: `Sets the mode of the Kubernetes security posture API's off-cluster features. Available options include DISABLED and BASIC.`, + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("MODE_UNSPECIFIED"), + }, + "vulnerability_mode": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{"VULNERABILITY_DISABLED", "VULNERABILITY_BASIC", "VULNERABILITY_MODE_UNSPECIFIED"}, false), + Description: `Sets the mode of the Kubernetes security posture API's workload vulnerability scanning. 
Available options include VULNERABILITY_DISABLED and VULNERABILITY_BASIC.`, + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("VULNERABILITY_MODE_UNSPECIFIED"), + }, + }, + }, + }, + "monitoring_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Monitoring configuration for the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_components": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Description: `GKE components exposing metrics. Valid values include SYSTEM_COMPONENTS, APISERVER, SCHEDULER, CONTROLLER_MANAGER, STORAGE, HPA, POD, DAEMONSET, DEPLOYMENT and STATEFULSET.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "managed_prometheus": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration for Google Cloud Managed Services for Prometheus.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether or not the managed collection is enabled.`, + }, + }, + }, + }, + "advanced_datapath_observability_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 2, + Description: `Configuration of Advanced Datapath Observability features.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_metrics": { + Type: schema.TypeBool, + Required: true, + Description: `Whether or not the advanced datapath metrics are enabled.`, + }, + "relay_mode": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `Mode used to make Relay available.`, + ValidateFunc: validation.StringInSlice([]string{"DISABLED", "INTERNAL_VPC_LB", "EXTERNAL_LB"}, false), + }, + }, + }, + }, + }, + }, + }, + + "notification_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The notification config for sending cluster upgrade notifications`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "pubsub": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Description: `Notification config for Cloud Pub/Sub`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether or not the notification config is enabled`, + }, + "topic": { + Type: schema.TypeString, + Optional: true, + Description: `The pubsub topic to push upgrade notifications to. Must be in the same project as the cluster. Must be in the format: projects/{project}/topics/{topic}.`, + }, + "filter": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Allows filtering to one or more specific event types. If event types are present, those and only those event types will be transmitted to the cluster. Other types will be skipped. If no filter is specified, or no event types are present, all event types will be sent`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "event_type": { + Type: schema.TypeList, + Required: true, + Description: `Can be used to filter what notifications are sent. 
Valid values include UPGRADE_AVAILABLE_EVENT, UPGRADE_EVENT and SECURITY_BULLETIN_EVENT`, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"UPGRADE_AVAILABLE_EVENT", "UPGRADE_EVENT", "SECURITY_BULLETIN_EVENT"}, false), + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + + "confidential_nodes": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Description: `Configuration for the confidential nodes feature, which makes nodes run on confidential VMs. Warning: This configuration can't be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `Whether Confidential Nodes feature is enabled for all nodes in this cluster.`, + }, + }, + }, + }, + + "master_auth": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Computed: true, + Description: `The authentication information for accessing the Kubernetes master. Some values in this block are only returned by the API if your service account has permission to get credentials for your GKE cluster. If you see an unexpected diff unsetting your client cert, ensure you have the container.clusters.getCredentials permission.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "client_certificate_config": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Description: `Whether client certificate authorization is enabled for this cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "issue_client_certificate": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `Whether client certificate authorization is enabled for this cluster.`, + }, + }, + }, + }, + + "client_certificate": { + Type: schema.TypeString, + Computed: true, + Description: `Base64 encoded public certificate used by clients to authenticate to the cluster endpoint.`, + }, + + "client_key": { + Type: schema.TypeString, + Computed: true, + Sensitive: true, + Description: `Base64 encoded private key used by clients to authenticate to the cluster endpoint.`, + }, + + "cluster_ca_certificate": { + Type: schema.TypeString, + Computed: true, + Description: `Base64 encoded public certificate that is the root of trust for the cluster.`, + }, + }, + }, + }, + + "master_authorized_networks_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: masterAuthorizedNetworksConfig, + Description: `The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists).`, + }, + + "min_master_version": { + Type: schema.TypeString, + Optional: true, + Description: `The minimum version of the master. GKE will auto-update the master to new versions, so this does not guarantee the current master version--use the read-only master_version field to obtain that. 
If unset, the cluster's version will be set by GKE to the version of the most recent official release (which is not necessarily the latest version).`, + }, + + "monitoring_service": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice([]string{"monitoring.googleapis.com", "monitoring.googleapis.com/kubernetes", "none"}, false), + Description: `The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com(Legacy Stackdriver), monitoring.googleapis.com/kubernetes(Stackdriver Kubernetes Engine Monitoring), and none. Defaults to monitoring.googleapis.com/kubernetes.`, + }, + + "network": { + Type: schema.TypeString, + Optional: true, + Default: "default", + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine network to which the cluster is connected. For Shared VPC, set this to the self link of the shared network.`, + }, + + "network_policy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Configuration options for the NetworkPolicy feature.`, + ConflictsWith: []string{"enable_autopilot"}, + DiffSuppressFunc: containerClusterNetworkPolicyDiffSuppress, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether network policy is enabled on the cluster.`, + }, + "provider": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"PROVIDER_UNSPECIFIED", "CALICO"}, false), + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("PROVIDER_UNSPECIFIED"), + Description: `The selected network policy provider.`, + }, + }, + }, + }, + + "node_config": clusterSchemaNodeConfig(), + + "node_pool": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, // TODO: Add ability to add/remove nodePools + Elem: &schema.Resource{ + Schema: schemaNodePool, + }, + Description: `List of node pools associated with this cluster. See google_container_node_pool for schema. Warning: node pools defined inside a cluster can't be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. 
Unless you absolutely need the ability to say "these are the only node pools associated with this cluster", use the google_container_node_pool resource instead of this property.`, + ConflictsWith: []string{"enable_autopilot"}, + }, + + "node_pool_defaults": clusterSchemaNodePoolDefaults(), + + "node_pool_auto_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Node pool configs that apply to all auto-provisioned node pools in autopilot clusters and node auto-provisioning enabled clusters.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "network_tags": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Collection of Compute Engine network tags that can be applied to a node's underlying VM instance.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "tags": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `List of network tags applied to auto-provisioned node pools.`, + }, + }, + }, + }, + }, + }, + }, + + "node_version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The Kubernetes version on the nodes. Must either be unset or set to the same value as min_master_version on create. Defaults to the default version set by GKE which is not necessarily the latest version. This only affects nodes in the default node pool. While a fuzzy version can be specified, it's recommended that you specify explicit versions as Terraform will see spurious diffs when fuzzy versions are used. See the google_container_engine_versions data source's version_prefix field to approximate fuzzy versions in a Terraform-compatible way. To update nodes in other node pools, use the version attribute on the node pool.`, + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, + }, + + "subnetwork": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine subnetwork in which the cluster's instances are launched.`, + }, + + "self_link": { + Type: schema.TypeString, + Computed: true, + Description: `Server-defined URL for the resource.`, + }, + + "endpoint": { + Type: schema.TypeString, + Computed: true, + Description: `The IP address of this cluster's Kubernetes master.`, + }, + + "master_version": { + Type: schema.TypeString, + Computed: true, + Description: `The current version of the master in the cluster. This may be different than the min_master_version set in the config if the master has been updated by GKE.`, + }, + + "services_ipv4_cidr": { + Type: schema.TypeString, + Computed: true, + Description: `The IP address range of the Kubernetes services in this cluster, in CIDR notation (e.g. 1.2.3.4/29). Service addresses are typically put in the last /16 from the container CIDR.`, + }, + + "ip_allocation_policy": { + Type: schema.TypeList, + MaxItems: 1, + ForceNew: true, + Computed: true, + Optional: true, + ConflictsWith: []string{"cluster_ipv4_cidr"}, + Description: `Configuration of cluster IP allocation for VPC-native clusters. 
Adding this block enables IP aliasing, making the cluster VPC-native instead of routes-based.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + // GKE creates/deletes secondary ranges in VPC + "cluster_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: ipAllocationRangeFields, + DiffSuppressFunc: tpgresource.CidrOrSizeDiffSuppress, + Description: `The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.`, + }, + + "services_ipv4_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: ipAllocationRangeFields, + DiffSuppressFunc: tpgresource.CidrOrSizeDiffSuppress, + Description: `The IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.`, + }, + + // User manages secondary ranges manually + "cluster_secondary_range_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: ipAllocationCidrBlockFields, + Description: `The name of the existing secondary range in the cluster's subnetwork to use for pod IP addresses. Alternatively, cluster_ipv4_cidr_block can be used to automatically create a GKE-managed one.`, + }, + + "services_secondary_range_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ConflictsWith: ipAllocationCidrBlockFields, + Description: `The name of the existing secondary range in the cluster's subnetwork to use for service ClusterIPs. Alternatively, services_ipv4_cidr_block can be used to automatically create a GKE-managed one.`, + }, + + "stack_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: "IPV4", + ValidateFunc: validation.StringInSlice([]string{"IPV4", "IPV4_IPV6"}, false), + Description: `The IP Stack type of the cluster. Choose between IPV4 and IPV4_IPV6. Defaults to IPV4 if not set.`, + }, + "pod_cidr_overprovision_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + ForceNew: true, + MaxItems: 1, + Description: `Configuration for cluster level pod cidr overprovision. 
Default is disabled=false.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + }, + "additional_pod_ranges_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Description: `AdditionalPodRangesConfig is the configuration for additional pod secondary ranges supporting the ClusterUpdate message.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "pod_range_names": { + Type: schema.TypeSet, + MinItems: 1, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Name for pod secondary ipv4 range which has the actual range defined ahead.`, + }, + }, + }, + }, + }, + }, + }, + + "networking_mode": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"VPC_NATIVE", "ROUTES"}, false), + Description: `Determines whether alias IPs or routes will be used for pod IPs in the cluster.`, + }, + + "remove_default_node_pool": { + Type: schema.TypeBool, + Optional: true, + Description: `If true, deletes the default node pool upon cluster creation. If you're using google_container_node_pool resources with no default node pool, this should be set to true, alongside setting initial_node_count to at least 1.`, + ConflictsWith: []string{"enable_autopilot"}, + }, + + "private_cluster_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `Configuration for private clusters, clusters with private nodes.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + // enable_private_endpoint is orthogonal to private_endpoint_subnetwork. + // User can create a private_cluster_config block without including + // either one of those two fields. Both fields are optional. + // At the same time, we use 'AtLeastOneOf' to prevent an empty block + // like 'private_cluster_config{}' + "enable_private_endpoint": { + Type: schema.TypeBool, + Optional: true, + AtLeastOneOf: privateClusterConfigKeys, + DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `When true, the cluster's private endpoint is used as the cluster endpoint and access through the public endpoint is disabled. When false, either endpoint can be used.`, + }, + "enable_private_nodes": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + AtLeastOneOf: privateClusterConfigKeys, + DiffSuppressFunc: containerClusterPrivateClusterConfigSuppress, + Description: `Enables the private cluster feature, creating a private endpoint on the cluster. In a private cluster, nodes only have RFC 1918 private addresses and communicate with the master's private endpoint via private networking.`, + }, + "master_ipv4_cidr_block": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + AtLeastOneOf: privateClusterConfigKeys, + ValidateFunc: verify.OrEmpty(validation.IsCIDRNetwork(28, 28)), + Description: `The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning private IP addresses to the cluster master(s) and the ILB VIP. This range must not overlap with any other ranges in use within the cluster's network, and it must be a /28 subnet. See Private Cluster Limitations for more details. 
This field only applies to private clusters, when enable_private_nodes is true.`, + }, + "peering_name": { + Type: schema.TypeString, + Computed: true, + Description: `The name of the peering between this cluster and the Google owned VPC.`, + }, + "private_endpoint": { + Type: schema.TypeString, + Computed: true, + Description: `The internal IP address of this cluster's master endpoint.`, + }, + "private_endpoint_subnetwork": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + AtLeastOneOf: privateClusterConfigKeys, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `Subnetwork in cluster's network where master's endpoint will be provisioned.`, + }, + "public_endpoint": { + Type: schema.TypeString, + Computed: true, + Description: `The external IP address of this cluster's master endpoint.`, + }, + "master_global_access_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + AtLeastOneOf: privateClusterConfigKeys, + Description: "Controls cluster master global access settings.", + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether the cluster master is accessible globally or not.`, + }, + }, + }, + }, + }, + }, + }, + + "resource_labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The GCE resource labels (a map of key/value pairs) to be applied to the cluster.`, + }, + + "label_fingerprint": { + Type: schema.TypeString, + Computed: true, + Description: `The fingerprint of the set of labels for this cluster.`, + }, + + "default_max_pods_per_node": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The default maximum number of pods per node in this cluster. This doesn't work on "routes-based" clusters, clusters that don't have IP Aliasing enabled.`, + ConflictsWith: []string{"enable_autopilot"}, + }, + + "vertical_pod_autoscaling": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Enables vertical pod autoscaling.`, + }, + }, + }, + }, + "workload_identity_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + // Computed is unsafe to remove- this API may return `"workloadIdentityConfig": {},` or omit the key entirely + // and both will be valid. Note that we don't handle the case where the API returns nothing & the user has defined + // workload_identity_config today. 
+ Computed: true, + Description: `Configuration for the use of Kubernetes Service Accounts in GCP IAM policies.`, + ConflictsWith: []string{"enable_autopilot"}, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "workload_pool": { + Type: schema.TypeString, + Optional: true, + Description: "The workload pool to attach all Kubernetes service accounts to.", + }, + }, + }, + }, + + "service_external_ips_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `If set, and enabled=true, services with external ips field will not be blocked`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `When enabled, services with external ips specified will be allowed.`, + }, + }, + }, + }, + + "mesh_certificates": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `If set, and enable_certificates=true, the GKE Workload Identity Certificates controller and node agent will be deployed in the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_certificates": { + Type: schema.TypeBool, + Required: true, + Description: `When enabled the GKE Workload Identity Certificates controller and node agent will be deployed in the cluster.`, + }, + }, + }, + }, + + "database_encryption": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Application-layer Secrets Encryption settings. The object format is {state = string, key_name = string}. Valid values of state are: "ENCRYPTED"; "DECRYPTED". key_name is the name of a CloudKMS key.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "state": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"ENCRYPTED", "DECRYPTED"}, false), + Description: `ENCRYPTED or DECRYPTED.`, + }, + "key_name": { + Type: schema.TypeString, + Optional: true, + Description: `The key to use to encrypt/decrypt secrets.`, + }, + }, + }, + }, + + "release_channel": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration options for the Release channel feature, which provide more control over automatic upgrades of your GKE clusters. Note that removing this field from your config will not unenroll it. Instead, use the "UNSPECIFIED" channel.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "channel": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"UNSPECIFIED", "RAPID", "REGULAR", "STABLE"}, false), + Description: `The selected release channel. Accepted values are: +* UNSPECIFIED: Not set. +* RAPID: Weekly upgrade cadence; Early testers and developers who require new features. +* REGULAR: Multiple per month upgrade cadence; Production users who need features not yet offered in the Stable channel. +* STABLE: Every few months upgrade cadence; Production users who need stability above all else, and for whom frequent upgrades are too risky.`, + }, + }, + }, + }, + + "tpu_ipv4_cidr_block": { + Computed: true, + Type: schema.TypeString, + Description: `The IP address range of the Cloud TPUs in this cluster, in CIDR notation (e.g. 1.2.3.4/29).`, + }, + + "default_snat_status": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Whether the cluster disables default in-node sNAT rules. 
In-node sNAT rules will be disabled when defaultSnatStatus is disabled.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "disabled": { + Type: schema.TypeBool, + Required: true, + Description: `When disabled is set to false, default IP masquerade rules will be applied to the nodes to prevent sNAT on cluster internal traffic.`, + }, + }, + }, + }, + + "datapath_provider": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Computed: true, + Description: `The desired datapath provider for this cluster. By default, uses the IPTables-based kube-proxy implementation.`, + ValidateFunc: validation.StringInSlice([]string{"DATAPATH_PROVIDER_UNSPECIFIED", "LEGACY_DATAPATH", "ADVANCED_DATAPATH"}, false), + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("DATAPATH_PROVIDER_UNSPECIFIED"), + }, + + "enable_intranode_visibility": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network.`, + ConflictsWith: []string{"enable_autopilot"}, + }, + "enable_l4_ilb_subsetting": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether L4ILB Subsetting is enabled for this cluster.`, + Default: false, + }, + "private_ipv6_google_access": { + Type: schema.TypeString, + Optional: true, + Description: `The desired state of IPv6 connectivity to Google Services. By default, no private IPv6 access to or from Google Services (all access will be via IPv4).`, + Computed: true, + }, + + "cost_management_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Description: `Cost management configuration for the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `Whether to enable GKE cost allocation. When you enable GKE cost allocation, the cluster name and namespace of your GKE workloads appear in the labels field of the billing export to BigQuery. Defaults to false.`, + }, + }, + }, + }, + + "resource_usage_export_config": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Description: `Configuration for the ResourceUsageExportConfig feature.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_network_egress_metering": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic.`, + }, + "enable_resource_consumption_metering": { + Type: schema.TypeBool, + Optional: true, + Default: true, + Description: `Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. 
Defaults to true.`, + }, + "bigquery_destination": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Description: `Parameters for using BigQuery as the destination of resource usage export.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dataset_id": { + Type: schema.TypeString, + Required: true, + Description: `The ID of a BigQuery Dataset.`, + }, + }, + }, + }, + }, + }, + }, + "dns_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + ForceNew: true, + DiffSuppressFunc: suppressDiffForAutopilot, + Description: `Configuration for Cloud DNS for Kubernetes Engine.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cluster_dns": { + Type: schema.TypeString, + Default: "PROVIDER_UNSPECIFIED", + ValidateFunc: validation.StringInSlice([]string{"PROVIDER_UNSPECIFIED", "PLATFORM_DEFAULT", "CLOUD_DNS"}, false), + Description: `Which in-cluster DNS provider should be used.`, + Optional: true, + }, + "cluster_dns_scope": { + Type: schema.TypeString, + Default: "DNS_SCOPE_UNSPECIFIED", + ValidateFunc: validation.StringInSlice([]string{"DNS_SCOPE_UNSPECIFIED", "CLUSTER_SCOPE", "VPC_SCOPE"}, false), + Description: `The scope of access to cluster DNS records.`, + Optional: true, + }, + "cluster_dns_domain": { + Type: schema.TypeString, + Description: `The suffix used for all cluster service records.`, + Optional: true, + }, + }, + }, + }, + "gateway_api_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Configuration for GKE Gateway API controller.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "channel": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice([]string{"CHANNEL_DISABLED", "CHANNEL_EXPERIMENTAL", "CHANNEL_STANDARD"}, false), + Description: `The Gateway API release channel to use for Gateway API.`, + }, + }, + }, + }, + }, + } +} diff --git a/google/services/container/resource_container_cluster_test.go b/google/services/container/resource_container_cluster_test.go index 66a6094806b..ecaf4c5a027 100644 --- a/google/services/container/resource_container_cluster_test.go +++ b/google/services/container/resource_container_cluster_test.go @@ -31,21 +31,24 @@ func TestAccContainerCluster_basic(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.primary", - ImportStateId: fmt.Sprintf("us-central1-a/%s", clusterName), - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportStateId: fmt.Sprintf("us-central1-a/%s", clusterName), + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - ResourceName: "google_container_cluster.primary", - ImportStateId: fmt.Sprintf("%s/us-central1-a/%s", envvar.GetTestProjectFromEnv(), clusterName), - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportStateId: fmt.Sprintf("%s/us-central1-a/%s", envvar.GetTestProjectFromEnv(), clusterName), + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -64,9 +67,10 @@ func TestAccContainerCluster_networkingModeRoutes(t *testing.T) { Config: 
testAccContainerCluster_networkingModeRoutes(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -92,7 +96,7 @@ func TestAccContainerCluster_misc(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_misc_update(clusterName), @@ -101,7 +105,7 @@ func TestAccContainerCluster_misc(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, }, }) @@ -126,7 +130,7 @@ func TestAccContainerCluster_withAddons(t *testing.T) { ImportState: true, ImportStateVerify: true, // TODO: clean up this list in `4.0.0`, remove both `workload_identity_config` fields (same for below) - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_updateAddons(pid, clusterName), @@ -135,7 +139,7 @@ func TestAccContainerCluster_withAddons(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, // Issue with cloudrun_config addon: https://github.com/hashicorp/terraform-provider-google/issues/11943 // { @@ -145,12 +149,44 @@ func TestAccContainerCluster_withAddons(t *testing.T) { // ResourceName: "google_container_cluster.primary", // ImportState: true, // ImportStateVerify: true, - // ImportStateVerifyIgnore: []string{"min_master_version"}, + // ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, // }, }, }) } +func TestAccContainerCluster_withDeletionProtection(t *testing.T) { + t.Parallel() + clusterName := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(t, 10)) + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckContainerClusterDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccContainerCluster_withDeletionProtection(clusterName, "false"), + }, + { + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testAccContainerCluster_withDeletionProtection(clusterName, "true"), + }, + { + Config: testAccContainerCluster_withDeletionProtection(clusterName, "true"), + Destroy: true, + ExpectError: regexp.MustCompile("Cannot destroy cluster because deletion_protection is set to true. 
Set it to false to proceed with instance deletion."), + }, + { + Config: testAccContainerCluster_withDeletionProtection(clusterName, "false"), + }, + }, + }) +} + func TestAccContainerCluster_withNotificationConfig(t *testing.T) { t.Parallel() @@ -167,33 +203,37 @@ func TestAccContainerCluster_withNotificationConfig(t *testing.T) { Config: testAccContainerCluster_withNotificationConfig(clusterName, topic), }, { - ResourceName: "google_container_cluster.notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withNotificationConfig(clusterName, newTopic), }, { - ResourceName: "google_container_cluster.notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_disableNotificationConfig(clusterName), }, { - ResourceName: "google_container_cluster.notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withNotificationConfig(clusterName, newTopic), }, { - ResourceName: "google_container_cluster.notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -215,25 +255,28 @@ func TestAccContainerCluster_withFilteredNotificationConfig(t *testing.T) { Config: testAccContainerCluster_withFilteredNotificationConfig(clusterName, topic), }, { - ResourceName: "google_container_cluster.filtered_notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.filtered_notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withFilteredNotificationConfigUpdate(clusterName, newTopic), }, { - ResourceName: "google_container_cluster.filtered_notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.filtered_notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_disableFilteredNotificationConfig(clusterName, newTopic), }, { - ResourceName: "google_container_cluster.filtered_notification_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.filtered_notification_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -254,25 +297,28 @@ func TestAccContainerCluster_withConfidentialNodes(t *testing.T) { Config: testAccContainerCluster_withConfidentialNodes(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: 
[]string{"deletion_protection"}, }, { Config: testAccContainerCluster_disableConfidentialNodes(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withConfidentialNodes(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -293,25 +339,28 @@ func TestAccContainerCluster_withILBSubsetting(t *testing.T) { Config: testAccContainerCluster_disableILBSubSetting(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withILBSubSetting(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_disableILBSubSetting(clusterName, npName), }, { - ResourceName: "google_container_cluster.confidential_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.confidential_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -334,9 +383,10 @@ func TestAccContainerCluster_withMasterAuthConfig_NoCert(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_master_auth_no_cert", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_master_auth_no_cert", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -359,9 +409,10 @@ func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withAuthenticatorGroupsConfigUpdate(clusterName, orgDomain), @@ -371,9 +422,10 @@ func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withAuthenticatorGroupsConfigUpdate2(clusterName), @@ -383,9 +435,10 @@ func TestAccContainerCluster_withAuthenticatorGroupsConfig(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + 
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -412,7 +465,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { ResourceName: "google_container_cluster.with_network_policy_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_removeNetworkPolicy(clusterName), @@ -425,7 +478,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { ResourceName: "google_container_cluster.with_network_policy_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_withNetworkPolicyDisabled(clusterName), @@ -438,7 +491,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { ResourceName: "google_container_cluster.with_network_policy_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_withNetworkPolicyConfigDisabled(clusterName), @@ -451,7 +504,7 @@ func TestAccContainerCluster_withNetworkPolicyEnabled(t *testing.T) { ResourceName: "google_container_cluster.with_network_policy_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_withNetworkPolicyConfigDisabled(clusterName), @@ -478,7 +531,7 @@ func TestAccContainerCluster_withReleaseChannelEnabled(t *testing.T) { ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "UNSPECIFIED"), @@ -488,7 +541,7 @@ func TestAccContainerCluster_withReleaseChannelEnabled(t *testing.T) { ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -510,7 +563,7 @@ func TestAccContainerCluster_withReleaseChannelEnabledDefaultVersion(t *testing. ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "REGULAR"), @@ -520,7 +573,7 @@ func TestAccContainerCluster_withReleaseChannelEnabledDefaultVersion(t *testing. ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withReleaseChannelEnabled(clusterName, "UNSPECIFIED"), @@ -530,7 +583,7 @@ func TestAccContainerCluster_withReleaseChannelEnabledDefaultVersion(t *testing. 
ImportStateIdPrefix: "us-central1-a/", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -581,17 +634,19 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksConfig(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_master_authorized_networks", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_master_authorized_networks", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withMasterAuthorizedNetworksConfig(clusterName, []string{"10.0.0.0/8", "8.8.8.8/32"}, ""), }, { - ResourceName: "google_container_cluster.with_master_authorized_networks", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_master_authorized_networks", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withMasterAuthorizedNetworksConfig(clusterName, []string{}, ""), @@ -601,17 +656,19 @@ func TestAccContainerCluster_withMasterAuthorizedNetworksConfig(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_master_authorized_networks", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_master_authorized_networks", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_removeMasterAuthorizedNetworksConfig(clusterName), }, { - ResourceName: "google_container_cluster.with_master_authorized_networks", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_master_authorized_networks", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -638,7 +695,7 @@ func TestAccContainerCluster_withGcpPublicCidrsAccessEnabledToggle(t *testing.T) ResourceName: "google_container_cluster.with_gcp_public_cidrs_access_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withGcpPublicCidrsAccessEnabled(clusterName, "false"), @@ -651,7 +708,7 @@ func TestAccContainerCluster_withGcpPublicCidrsAccessEnabledToggle(t *testing.T) ResourceName: "google_container_cluster.with_gcp_public_cidrs_access_enabled", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withGcpPublicCidrsAccessEnabled(clusterName, "true"), @@ -680,6 +737,7 @@ resource "google_container_cluster" "with_gcp_public_cidrs_access_enabled" { master_authorized_networks_config { gcp_public_cidrs_access_enabled = %s } + deletion_protection = false } `, clusterName, flag) } @@ -696,6 +754,7 @@ resource "google_container_cluster" "with_gcp_public_cidrs_access_enabled" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -714,9 +773,10 @@ func TestAccContainerCluster_regional(t *testing.T) { Config: 
testAccContainerCluster_regional(clusterName), }, { - ResourceName: "google_container_cluster.regional", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.regional", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -737,9 +797,10 @@ func TestAccContainerCluster_regionalWithNodePool(t *testing.T) { Config: testAccContainerCluster_regionalWithNodePool(clusterName, npName), }, { - ResourceName: "google_container_cluster.regional", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.regional", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -759,17 +820,19 @@ func TestAccContainerCluster_regionalWithNodeLocations(t *testing.T) { Config: testAccContainerCluster_regionalNodeLocations(clusterName), }, { - ResourceName: "google_container_cluster.with_node_locations", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_locations", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_regionalUpdateNodeLocations(clusterName), }, { - ResourceName: "google_container_cluster.with_node_locations", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_locations", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -790,17 +853,19 @@ func TestAccContainerCluster_withPrivateClusterConfigBasic(t *testing.T) { Config: testAccContainerCluster_withPrivateClusterConfig(containerNetName, clusterName, false), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withPrivateClusterConfig(containerNetName, clusterName, true), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -840,9 +905,10 @@ func TestAccContainerCluster_withPrivateClusterConfigMissingCidrBlock_withAutopi Config: testAccContainerCluster_withPrivateClusterConfigMissingCidrBlock(containerNetName, clusterName, "us-central1", true), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -862,17 +928,19 @@ func TestAccContainerCluster_withPrivateClusterConfigGlobalAccessEnabledOnly(t * Config: testAccContainerCluster_withPrivateClusterConfigGlobalAccessEnabledOnly(clusterName, true), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: 
testAccContainerCluster_withPrivateClusterConfigGlobalAccessEnabledOnly(clusterName, false), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -895,9 +963,10 @@ func TestAccContainerCluster_withIntraNodeVisibility(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_intranode_visibility", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_intranode_visibility", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_updateIntraNodeVisibility(clusterName), @@ -906,9 +975,10 @@ func TestAccContainerCluster_withIntraNodeVisibility(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_intranode_visibility", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_intranode_visibility", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -931,7 +1001,7 @@ func TestAccContainerCluster_withVersion(t *testing.T) { ResourceName: "google_container_cluster.with_version", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -954,7 +1024,7 @@ func TestAccContainerCluster_updateVersion(t *testing.T) { ResourceName: "google_container_cluster.with_version", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_updateVersion(clusterName), @@ -963,7 +1033,7 @@ func TestAccContainerCluster_updateVersion(t *testing.T) { ResourceName: "google_container_cluster.with_version", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -983,17 +1053,19 @@ func TestAccContainerCluster_withNodeConfig(t *testing.T) { Config: testAccContainerCluster_withNodeConfig(clusterName), }, { - ResourceName: "google_container_cluster.with_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"node_config.0.taint", "deletion_protection"}, }, { Config: testAccContainerCluster_withNodeConfigUpdate(clusterName), }, { - ResourceName: "google_container_cluster.with_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"node_config.0.taint", "deletion_protection"}, }, }, }) @@ -1011,9 +1083,10 @@ func TestAccContainerCluster_withLoggingVariantInNodeConfig(t *testing.T) { Config: testAccContainerCluster_withLoggingVariantInNodeConfig(clusterName, "MAX_THROUGHPUT"), }, { - ResourceName: "google_container_cluster.with_logging_variant_in_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_logging_variant_in_node_config", + 
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1032,9 +1105,10 @@ func TestAccContainerCluster_withLoggingVariantInNodePool(t *testing.T) { Config: testAccContainerCluster_withLoggingVariantInNodePool(clusterName, nodePoolName, "MAX_THROUGHPUT"), }, { - ResourceName: "google_container_cluster.with_logging_variant_in_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_logging_variant_in_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1052,25 +1126,28 @@ func TestAccContainerCluster_withLoggingVariantUpdates(t *testing.T) { Config: testAccContainerCluster_withLoggingVariantNodePoolDefault(clusterName, "DEFAULT"), }, { - ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withLoggingVariantNodePoolDefault(clusterName, "MAX_THROUGHPUT"), }, { - ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withLoggingVariantNodePoolDefault(clusterName, "DEFAULT"), }, { - ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_logging_variant_node_pool_default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1090,9 +1167,10 @@ func TestAccContainerCluster_withNodeConfigScopeAlias(t *testing.T) { Config: testAccContainerCluster_withNodeConfigScopeAlias(clusterName), }, { - ResourceName: "google_container_cluster.with_node_config_scope_alias", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config_scope_alias", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1112,9 +1190,10 @@ func TestAccContainerCluster_withNodeConfigShieldedInstanceConfig(t *testing.T) Config: testAccContainerCluster_withNodeConfigShieldedInstanceConfig(clusterName), }, { - ResourceName: "google_container_cluster.with_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1140,9 +1219,10 @@ func TestAccContainerCluster_withNodeConfigReservationAffinity(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1175,9 +1255,10 @@ func TestAccContainerCluster_withNodeConfigReservationAffinitySpecific(t *testin ), }, { - ResourceName: "google_container_cluster.with_node_config", - 
ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1204,7 +1285,7 @@ func TestAccContainerCluster_withWorkloadMetadataConfig(t *testing.T) { ResourceName: "google_container_cluster.with_workload_metadata_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -1232,7 +1313,7 @@ func TestAccContainerCluster_withBootDiskKmsKey(t *testing.T) { ResourceName: "google_container_cluster.with_boot_disk_kms_key", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -1253,14 +1334,16 @@ func TestAccContainerCluster_network(t *testing.T) { Config: testAccContainerCluster_networkRef(clusterName, network), }, { - ResourceName: "google_container_cluster.with_net_ref_by_url", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_net_ref_by_url", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - ResourceName: "google_container_cluster.with_net_ref_by_name", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_net_ref_by_name", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1280,9 +1363,10 @@ func TestAccContainerCluster_backend(t *testing.T) { Config: testAccContainerCluster_backendRef(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1303,9 +1387,10 @@ func TestAccContainerCluster_withNodePoolBasic(t *testing.T) { Config: testAccContainerCluster_withNodePoolBasic(clusterName, npName), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1329,7 +1414,7 @@ func TestAccContainerCluster_withNodePoolUpdateVersion(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withNodePoolUpdateVersion(clusterName, npName), @@ -1338,7 +1423,7 @@ func TestAccContainerCluster_withNodePoolUpdateVersion(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -1361,9 +1446,10 @@ func TestAccContainerCluster_withNodePoolResize(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: 
true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withNodePoolResize(clusterName, npName), @@ -1372,9 +1458,10 @@ func TestAccContainerCluster_withNodePoolResize(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1399,9 +1486,10 @@ func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withNodePoolUpdateAutoscaling(clusterName, npName), @@ -1411,9 +1499,10 @@ func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withNodePoolBasic(clusterName, npName), @@ -1423,9 +1512,10 @@ func TestAccContainerCluster_withNodePoolAutoscaling(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.with_node_pool", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1456,7 +1546,7 @@ func TestAccContainerCluster_withNodePoolCIA(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerRegionalClusterUpdate_withNodePoolCIA(clusterName, npName), @@ -1472,7 +1562,7 @@ func TestAccContainerCluster_withNodePoolCIA(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerRegionalCluster_withNodePoolBasic(clusterName, npName), @@ -1487,7 +1577,7 @@ func TestAccContainerCluster_withNodePoolCIA(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -1513,7 +1603,7 @@ func TestAccContainerCluster_withNodePoolNamePrefix(t *testing.T) { ResourceName: "google_container_cluster.with_node_pool_name_prefix", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"node_pool.0.name_prefix"}, + ImportStateVerifyIgnore: []string{"node_pool.0.name_prefix", "deletion_protection"}, }, }, }) @@ -1534,9 +1624,10 @@ func TestAccContainerCluster_withNodePoolMultiple(t *testing.T) { Config: testAccContainerCluster_withNodePoolMultiple(clusterName, npNamePrefix), }, { - 
ResourceName: "google_container_cluster.with_node_pool_multiple", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool_multiple", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1576,9 +1667,10 @@ func TestAccContainerCluster_withNodePoolNodeConfig(t *testing.T) { Config: testAccContainerCluster_withNodePoolNodeConfig(cluster, np), }, { - ResourceName: "google_container_cluster.with_node_pool_node_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_node_pool_node_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1599,9 +1691,10 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { Config: testAccContainerCluster_withMaintenanceWindow(clusterName, "03:00"), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withMaintenanceWindow(clusterName, ""), @@ -1616,7 +1709,7 @@ func TestAccContainerCluster_withMaintenanceWindow(t *testing.T) { ImportStateVerify: true, // maintenance_policy.# = 0 is equivalent to no maintenance policy at all, // but will still cause an import diff - ImportStateVerifyIgnore: []string{"maintenance_policy.#"}, + ImportStateVerifyIgnore: []string{"maintenance_policy.#", "deletion_protection"}, }, }, }) @@ -1640,10 +1733,11 @@ func TestAccContainerCluster_withRecurringMaintenanceWindow(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withRecurringMaintenanceWindow(cluster, "", ""), @@ -1661,7 +1755,7 @@ func TestAccContainerCluster_withRecurringMaintenanceWindow(t *testing.T) { ImportStateVerify: true, // maintenance_policy.# = 0 is equivalent to no maintenance policy at all, // but will still cause an import diff - ImportStateVerifyIgnore: []string{"maintenance_policy.#"}, + ImportStateVerifyIgnore: []string{"maintenance_policy.#", "deletion_protection"}, }, }, }) @@ -1681,19 +1775,21 @@ func TestAccContainerCluster_withMaintenanceExclusionWindow(t *testing.T) { Config: testAccContainerCluster_withExclusion_RecurringMaintenanceWindow(cluster, "2019-01-01T00:00:00Z", "2019-01-02T00:00:00Z", "2019-05-01T00:00:00Z", "2019-05-02T00:00:00Z"), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withExclusion_DailyMaintenanceWindow(cluster, "2020-01-01T00:00:00Z", "2020-01-02T00:00:00Z"), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1720,10 +1816,11 @@ 
func TestAccContainerCluster_withMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1750,10 +1847,11 @@ func TestAccContainerCluster_deleteMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_NoExclusionOptions_RecurringMaintenanceWindow( @@ -1766,10 +1864,11 @@ func TestAccContainerCluster_deleteMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1799,10 +1898,11 @@ func TestAccContainerCluster_updateMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withExclusionOptions_RecurringMaintenanceWindow( @@ -1815,10 +1915,11 @@ func TestAccContainerCluster_updateMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_updateExclusionOptions_RecurringMaintenanceWindow( @@ -1831,10 +1932,11 @@ func TestAccContainerCluster_updateMaintenanceExclusionOptions(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1854,28 +1956,31 @@ func TestAccContainerCluster_deleteExclusionWindow(t *testing.T) { Config: testAccContainerCluster_withExclusion_DailyMaintenanceWindow(cluster, "2020-01-01T00:00:00Z", "2020-01-02T00:00:00Z"), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withExclusion_RecurringMaintenanceWindow(cluster, "2019-01-01T00:00:00Z", "2019-01-02T00:00:00Z", "2019-05-01T00:00:00Z", "2019-05-02T00:00:00Z"), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + 
ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withExclusion_NoMaintenanceWindow(cluster, "2020-01-01T00:00:00Z", "2020-01-02T00:00:00Z"), }, { - ResourceName: resourceName, - ImportStateIdPrefix: "us-central1-a/", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportStateIdPrefix: "us-central1-a/", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1895,9 +2000,10 @@ func TestAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(t *t Config: testAccContainerCluster_withIPAllocationPolicy_existingSecondaryRanges(containerNetName, clusterName), }, { - ResourceName: "google_container_cluster.with_ip_allocation_policy", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1917,9 +2023,10 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificIPRanges(t *testing. Config: testAccContainerCluster_withIPAllocationPolicy_specificIPRanges(containerNetName, clusterName), }, { - ResourceName: "google_container_cluster.with_ip_allocation_policy", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1939,9 +2046,10 @@ func TestAccContainerCluster_withIPAllocationPolicy_specificSizes(t *testing.T) Config: testAccContainerCluster_withIPAllocationPolicy_specificSizes(containerNetName, clusterName), }, { - ResourceName: "google_container_cluster.with_ip_allocation_policy", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_ip_allocation_policy", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1969,7 +2077,7 @@ func TestAccContainerCluster_stackType_withDualStack(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -1997,7 +2105,7 @@ func TestAccContainerCluster_stackType_withSingleStack(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2025,7 +2133,7 @@ func TestAccContainerCluster_with_PodCIDROverprovisionDisabled(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2052,7 +2160,7 @@ func TestAccContainerCluster_nodeAutoprovisioning(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioning(clusterName, false, false), @@ 
-2065,7 +2173,7 @@ func TestAccContainerCluster_nodeAutoprovisioning(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2093,7 +2201,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaults(clusterName, true), @@ -2107,7 +2215,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsMinCpuPlatform(clusterName, !includeMinCpuPlatform), @@ -2116,7 +2224,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaults(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2136,9 +2244,10 @@ func TestAccContainerCluster_autoprovisioningDefaultsUpgradeSettings(t *testing. Config: testAccContainerCluster_autoprovisioningDefaultsUpgradeSettings(clusterName, 2, 1, "SURGE"), }, { - ResourceName: "google_container_cluster.with_autoprovisioning_upgrade_settings", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_autoprovisioning_upgrade_settings", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsUpgradeSettings(clusterName, 2, 1, "BLUE_GREEN"), @@ -2148,9 +2257,10 @@ func TestAccContainerCluster_autoprovisioningDefaultsUpgradeSettings(t *testing. 
Config: testAccContainerCluster_autoprovisioningDefaultsUpgradeSettingsWithBlueGreenStrategy(clusterName, "3.500s", "BLUE_GREEN"), }, { - ResourceName: "google_container_cluster.with_autoprovisioning_upgrade_settings", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_autoprovisioning_upgrade_settings", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2177,7 +2287,7 @@ func TestAccContainerCluster_nodeAutoprovisioningNetworkTags(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2197,17 +2307,19 @@ func TestAccContainerCluster_withShieldedNodes(t *testing.T) { Config: testAccContainerCluster_withShieldedNodes(clusterName, true), }, { - ResourceName: "google_container_cluster.with_shielded_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_shielded_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withShieldedNodes(clusterName, false), }, { - ResourceName: "google_container_cluster.with_shielded_nodes", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_shielded_nodes", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2232,7 +2344,7 @@ func TestAccContainerCluster_withAutopilot(t *testing.T) { ResourceName: "google_container_cluster.with_autopilot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2267,7 +2379,7 @@ func TestAccContainerClusterCustomServiceAccount_withAutopilot(t *testing.T) { ResourceName: "google_container_cluster.with_autopilot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2312,7 +2424,7 @@ func TestAccContainerCluster_withAutopilotNetworkTags(t *testing.T) { ResourceName: "google_container_cluster.with_autopilot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2336,7 +2448,7 @@ func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { ResourceName: "google_container_cluster.with_workload_identity_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, false), @@ -2345,7 +2457,7 @@ func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { ResourceName: "google_container_cluster.with_workload_identity_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: 
testAccContainerCluster_updateWorkloadIdentityConfig(pid, clusterName, true), @@ -2354,7 +2466,7 @@ func TestAccContainerCluster_withWorkloadIdentityConfig(t *testing.T) { ResourceName: "google_container_cluster.with_workload_identity_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, }, }) @@ -2373,41 +2485,46 @@ func TestAccContainerCluster_withLoggingConfig(t *testing.T) { Config: testAccContainerCluster_basic(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withLoggingConfigEnabled(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withLoggingConfigDisabled(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withLoggingConfigUpdated(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_basic(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2429,7 +2546,7 @@ func TestAccContainerCluster_withMonitoringConfigAdvancedDatapathObservabilityCo ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigAdvancedDatapathObservabilityConfigDisabled(clusterName), @@ -2438,7 +2555,7 @@ func TestAccContainerCluster_withMonitoringConfigAdvancedDatapathObservabilityCo ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2460,7 +2577,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigEnabled(clusterName), @@ -2469,7 +2586,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", 
ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigDisabled(clusterName), @@ -2478,7 +2595,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigUpdated(clusterName), @@ -2487,7 +2604,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigPrometheusUpdated(clusterName), @@ -2496,7 +2613,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, // Back to basic settings to test setting Prometheus on its own { @@ -2506,7 +2623,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigPrometheusOnly(clusterName), @@ -2515,7 +2632,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withMonitoringConfigPrometheusOnly2(clusterName), @@ -2524,7 +2641,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_basic(clusterName), @@ -2533,7 +2650,7 @@ func TestAccContainerCluster_withMonitoringConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2552,9 +2669,10 @@ func TestAccContainerCluster_withSoleTenantGroup(t *testing.T) { Config: testAccContainerCluster_withSoleTenantGroup(resourceName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2578,7 +2696,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsDiskSizeGb(t *testing.T ResourceName: 
"google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsDiskSizeGb(clusterName, !includeDiskSizeGb), @@ -2587,7 +2705,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsDiskSizeGb(t *testing.T ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2611,7 +2729,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsDiskType(t *testing.T) ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsDiskType(clusterName, !includeDiskType), @@ -2620,7 +2738,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsDiskType(t *testing.T) ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2644,7 +2762,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsImageType(t *testing.T) ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsImageType(clusterName, !includeImageType), @@ -2653,7 +2771,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsImageType(t *testing.T) ResourceName: "google_container_cluster.with_autoprovisioning", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2683,6 +2801,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsBootDiskKmsKey(t *testi ImportStateVerify: true, ImportStateVerifyIgnore: []string{ "min_master_version", + "deletion_protection", "node_pool", // cluster_autoscaling (node auto-provisioning) creates new node pools automatically }, }, @@ -2707,7 +2826,7 @@ func TestAccContainerCluster_nodeAutoprovisioningDefaultsShieldedInstance(t *tes ResourceName: "google_container_cluster.nap_shielded_instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -2730,7 +2849,7 @@ func TestAccContainerCluster_autoprovisioningDefaultsManagement(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning_management", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autoprovisioningDefaultsManagement(clusterName, true, true), @@ -2739,12 +2858,15 @@ func 
TestAccContainerCluster_autoprovisioningDefaultsManagement(t *testing.T) { ResourceName: "google_container_cluster.with_autoprovisioning_management", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) } +// This resource originally cleaned up the dangling cluster directly, but now +// taints it, having Terraform clean it up during the next apply. This test +// name is now inexact, but is being preserved to maintain the test history. func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { t.Parallel() @@ -2765,15 +2887,16 @@ func TestAccContainerCluster_errorCleanDanglingCluster(t *testing.T) { Config: initConfig, }, { - ResourceName: "google_container_cluster.cidr_error_preempt", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.cidr_error_preempt", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: overlapConfig, ExpectError: regexp.MustCompile("Error waiting for creating GKE cluster"), }, - // If dangling cluster wasn't deleted, this plan will return an error + // If tainted cluster won't be deleted, this step will return an error { Config: overlapConfig, PlanOnly: true, @@ -2814,17 +2937,19 @@ func TestAccContainerCluster_withExternalIpsConfig(t *testing.T) { Config: testAccContainerCluster_withExternalIpsConfig(pid, clusterName, true), }, { - ResourceName: "google_container_cluster.with_external_ips_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_external_ips_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withExternalIpsConfig(pid, clusterName, false), }, { - ResourceName: "google_container_cluster.with_external_ips_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_external_ips_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2848,7 +2973,7 @@ func TestAccContainerCluster_withMeshCertificatesConfig(t *testing.T) { ResourceName: "google_container_cluster.with_mesh_certificates_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_updateMeshCertificatesConfig(pid, clusterName, true), @@ -2857,7 +2982,7 @@ func TestAccContainerCluster_withMeshCertificatesConfig(t *testing.T) { ResourceName: "google_container_cluster.with_mesh_certificates_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, { Config: testAccContainerCluster_updateMeshCertificatesConfig(pid, clusterName, false), @@ -2866,7 +2991,7 @@ func TestAccContainerCluster_withMeshCertificatesConfig(t *testing.T) { ResourceName: "google_container_cluster.with_mesh_certificates_config", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"remove_default_node_pool"}, + ImportStateVerifyIgnore: []string{"remove_default_node_pool", "deletion_protection"}, }, }, }) @@ -2887,17 +3012,19 @@ func 
TestAccContainerCluster_withCostManagementConfig(t *testing.T) { Config: testAccContainerCluster_updateCostManagementConfig(pid, clusterName, true), }, { - ResourceName: "google_container_cluster.with_cost_management_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_cost_management_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_updateCostManagementConfig(pid, clusterName, false), }, { - ResourceName: "google_container_cluster.with_cost_management_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_cost_management_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2925,17 +3052,19 @@ func TestAccContainerCluster_withDatabaseEncryption(t *testing.T) { Check: resource.TestCheckResourceAttrSet("data.google_kms_key_ring_iam_policy.test_key_ring_iam_policy", "policy_data"), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_basic(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2955,9 +3084,10 @@ func TestAccContainerCluster_withAdvancedDatapath(t *testing.T) { Config: testAccContainerCluster_withDatapathProvider(clusterName, "ADVANCED_DATAPATH"), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -2979,25 +3109,28 @@ func TestAccContainerCluster_withResourceUsageExportConfig(t *testing.T) { Config: testAccContainerCluster_withResourceUsageExportConfig(clusterName, datesetId, "true"), }, { - ResourceName: "google_container_cluster.with_resource_usage_export_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_resource_usage_export_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withResourceUsageExportConfig(clusterName, datesetId, "false"), }, { - ResourceName: "google_container_cluster.with_resource_usage_export_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_resource_usage_export_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_withResourceUsageExportConfigNoConfig(clusterName, datesetId), }, { - ResourceName: "google_container_cluster.with_resource_usage_export_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_resource_usage_export_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3021,9 +3154,10 @@ func 
TestAccContainerCluster_withMasterAuthorizedNetworksDisabled(t *testing.T) ), }, { - ResourceName: "google_container_cluster.with_private_cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_private_cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3044,9 +3178,10 @@ func TestAccContainerCluster_withEnableKubernetesAlpha(t *testing.T) { Config: testAccContainerCluster_withEnableKubernetesAlpha(clusterName, npName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3069,7 +3204,7 @@ func TestAccContainerCluster_withEnableKubernetesBetaAPIs(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -3092,7 +3227,7 @@ func TestAccContainerCluster_withEnableKubernetesBetaAPIsOnExistingCluster(t *te ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withEnableKubernetesBetaAPIs(clusterName), @@ -3101,7 +3236,7 @@ func TestAccContainerCluster_withEnableKubernetesBetaAPIsOnExistingCluster(t *te ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -3138,9 +3273,10 @@ func TestAccContainerCluster_withDNSConfig(t *testing.T) { Config: testAccContainerCluster_withDNSConfig(clusterName, "CLOUD_DNS", domainName, "VPC_SCOPE"), }, { - ResourceName: "google_container_cluster.with_dns_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_dns_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3165,7 +3301,7 @@ func TestAccContainerCluster_withGatewayApiConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withGatewayApiConfig(clusterName, "CHANNEL_STANDARD"), @@ -3174,7 +3310,7 @@ func TestAccContainerCluster_withGatewayApiConfig(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -3194,25 +3330,28 @@ func TestAccContainerCluster_withSecurityPostureConfig(t *testing.T) { Config: testAccContainerCluster_SetSecurityPostureToStandard(clusterName), }, { - ResourceName: "google_container_cluster.with_security_posture_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_security_posture_config", + ImportState: true, + 
ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_SetWorkloadVulnerabilityToStandard(clusterName), }, { - ResourceName: "google_container_cluster.with_security_posture_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_security_posture_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_DisableALL(clusterName), }, { - ResourceName: "google_container_cluster.with_security_posture_config", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.with_security_posture_config", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3227,6 +3366,7 @@ resource "google_container_cluster" "with_security_posture_config" { security_posture_config { mode = "BASIC" } + deletion_protection = false } `, resource_name) } @@ -3240,6 +3380,7 @@ resource "google_container_cluster" "with_security_posture_config" { security_posture_config { vulnerability_mode = "VULNERABILITY_BASIC" } + deletion_protection = false } `, resource_name) } @@ -3254,6 +3395,7 @@ resource "google_container_cluster" "with_security_posture_config" { mode = "DISABLED" vulnerability_mode = "VULNERABILITY_DISABLED" } + deletion_protection = false } `, resource_name) } @@ -3271,9 +3413,10 @@ func TestAccContainerCluster_autopilot_minimal(t *testing.T) { Config: testAccContainerCluster_autopilot_minimal(clusterName), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3295,7 +3438,7 @@ func TestAccContainerCluster_autopilot_net_admin(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autopilot_net_admin(clusterName, false), @@ -3304,7 +3447,7 @@ func TestAccContainerCluster_autopilot_net_admin(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_autopilot_net_admin(clusterName, true), @@ -3313,7 +3456,7 @@ func TestAccContainerCluster_autopilot_net_admin(t *testing.T) { ResourceName: "google_container_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -3332,9 +3475,10 @@ func TestAccContainerCluster_additional_pod_ranges_config_on_create(t *testing.T Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 1), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3353,41 +3497,46 @@ func 
TestAccContainerCluster_additional_pod_ranges_config_on_update(t *testing.T Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 0), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 2), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 0), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 1), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { Config: testAccContainerCluster_additional_pod_ranges_config(clusterName, 0), }, { - ResourceName: "google_container_cluster.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -3444,6 +3593,7 @@ resource "google_container_cluster" "primary" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } `, name) } @@ -3455,6 +3605,7 @@ resource "google_container_cluster" "primary" { location = "us-central1-a" initial_node_count = 1 networking_mode = "ROUTES" + deletion_protection = false } `, name) } @@ -3486,6 +3637,7 @@ resource "google_container_cluster" "primary" { binary_authorization { evaluation_mode = "PROJECT_SINGLETON_POLICY_ENFORCE" } +deletion_protection = false } `, name) } @@ -3518,6 +3670,7 @@ resource "google_container_cluster" "primary" { binary_authorization { evaluation_mode = "PROJECT_SINGLETON_POLICY_ENFORCE" } + deletion_protection = false } `, name) } @@ -3571,6 +3724,7 @@ resource "google_container_cluster" "primary" { enabled = false } } + deletion_protection = false } `, projectID, clusterName) } @@ -3625,6 +3779,7 @@ resource "google_container_cluster" "primary" { gcs_fuse_csi_driver_config { enabled = true } + deletion_protection = false } } `, projectID, clusterName) @@ -3663,6 +3818,7 @@ resource "google_container_cluster" "primary" { // load_balancer_type = "LOAD_BALANCER_TYPE_INTERNAL" // } // } +// deletion_protection = false // } // `, projectID, clusterName) // } @@ -3684,6 +3840,7 @@ resource "google_container_cluster" "notification_config" { topic = google_pubsub_topic.%s.id } } + deletion_protection = false } `, topic, topic, clusterName, topic) } @@ -3699,6 +3856,7 @@ resource "google_container_cluster" "notification_config" { enabled = false } } + deletion_protection = false } `, clusterName) } @@ -3724,6 +3882,7 @@ resource "google_container_cluster" "filtered_notification_config" { } } } + 
deletion_protection = false } `, topic, topic, clusterName, topic) } @@ -3749,6 +3908,7 @@ resource "google_container_cluster" "filtered_notification_config" { } } } + deletion_protection = false } `, topic, topic, clusterName, topic) } @@ -3771,6 +3931,7 @@ resource "google_container_cluster" "filtered_notification_config" { topic = google_pubsub_topic.%s.id } } + deletion_protection = false } `, topic, topic, clusterName, topic) } @@ -3795,6 +3956,7 @@ resource "google_container_cluster" "confidential_nodes" { confidential_nodes { enabled = true } + deletion_protection = false } `, clusterName, npName) } @@ -3819,6 +3981,7 @@ resource "google_container_cluster" "confidential_nodes" { confidential_nodes { enabled = false } + deletion_protection = false } `, clusterName, npName) } @@ -3841,6 +4004,7 @@ resource "google_container_cluster" "confidential_nodes" { } enable_l4_ilb_subsetting = true + deletion_protection = false } `, clusterName, npName) } @@ -3863,6 +4027,7 @@ resource "google_container_cluster" "confidential_nodes" { } enable_l4_ilb_subsetting = false + deletion_protection = false } `, clusterName, npName) } @@ -3885,10 +4050,23 @@ resource "google_container_cluster" "with_network_policy_enabled" { disabled = false } } + deletion_protection = false } `, clusterName) } +func testAccContainerCluster_withDeletionProtection(clusterName string, deletionProtection string) string { + return fmt.Sprintf(` +resource "google_container_cluster" "primary" { + name = "%s" + location = "us-central1-a" + initial_node_count = 1 + + deletion_protection = %s +} +`, clusterName, deletionProtection) +} + func testAccContainerCluster_withReleaseChannelEnabled(clusterName string, channel string) string { return fmt.Sprintf(` resource "google_container_cluster" "with_release_channel" { @@ -3899,6 +4077,7 @@ resource "google_container_cluster" "with_release_channel" { release_channel { channel = "%s" } + deletion_protection = false } `, clusterName, channel) } @@ -3915,6 +4094,7 @@ resource "google_container_cluster" "with_release_channel" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.release_channel_default_version["%s"] + deletion_protection = false } `, clusterName, channel) } @@ -3926,6 +4106,7 @@ resource "google_container_cluster" "with_network_policy_enabled" { location = "us-central1-a" initial_node_count = 1 remove_default_node_pool = true + deletion_protection = false } `, clusterName) } @@ -3941,6 +4122,7 @@ resource "google_container_cluster" "with_network_policy_enabled" { network_policy { enabled = false } + deletion_protection = false } `, clusterName) } @@ -3962,6 +4144,7 @@ resource "google_container_cluster" "with_network_policy_enabled" { disabled = true } } + deletion_protection = false } `, clusterName) } @@ -3976,6 +4159,7 @@ resource "google_container_cluster" "primary" { authenticator_groups_config { security_group = "gke-security-groups@%s" } + deletion_protection = false } `, name, orgDomain) } @@ -3990,6 +4174,7 @@ resource "google_container_cluster" "primary" { authenticator_groups_config { security_group = "" } + deletion_protection = false } `, name) } @@ -4018,6 +4203,7 @@ resource "google_container_cluster" "with_master_authorized_networks" { master_authorized_networks_config { %s } + deletion_protection = false } `, clusterName, cidrBlocks) } @@ -4028,6 +4214,7 @@ resource "google_container_cluster" "with_master_authorized_networks" { name = "%s" location = "us-central1-a" initial_node_count = 
1 + deletion_protection = false } `, clusterName) } @@ -4038,6 +4225,7 @@ resource "google_container_cluster" "regional" { name = "%s" location = "us-central1" initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -4068,7 +4256,7 @@ func TestAccContainerCluster_withPrivateEndpointSubnetwork(t *testing.T) { ResourceName: "google_container_cluster.with_private_endpoint_subnetwork", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -4113,6 +4301,7 @@ resource "google_container_cluster" "with_private_endpoint_subnetwork" { private_cluster_config { private_endpoint_subnetwork = google_compute_subnetwork.container_subnetwork2.name } + deletion_protection = false } `, containerNetName, s1Name, s1Cidr, s2Name, s2Cidr, clusterName) } @@ -4137,7 +4326,7 @@ func TestAccContainerCluster_withPrivateClusterConfigPrivateEndpointSubnetwork(t ResourceName: "google_container_cluster.with_private_endpoint_subnetwork", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -4189,6 +4378,7 @@ resource "google_container_cluster" "with_private_endpoint_subnetwork" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = false } `, containerNetName, clusterName) } @@ -4210,7 +4400,7 @@ func TestAccContainerCluster_withEnablePrivateEndpointToggle(t *testing.T) { ResourceName: "google_container_cluster.with_enable_private_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, { Config: testAccContainerCluster_withEnablePrivateEndpoint(clusterName, "false"), @@ -4219,7 +4409,7 @@ func TestAccContainerCluster_withEnablePrivateEndpointToggle(t *testing.T) { ResourceName: "google_container_cluster.with_enable_private_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"min_master_version"}, + ImportStateVerifyIgnore: []string{"min_master_version", "deletion_protection"}, }, }, }) @@ -4273,6 +4463,7 @@ resource "google_container_cluster" "with_enable_private_endpoint" { private_cluster_config { enable_private_endpoint = %s } + deletion_protection = false } `, clusterName, flag) } @@ -4286,6 +4477,7 @@ resource "google_container_cluster" "regional" { node_pool { name = "%s" } + deletion_protection = false } `, cluster, nodePool) } @@ -4301,6 +4493,7 @@ resource "google_container_cluster" "with_node_locations" { "us-central1-f", "us-central1-c", ] + deletion_protection = false } `, clusterName) } @@ -4316,6 +4509,7 @@ resource "google_container_cluster" "with_node_locations" { "us-central1-f", "us-central1-b", ] + deletion_protection = false } `, clusterName) } @@ -4327,6 +4521,7 @@ resource "google_container_cluster" "with_intranode_visibility" { location = "us-central1-a" initial_node_count = 1 enable_intranode_visibility = true + deletion_protection = false } `, clusterName) } @@ -4339,6 +4534,7 @@ resource "google_container_cluster" "with_intranode_visibility" { initial_node_count = 1 enable_intranode_visibility = false 
private_ipv6_google_access = "PRIVATE_IPV6_GOOGLE_ACCESS_BIDIRECTIONAL" + deletion_protection = false } `, clusterName) } @@ -4354,6 +4550,7 @@ resource "google_container_cluster" "with_version" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.latest_master_version initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -4369,6 +4566,7 @@ resource "google_container_cluster" "with_version" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.valid_master_versions[3] initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -4384,6 +4582,7 @@ resource "google_container_cluster" "with_master_auth_no_cert" { issue_client_certificate = false } } + deletion_protection = false } `, clusterName) } @@ -4400,6 +4599,7 @@ resource "google_container_cluster" "with_version" { min_master_version = data.google_container_engine_versions.central1a.latest_master_version node_version = data.google_container_engine_versions.central1a.valid_node_versions[1] initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -4449,6 +4649,7 @@ resource "google_container_cluster" "with_node_config" { // Updatable fields image_type = "COS_CONTAINERD" } + deletion_protection = false } `, clusterName) } @@ -4463,6 +4664,7 @@ resource "google_container_cluster" "with_logging_variant_in_node_config" { node_config { logging_variant = "%s" } + deletion_protection = false } `, clusterName, loggingVariant) } @@ -4480,6 +4682,7 @@ resource "google_container_cluster" "with_logging_variant_in_node_pool" { logging_variant = "%s" } } + deletion_protection = false } `, clusterName, nodePoolName, loggingVariant) } @@ -4496,6 +4699,7 @@ resource "google_container_cluster" "with_logging_variant_node_pool_default" { logging_variant = "%s" } } + deletion_protection = false } `, clusterName, loggingVariant) } @@ -4545,6 +4749,7 @@ resource "google_container_cluster" "with_node_config" { // Updatable fields image_type = "UBUNTU_CONTAINERD" } + deletion_protection = false } `, clusterName) } @@ -4561,6 +4766,7 @@ resource "google_container_cluster" "with_node_config_scope_alias" { disk_size_gb = 15 oauth_scopes = ["compute-rw", "storage-ro", "logging-write", "monitoring"] } + deletion_protection = false } `, clusterName) } @@ -4601,6 +4807,7 @@ resource "google_container_cluster" "with_node_config" { enable_integrity_monitoring = true } } + deletion_protection = false } `, clusterName) } @@ -4640,6 +4847,7 @@ resource "google_container_cluster" "with_node_config" { consume_reservation_type = "ANY_RESERVATION" } } + deletion_protection = false } `, clusterName) } @@ -4710,6 +4918,7 @@ resource "google_container_cluster" "with_node_config" { ] } } + deletion_protection = false depends_on = [google_project_service.container] } `, reservation, clusterName) @@ -4737,6 +4946,7 @@ resource "google_container_cluster" "with_workload_metadata_config" { mode = "GCE_METADATA" } } + deletion_protection = false } `, clusterName) } @@ -4759,6 +4969,7 @@ resource "google_container_cluster" "with_boot_disk_kms_key" { boot_disk_kms_key = "%s" } + deletion_protection = false } `, clusterName, kmsKeyName) } @@ -4776,6 +4987,7 @@ resource "google_container_cluster" "with_net_ref_by_url" { initial_node_count = 1 network = google_compute_network.container_network.self_link + deletion_protection = false } resource "google_container_cluster" "with_net_ref_by_name" { @@ -4784,6 +4996,7 @@ resource 
"google_container_cluster" "with_net_ref_by_name" { initial_node_count = 1 network = google_compute_network.container_network.name + deletion_protection = false } `, network, cluster, cluster) } @@ -4815,6 +5028,7 @@ resource "google_container_cluster" "with_autoprovisioning_management" { } } } + deletion_protection = false } `, clusterName, autoUpgrade, autoRepair) } @@ -4858,6 +5072,7 @@ resource "google_container_cluster" "primary" { "https://www.googleapis.com/auth/monitoring", ] } + deletion_protection = false } `, cluster, cluster, cluster) } @@ -4867,6 +5082,7 @@ func testAccContainerCluster_withNodePoolBasic(cluster, nodePool string) string resource "google_container_cluster" "with_node_pool" { name = "%s" location = "us-central1-a" + deletion_protection = false node_pool { name = "%s" @@ -4893,6 +5109,7 @@ resource "google_container_cluster" "with_node_pool" { initial_node_count = 2 version = data.google_container_engine_versions.central1a.valid_node_versions[2] } + deletion_protection = false } `, cluster, nodePool) } @@ -4914,6 +5131,7 @@ resource "google_container_cluster" "with_node_pool" { initial_node_count = 2 version = data.google_container_engine_versions.central1a.valid_node_versions[1] } + deletion_protection = false } `, cluster, nodePool) } @@ -4933,6 +5151,7 @@ resource "google_container_cluster" "with_node_pool" { name = "%s" node_count = 2 } + deletion_protection = false } `, cluster, nodePool) } @@ -4952,6 +5171,7 @@ resource "google_container_cluster" "with_node_pool" { name = "%s" node_count = 3 } + deletion_protection = false } `, cluster, nodePool) } @@ -4967,6 +5187,7 @@ resource "google_container_cluster" "with_autoprovisioning" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.latest_master_version initial_node_count = 1 + deletion_protection = false `, cluster) if autoprovisioning { config += ` @@ -5011,6 +5232,7 @@ resource "google_container_cluster" "with_autoprovisioning" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.latest_master_version initial_node_count = 1 + deletion_protection = false logging_service = "none" monitoring_service = "none" @@ -5079,6 +5301,7 @@ resource "google_container_cluster" "with_autoprovisioning" { %s } } + deletion_protection = false }`, cluster, minCpuPlatformCfg) } @@ -5124,6 +5347,7 @@ func testAccContainerCluster_autoprovisioningDefaultsUpgradeSettings(clusterName } } } + deletion_protection = false } `, clusterName, maxSurge, maxUnavailable, strategy, blueGreenSettings) } @@ -5161,6 +5385,7 @@ func testAccContainerCluster_autoprovisioningDefaultsUpgradeSettingsWithBlueGree } } } + deletion_protection = false } `, clusterName, strategy, duration, duration) } @@ -5194,6 +5419,7 @@ resource "google_container_cluster" "with_autoprovisioning" { %s } } + deletion_protection = false }`, cluster, DiskSizeGbCfg) } @@ -5226,6 +5452,7 @@ resource "google_container_cluster" "with_autoprovisioning" { %s } } + deletion_protection = false }`, cluster, DiskTypeCfg) } @@ -5258,6 +5485,7 @@ resource "google_container_cluster" "with_autoprovisioning" { %s } } + deletion_protection = false }`, cluster, imageTypeCfg) } @@ -5284,6 +5512,7 @@ resource "google_container_cluster" "nap_boot_disk_kms_key" { boot_disk_kms_key = "%s" } } + deletion_protection = false } `, clusterName, kmsKeyName) } @@ -5315,6 +5544,7 @@ resource "google_container_cluster" "nap_shielded_instance" { } } } + deletion_protection = false }`, cluster) } @@ -5332,6 +5562,7 
@@ resource "google_container_cluster" "with_node_pool" { max_node_count = 3 } } + deletion_protection = false } `, cluster, np) } @@ -5350,6 +5581,7 @@ resource "google_container_cluster" "with_node_pool" { max_node_count = 5 } } + deletion_protection = false } `, cluster, np) } @@ -5374,6 +5606,7 @@ resource "google_container_cluster" "with_node_pool" { location_policy = "BALANCED" } } + deletion_protection = false } `, cluster, np) } @@ -5398,6 +5631,7 @@ resource "google_container_cluster" "with_node_pool" { location_policy = "ANY" } } + deletion_protection = false } `, cluster, np) } @@ -5417,6 +5651,7 @@ resource "google_container_cluster" "with_node_pool" { name = "%s" initial_node_count = 2 } + deletion_protection = false } `, cluster, nodePool) } @@ -5431,6 +5666,7 @@ resource "google_container_cluster" "with_node_pool_name_prefix" { name_prefix = "%s" node_count = 2 } + deletion_protection = false } `, cluster, npPrefix) } @@ -5450,6 +5686,7 @@ resource "google_container_cluster" "with_node_pool_multiple" { name = "%s-two" node_count = 3 } + deletion_protection = false } `, cluster, npPrefix, npPrefix) } @@ -5466,6 +5703,7 @@ resource "google_container_cluster" "with_node_pool_multiple" { name_prefix = "%s" node_count = 1 } + deletion_protection = false } `, cluster, npPrefix, npPrefix) } @@ -5500,6 +5738,7 @@ resource "google_container_cluster" "with_node_pool_node_config" { tags = ["foo", "bar"] } } + deletion_protection = false } `, cluster, np) } @@ -5521,6 +5760,7 @@ resource "google_container_cluster" "with_maintenance_window" { location = "us-central1-a" initial_node_count = 1 %s + deletion_protection = false } `, clusterName, maintenancePolicy) } @@ -5544,6 +5784,7 @@ resource "google_container_cluster" "with_recurring_maintenance_window" { location = "us-central1-a" initial_node_count = 1 %s + deletion_protection = false } `, clusterName, maintenancePolicy) @@ -5574,6 +5815,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_window" { end_time = "%s" } } + deletion_protection = false } `, clusterName, w1startTime, w1endTime, w1startTime, w1endTime, w2startTime, w2endTime) } @@ -5609,6 +5851,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_options" { } } } + deletion_protection = false } `, cclusterName, w1startTime, w1endTime, w1startTime, w1endTime, scope1, w2startTime, w2endTime, scope2) } @@ -5638,6 +5881,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_options" { end_time = "%s" } } + deletion_protection = false } `, cclusterName, w1startTime, w1endTime, w1startTime, w1endTime, w2startTime, w2endTime) } @@ -5649,6 +5893,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_options" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false maintenance_policy { recurring_window { @@ -5692,6 +5937,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_window" { recurrence = "FREQ=DAILY" } } + deletion_protection = false } `, clusterName, w1startTime, w1endTime) } @@ -5714,6 +5960,7 @@ resource "google_container_cluster" "with_maintenance_exclusion_window" { end_time = "%s" } } + deletion_protection = false } `, clusterName, w1startTime, w1endTime) } @@ -5755,6 +6002,7 @@ resource "google_container_cluster" "with_ip_allocation_policy" { cluster_secondary_range_name = "pods" services_secondary_range_name = "services" } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5787,6 +6035,7 @@ resource "google_container_cluster" 
"with_ip_allocation_policy" { cluster_ipv4_cidr_block = "10.0.0.0/16" services_ipv4_cidr_block = "10.1.0.0/16" } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5819,6 +6068,7 @@ resource "google_container_cluster" "with_ip_allocation_policy" { cluster_ipv4_cidr_block = "/16" services_ipv4_cidr_block = "/22" } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5856,6 +6106,7 @@ resource "google_container_cluster" "with_stack_type" { services_ipv4_cidr_block = "10.1.0.0/16" stack_type = "IPV4_IPV6" } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5890,6 +6141,7 @@ resource "google_container_cluster" "with_stack_type" { services_ipv4_cidr_block = "10.1.0.0/16" stack_type = "IPV4" } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5922,10 +6174,11 @@ resource "google_container_cluster" "with_pco_disabled" { ip_allocation_policy { cluster_ipv4_cidr_block = "10.1.0.0/16" services_ipv4_cidr_block = "10.2.0.0/16" - pod_cidr_overprovision_config { - disabled = true - } + pod_cidr_overprovision_config { + disabled = true + } } + deletion_protection = false } `, containerNetName, clusterName) } @@ -5954,6 +6207,7 @@ resource "google_container_cluster" "with_resource_usage_export_config" { dataset_id = google_bigquery_dataset.default.dataset_id } } + deletion_protection = false } `, datasetId, clusterName, enableMetering) } @@ -5970,6 +6224,7 @@ resource "google_container_cluster" "with_resource_usage_export_config" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } `, datasetId, clusterName) } @@ -6021,6 +6276,7 @@ resource "google_container_cluster" "with_private_cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = false } `, containerNetName, clusterName, location, autopilotEnabled) } @@ -6076,6 +6332,7 @@ resource "google_container_cluster" "with_private_cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = false } `, containerNetName, clusterName, masterGlobalAccessEnabled) } @@ -6092,6 +6349,7 @@ resource "google_container_cluster" "with_private_cluster" { enabled = %t } } + deletion_protection = false } `, clusterName, masterGlobalAccessEnabled) } @@ -6104,6 +6362,7 @@ resource "google_container_cluster" "with_shielded_nodes" { initial_node_count = 1 enable_shielded_nodes = %v + deletion_protection = false } `, clusterName, enabled) } @@ -6123,6 +6382,7 @@ resource "google_container_cluster" "with_workload_identity_config" { workload_pool = "${data.google_project.project.project_id}.svc.id.goog" } remove_default_node_pool = true + deletion_protection = false } `, projectID, clusterName) @@ -6152,6 +6412,7 @@ resource "google_container_cluster" "with_workload_identity_config" { initial_node_count = 1 remove_default_node_pool = true %s + deletion_protection = false } `, projectID, clusterName, workloadIdentityConfig) } @@ -6183,6 +6444,7 @@ resource "google_container_cluster" "cidr_error_preempt" { cluster_ipv4_cidr_block = "10.0.0.0/16" services_ipv4_cidr_block = "10.1.0.0/16" } + deletion_protection = false } `, containerNetName, clusterName) } @@ 
-6205,6 +6467,7 @@ resource "google_container_cluster" "cidr_error_overlap" { cluster_ipv4_cidr_block = "10.0.0.0/16" services_ipv4_cidr_block = "10.1.0.0/16" } + deletion_protection = false } `, initConfig, secondCluster) } @@ -6215,6 +6478,7 @@ resource "google_container_cluster" "with_resource_labels" { name = "invalid-gke-cluster" location = "%s" initial_node_count = 1 + deletion_protection = false } `, location) } @@ -6232,6 +6496,7 @@ func testAccContainerCluster_withExternalIpsConfig(projectID string, clusterName service_external_ips_config { enabled = %v } + deletion_protection = false }`, projectID, clusterName, enabled) } @@ -6252,6 +6517,7 @@ func testAccContainerCluster_withMeshCertificatesConfigEnabled(projectID string, mesh_certificates { enable_certificates = true } + deletion_protection = false } `, projectID, clusterName) } @@ -6270,9 +6536,10 @@ func testAccContainerCluster_updateMeshCertificatesConfig(projectID string, clus workload_identity_config { workload_pool = "${data.google_project.project.project_id}.svc.id.goog" } - mesh_certificates { + mesh_certificates { enable_certificates = %v - } + } + deletion_protection = false }`, projectID, clusterName, enabled) } @@ -6289,6 +6556,7 @@ func testAccContainerCluster_updateCostManagementConfig(projectID string, cluste cost_management_config { enabled = %v } + deletion_protection = false }`, projectID, clusterName, enabled) } @@ -6325,6 +6593,7 @@ resource "google_container_cluster" "primary" { state = "ENCRYPTED" key_name = "%[2]s" } + deletion_protection = false } `, kmsData.KeyRing.Name, kmsData.CryptoKey.Name, clusterName) } @@ -6343,6 +6612,7 @@ resource "google_container_cluster" "primary" { release_channel { channel = "RAPID" } + deletion_protection = false } `, clusterName, datapathProvider) } @@ -6391,6 +6661,7 @@ resource "google_container_cluster" "with_private_cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = false } `, containerNetName, clusterName) } @@ -6410,6 +6681,7 @@ resource "google_container_cluster" "primary" { auto_upgrade = false } } + deletion_protection = false } `, cluster, np) } @@ -6425,6 +6697,7 @@ resource "google_container_cluster" "primary" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.release_channel_latest_version["STABLE"] initial_node_count = 1 + deletion_protection = false } `, clusterName) } @@ -6440,6 +6713,7 @@ resource "google_container_cluster" "primary" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.uscentral1a.release_channel_latest_version["STABLE"] initial_node_count = 1 + deletion_protection = false # This feature has been available since GKE 1.27, and currently the only # supported Beta API is authentication.k8s.io/v1beta1/selfsubjectreviews. 
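A pattern worth calling out in the test hunks above and below: every cluster config gains `deletion_protection = false`, and every import step adds `deletion_protection` to `ImportStateVerifyIgnore`, presumably because the field is provider-side only and an imported cluster cannot echo it back for verification. A minimal sketch of that step shape, assuming the terraform-plugin-sdk `resource` test helpers these files already use; the helper name and resource address are illustrative only:

```go
package container_test

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// importStep is a hypothetical helper mirroring the import-verification
// step repeated throughout these tests; the resource address passed in is
// purely illustrative.
func importStep(resourceName string) resource.TestStep {
	return resource.TestStep{
		ResourceName:      resourceName,
		ImportState:       true,
		ImportStateVerify: true,
		// deletion_protection cannot be read back from an imported cluster,
		// so the state-verify pass has to skip it.
		ImportStateVerifyIgnore: []string{"deletion_protection"},
	}
}
```

A helper along these lines would stand in for the repeated literal steps, e.g. `importStep("google_container_cluster.primary")`.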
@@ -6479,6 +6753,7 @@ resource "google_container_cluster" "primary" { enable_private_nodes = false master_ipv4_cidr_block = "10.42.0.0/28" } + deletion_protection = false } `, name) } @@ -6544,6 +6819,7 @@ resource "google_container_cluster" "with_autopilot" { name = "%s" location = "%s" enable_autopilot = %v + deletion_protection = false min_master_version = "latest" release_channel { channel = "RAPID" @@ -6587,6 +6863,7 @@ resource "google_container_cluster" "with_dns_config" { cluster_dns_domain = "%s" cluster_dns_scope = "%s" } + deletion_protection = false } `, clusterName, clusterDns, clusterDnsDomain, clusterDnsScope) } @@ -6605,6 +6882,7 @@ resource "google_container_cluster" "primary" { gateway_api_config { channel = "%s" } + deletion_protection = false } `, clusterName, gatewayApiChannel) } @@ -6621,6 +6899,7 @@ resource "google_container_cluster" "primary" { monitoring_config { enable_components = [ "SYSTEM_COMPONENTS" ] } + deletion_protection = false } `, name) } @@ -6634,6 +6913,7 @@ resource "google_container_cluster" "primary" { logging_config { enable_components = [] } + deletion_protection = false } `, name) } @@ -6650,6 +6930,7 @@ resource "google_container_cluster" "primary" { monitoring_config { enable_components = [ "SYSTEM_COMPONENTS" ] } + deletion_protection = false } `, name) } @@ -6667,6 +6948,7 @@ resource "google_container_cluster" "primary" { monitoring_config { enable_components = [ "SYSTEM_COMPONENTS", "APISERVER", "CONTROLLER_MANAGER", "SCHEDULER" ] } + deletion_protection = false } `, name) } @@ -6680,6 +6962,7 @@ resource "google_container_cluster" "primary" { monitoring_config { enable_components = [] } + deletion_protection = false } `, name) } @@ -6693,6 +6976,7 @@ resource "google_container_cluster" "primary" { monitoring_config { enable_components = [ "SYSTEM_COMPONENTS", "APISERVER", "CONTROLLER_MANAGER" ] } + deletion_protection = false } `, name) } @@ -6709,6 +6993,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = false } `, name) } @@ -6725,6 +7010,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = false } `, name) } @@ -6740,6 +7026,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = false } `, name) } @@ -6789,6 +7076,7 @@ resource "google_container_cluster" "primary" { relay_mode = "INTERNAL_VPC_LB" } } + deletion_protection = false } `, name, name) } @@ -6838,6 +7126,7 @@ resource "google_container_cluster" "primary" { relay_mode = "DISABLED" } } + deletion_protection = false } `, name, name) } @@ -6855,7 +7144,7 @@ resource "google_compute_node_group" "group" { zone = "us-central1-f" description = "example google_compute_node_group for Terraform Google Provider" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id } @@ -6869,6 +7158,7 @@ resource "google_container_cluster" "primary" { disk_type = "pd-ssd" node_group = google_compute_node_group.group.name } + deletion_protection = false } `, name, name, name) } @@ -6888,6 +7178,7 @@ resource "google_container_cluster" "primary" { timeouts { create = "40s" } + deletion_protection = false }`, cluster, project, project) } @@ -6902,6 +7193,7 @@ resource "google_container_cluster" "primary" { workload_identity_config { workload_pool = "%s.svc.id.goog" } + deletion_protection = false }`, cluster, project, project) } @@ -6911,6 +7203,7 @@ resource "google_container_cluster" "primary" { name = "%s" location = "us-central1" 
enable_autopilot = true + deletion_protection = false }`, name) } @@ -6922,6 +7215,7 @@ resource "google_container_cluster" "primary" { enable_autopilot = true allow_net_admin = %t min_master_version = 1.27 + deletion_protection = false }`, name, enabled) } @@ -6946,9 +7240,10 @@ func TestAccContainerCluster_customPlacementPolicy(t *testing.T) { ), }, { - ResourceName: "google_container_cluster.cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -6982,6 +7277,7 @@ resource "google_container_cluster" "cluster" { policy_name = google_compute_resource_policy.policy.name } } + deletion_protection = false }`, policyName, cluster, np) } @@ -7061,6 +7357,7 @@ func testAccContainerCluster_additional_pod_ranges_config(name string, nameCount services_secondary_range_name = "gke-autopilot-services" %s } + deletion_protection = false } `, name, name, name, aprc) } diff --git a/google/services/container/resource_container_node_pool.go b/google/services/container/resource_container_node_pool.go index 3229007b191..1573c709ffd 100644 --- a/google/services/container/resource_container_node_pool.go +++ b/google/services/container/resource_container_node_pool.go @@ -45,6 +45,7 @@ func ResourceContainerNodePool() *schema.Resource { }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, resourceNodeConfigEmptyGuestAccelerator, ), @@ -289,15 +290,15 @@ var schemaNodePool = map[string]*schema.Schema{ "auto_repair": { Type: schema.TypeBool, Optional: true, - Default: false, - Description: `Whether the nodes will be automatically repaired.`, + Default: true, + Description: `Whether the nodes will be automatically repaired. Enabled by default.`, }, "auto_upgrade": { Type: schema.TypeBool, Optional: true, - Default: false, - Description: `Whether the nodes will be automatically upgraded.`, + Default: true, + Description: `Whether the nodes will be automatically upgraded. 
Enabled by default.`, }, }, }, @@ -1054,7 +1055,7 @@ func flattenNodePool(d *schema.ResourceData, config *transport_tpg.Config, np *c "initial_node_count": np.InitialNodeCount, "node_locations": schema.NewSet(schema.HashString, tpgresource.ConvertStringArrToInterface(np.Locations)), "node_count": nodeCount, - "node_config": flattenNodeConfig(np.Config), + "node_config": flattenNodeConfig(np.Config, d.Get(prefix+"node_config")), "instance_group_urls": igmUrls, "managed_instance_group_urls": managedIgmUrls, "version": np.Version, diff --git a/google/services/container/resource_container_node_pool_test.go b/google/services/container/resource_container_node_pool_test.go index abcb205fecc..6f2a33209d2 100644 --- a/google/services/container/resource_container_node_pool_test.go +++ b/google/services/container/resource_container_node_pool_test.go @@ -216,7 +216,7 @@ func TestAccContainerNodePool_withNodeConfig(t *testing.T) { ImportStateVerify: true, // autoscaling.# = 0 is equivalent to no autoscaling at all, // but will still cause an import diff - ImportStateVerifyIgnore: []string{"autoscaling.#"}, + ImportStateVerifyIgnore: []string{"autoscaling.#", "node_config.0.taint"}, }, { Config: testAccContainerNodePool_withNodeConfigUpdate(cluster, nodePool), @@ -227,7 +227,7 @@ func TestAccContainerNodePool_withNodeConfig(t *testing.T) { ImportStateVerify: true, // autoscaling.# = 0 is equivalent to no autoscaling at all, // but will still cause an import diff - ImportStateVerifyIgnore: []string{"autoscaling.#"}, + ImportStateVerifyIgnore: []string{"autoscaling.#", "node_config.0.taint"}, }, }, }) @@ -547,6 +547,7 @@ resource "google_container_cluster" "cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = false } resource "google_container_node_pool" "with_enable_private_nodes" { @@ -1186,6 +1187,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.latest_master_version initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1240,6 +1242,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" min_master_version = data.google_container_engine_versions.central1a.latest_master_version initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1287,6 +1290,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1335,6 +1339,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1400,6 +1405,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-f" initial_node_count = 1 min_master_version = "1.25" + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1441,9 +1447,10 @@ func TestAccContainerNodePool_compactPlacement(t *testing.T) { Config: testAccContainerNodePool_compactPlacement(cluster, np, "COMPACT"), }, { - ResourceName: "google_container_cluster.cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.cluster", + 
ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1455,6 +1462,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1508,6 +1516,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_compute_resource_policy" "policy" { @@ -1550,9 +1559,10 @@ func TestAccContainerNodePool_threadsPerCore(t *testing.T) { Config: testAccContainerNodePool_threadsPerCore(cluster, np, 1), }, { - ResourceName: "google_container_cluster.cluster", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_container_cluster.cluster", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, }, }, }) @@ -1564,6 +1574,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false node_config { machine_type = "c2-standard-4" @@ -1636,6 +1647,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1654,6 +1666,7 @@ resource "google_container_cluster" "with_logging_variant" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "with_logging_variant" { @@ -1679,6 +1692,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1735,6 +1749,7 @@ resource "google_container_cluster" "cluster" { master_authorized_networks_config { } + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1793,6 +1808,7 @@ resource "google_container_cluster" "cluster" { master_authorized_networks_config { } + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1811,6 +1827,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1828,6 +1845,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1845,6 +1863,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1861,6 +1880,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1883,6 +1903,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1" initial_node_count = 3 min_master_version = "1.27" + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1906,6 +1927,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1" initial_node_count = 3 min_master_version = "1.27" + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1934,6 +1956,7 @@ resource "google_container_cluster" "cluster" { 
location = "us-central1" initial_node_count = 3 min_master_version = "1.27" + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1952,6 +1975,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1973,6 +1997,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -1999,6 +2024,7 @@ resource "google_container_cluster" "cluster" { "us-central1-b", "us-central1-c", ] + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2021,6 +2047,7 @@ resource "google_container_cluster" "cluster" { "us-central1-b", "us-central1-c", ] + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2041,6 +2068,7 @@ resource "google_container_cluster" "cluster" { release_channel { channel = "UNSPECIFIED" } + deletion_protection = false } resource "google_container_node_pool" "np_with_management" { @@ -2066,6 +2094,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np_with_node_config" { @@ -2120,6 +2149,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np_with_node_config" { @@ -2181,6 +2211,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "with_reservation_affinity" { @@ -2213,6 +2244,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_compute_reservation" "gce_reservation" { @@ -2263,6 +2295,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "with_workload_metadata_config" { @@ -2304,6 +2337,7 @@ resource "google_container_cluster" "cluster" { workload_identity_config { workload_pool = "${data.google_project.project.project_id}.svc.id.goog" } + deletion_protection = false } resource "google_container_node_pool" "with_workload_metadata_config" { @@ -2336,6 +2370,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } # cpu_manager_policy & cpu_cfs_quota_period cannot be blank if cpu_cfs_quota is set to true @@ -2357,6 +2392,7 @@ resource "google_container_node_pool" "with_kubelet_config" { "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", ] + logging_variant = "DEFAULT" } } `, cluster, np, policy, quota, period, podPidsLimit) @@ -2396,6 +2432,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = 
data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "with_linux_node_config" { @@ -2466,6 +2503,7 @@ resource "google_container_cluster" "cluster" { release_channel { channel = "RAPID" } + deletion_protection = false } resource "google_container_node_pool" "with_manual_pod_cidr" { @@ -2557,6 +2595,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1" initial_node_count = 1 min_master_version = "${data.google_container_engine_versions.central1.latest_master_version}" + deletion_protection = false } resource "google_container_node_pool" "with_upgrade_settings" { @@ -2580,6 +2619,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-c" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1c.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "np_with_gpu" { @@ -2629,6 +2669,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np_with_node_config_scope_alias" { @@ -2656,6 +2697,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2680,6 +2722,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2699,6 +2742,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-f" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2724,6 +2768,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-f" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2745,6 +2790,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-f" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2770,6 +2816,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-f" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2800,6 +2847,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-f" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2835,6 +2883,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -2858,6 +2907,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np1" { @@ -2882,6 +2932,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central1-a" initial_node_count = 3 + deletion_protection = false } resource "google_container_node_pool" "np1" { @@ -2917,7 +2968,7 @@ resource "google_compute_node_template" "soletenant-tmpl" { resource 
"google_compute_node_group" "nodes" { name = "tf-test-soletenant-group" zone = "us-central1-a" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id } @@ -2926,6 +2977,7 @@ resource "google_container_cluster" "cluster" { location = "us-central1-a" initial_node_count = 1 min_master_version = data.google_container_engine_versions.central1a.latest_master_version + deletion_protection = false } resource "google_container_node_pool" "with_sole_tenant_config" { @@ -3002,6 +3054,7 @@ resource "google_container_cluster" "cluster" { } machine_type = "n2-standard-2" } + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -3031,6 +3084,7 @@ resource "google_container_cluster" "cluster" { } machine_type = "n2-standard-2" } + deletion_protection = false } resource "google_container_node_pool" "np" { @@ -3085,6 +3139,7 @@ resource "google_container_cluster" "cluster" { name = "%s" location = "us-central2-b" initial_node_count = 1 + deletion_protection = false } resource "google_container_node_pool" "regular_pool" { diff --git a/google/services/containeranalysis/resource_container_analysis_note.go b/google/services/containeranalysis/resource_container_analysis_note.go index da3d5afbc85..01ab4841e66 100644 --- a/google/services/containeranalysis/resource_container_analysis_note.go +++ b/google/services/containeranalysis/resource_container_analysis_note.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceContainerAnalysisNote() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "attestation_authority": { Type: schema.TypeList, @@ -544,9 +549,9 @@ func resourceContainerAnalysisNoteDelete(d *schema.ResourceData, meta interface{ func resourceContainerAnalysisNoteImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/notes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/notes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/containeranalysis/resource_container_analysis_occurrence.go b/google/services/containeranalysis/resource_container_analysis_occurrence.go index 33f2469745f..b3cbd3108b9 100644 --- a/google/services/containeranalysis/resource_container_analysis_occurrence.go +++ b/google/services/containeranalysis/resource_container_analysis_occurrence.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceContainerAnalysisOccurrence() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "attestation": { Type: schema.TypeList, @@ -483,9 +488,9 @@ func resourceContainerAnalysisOccurrenceDelete(d *schema.ResourceData, meta inte func resourceContainerAnalysisOccurrenceImport(d *schema.ResourceData, meta 
interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/occurrences/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/occurrences/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/containeranalysis/resource_container_registry.go b/google/services/containeranalysis/resource_container_registry.go index abb9480b55c..80b8b2f1626 100644 --- a/google/services/containeranalysis/resource_container_registry.go +++ b/google/services/containeranalysis/resource_container_registry.go @@ -7,6 +7,7 @@ import ( "log" "strings" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -18,6 +19,10 @@ func ResourceContainerRegistry() *schema.Resource { Read: resourceContainerRegistryRead, Delete: resourceContainerRegistryDelete, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, diff --git a/google/services/containerattached/resource_container_attached_cluster.go b/google/services/containerattached/resource_container_attached_cluster.go index 37ab0c4aa71..625618d4fd0 100644 --- a/google/services/containerattached/resource_container_attached_cluster.go +++ b/google/services/containerattached/resource_container_attached_cluster.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -59,6 +60,11 @@ func ResourceContainerAttachedCluster() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetAnnotationsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "distribution": { Type: schema.TypeString, @@ -266,6 +272,12 @@ this is an Azure region.`, Computed: true, Description: `Output only. 
The time at which this cluster was created.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "errors": { Type: schema.TypeList, Computed: true, @@ -393,12 +405,6 @@ func resourceContainerAttachedClusterCreate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("fleet"); !tpgresource.IsEmptyValue(reflect.ValueOf(fleetProp)) && (ok || !reflect.DeepEqual(v, fleetProp)) { obj["fleet"] = fleetProp } - annotationsProp, err := expandContainerAttachedClusterAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } loggingConfigProp, err := expandContainerAttachedClusterLoggingConfig(d.Get("logging_config"), d, config) if err != nil { return err @@ -423,6 +429,12 @@ func resourceContainerAttachedClusterCreate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("binary_authorization"); !tpgresource.IsEmptyValue(reflect.ValueOf(binaryAuthorizationProp)) && (ok || !reflect.DeepEqual(v, binaryAuthorizationProp)) { obj["binaryAuthorization"] = binaryAuthorizationProp } + annotationsProp, err := expandContainerAttachedClusterEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ContainerAttachedBasePath}}projects/{{project}}/locations/{{location}}/attachedClusters?attached_cluster_id={{name}}") if err != nil { @@ -598,6 +610,9 @@ func resourceContainerAttachedClusterRead(d *schema.ResourceData, meta interface if err := d.Set("binary_authorization", flattenContainerAttachedClusterBinaryAuthorization(res["binaryAuthorization"], d, config)); err != nil { return fmt.Errorf("Error reading Cluster: %s", err) } + if err := d.Set("effective_annotations", flattenContainerAttachedClusterEffectiveAnnotations(res["annotations"], d, config)); err != nil { + return fmt.Errorf("Error reading Cluster: %s", err) + } return nil } @@ -642,12 +657,6 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("fleet"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, fleetProp)) { obj["fleet"] = fleetProp } - annotationsProp, err := expandContainerAttachedClusterAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } loggingConfigProp, err := expandContainerAttachedClusterLoggingConfig(d.Get("logging_config"), d, config) if err != nil { return err @@ -672,6 +681,12 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("binary_authorization"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, binaryAuthorizationProp)) { 
obj["binaryAuthorization"] = binaryAuthorizationProp } + annotationsProp, err := expandContainerAttachedClusterEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{ContainerAttachedBasePath}}projects/{{project}}/locations/{{location}}/attachedClusters/{{name}}") if err != nil { @@ -697,10 +712,6 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa updateMask = append(updateMask, "fleet") } - if d.HasChange("annotations") { - updateMask = append(updateMask, "annotations") - } - if d.HasChange("logging_config") { updateMask = append(updateMask, "loggingConfig") } @@ -716,6 +727,10 @@ func resourceContainerAttachedClusterUpdate(d *schema.ResourceData, meta interfa if d.HasChange("binary_authorization") { updateMask = append(updateMask, "binaryAuthorization") } + + if d.HasChange("effective_annotations") { + updateMask = append(updateMask, "annotations") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -845,9 +860,9 @@ func resourceContainerAttachedClusterDelete(d *schema.ResourceData, meta interfa func resourceContainerAttachedClusterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/attachedClusters/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/attachedClusters/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -961,7 +976,18 @@ func flattenContainerAttachedClusterKubernetesVersion(v interface{}, d *schema.R } func flattenContainerAttachedClusterAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenContainerAttachedClusterWorkloadIdentityConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1111,6 +1137,10 @@ func flattenContainerAttachedClusterBinaryAuthorizationEvaluationMode(v interfac return v } +func flattenContainerAttachedClusterEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandContainerAttachedClusterName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1195,17 +1225,6 @@ func expandContainerAttachedClusterFleetProject(v interface{}, d tpgresource.Ter return v, nil } -func expandContainerAttachedClusterAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - 
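The resources touched above all gain a `CustomizeDiff` built with `customdiff.All`, composing provider helpers such as `tpgresource.DefaultProviderProject` and `tpgresource.SetAnnotationsDiff`. As a minimal, self-contained sketch of that composition pattern only (not the provider's actual helpers), the snippet below wires a hypothetical plan-time function into a resource; the function body, the placeholder project value, and the schema are illustrative assumptions.

```go
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// defaultProjectIfUnset is a hypothetical stand-in for a helper like
// tpgresource.DefaultProviderProject: when "project" is absent from the
// configuration, set a plan-time value so the diff shows the effective
// project instead of "(known after apply)".
func defaultProjectIfUnset(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	if d.Get("project").(string) != "" {
		return nil
	}
	// In the real provider this value would come from the provider configuration.
	const assumedProviderProject = "my-default-project"
	return d.SetNew("project", assumedProviderProject)
}

func resourceExample() *schema.Resource {
	return &schema.Resource{
		// customdiff.All runs every CustomizeDiffFunc and combines their errors,
		// which is how several plan-time adjustments are attached to one resource.
		CustomizeDiff: customdiff.All(
			defaultProjectIfUnset,
		),
		Schema: map[string]*schema.Schema{
			"project": {Type: schema.TypeString, Optional: true, Computed: true},
		},
	}
}
```

In SDK v2 a provider would expose `resourceExample()` under a resource type name in its `ResourcesMap`, the same way the generated resources in this diff are registered.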
func expandContainerAttachedClusterLoggingConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -1371,3 +1390,14 @@ func expandContainerAttachedClusterBinaryAuthorization(v interface{}, d tpgresou func expandContainerAttachedClusterBinaryAuthorizationEvaluationMode(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandContainerAttachedClusterEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/containerattached/resource_container_attached_cluster_generated_test.go b/google/services/containerattached/resource_container_attached_cluster_generated_test.go index 4ca6a9bbae2..0bb8f68d758 100644 --- a/google/services/containerattached/resource_container_attached_cluster_generated_test.go +++ b/google/services/containerattached/resource_container_attached_cluster_generated_test.go @@ -49,7 +49,7 @@ func TestAccContainerAttachedCluster_containerAttachedClusterBasicExample(t *tes ResourceName: "google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "annotations"}, }, }, }) @@ -101,7 +101,7 @@ func TestAccContainerAttachedCluster_containerAttachedClusterFullExample(t *test ResourceName: "google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "annotations"}, }, }, }) @@ -173,7 +173,7 @@ func TestAccContainerAttachedCluster_containerAttachedClusterIgnoreErrorsExample ResourceName: "google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "deletion_policy"}, + ImportStateVerifyIgnore: []string{"location", "deletion_policy", "annotations"}, }, }, }) diff --git a/google/services/containerattached/resource_container_attached_cluster_update_test.go b/google/services/containerattached/resource_container_attached_cluster_update_test.go index fdf84cc0cbc..3b6b06e55ca 100644 --- a/google/services/containerattached/resource_container_attached_cluster_update_test.go +++ b/google/services/containerattached/resource_container_attached_cluster_update_test.go @@ -28,7 +28,7 @@ func TestAccContainerAttachedCluster_update(t *testing.T) { ResourceName: "google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "annotations"}, }, { Config: testAccContainerAttachedCluster_containerAttachedCluster_update(context), @@ -37,7 +37,7 @@ func TestAccContainerAttachedCluster_update(t *testing.T) { ResourceName: "google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "annotations"}, }, { Config: testAccContainerAttachedCluster_containerAttachedCluster_destroy(context), @@ -46,7 +46,7 @@ func TestAccContainerAttachedCluster_update(t *testing.T) { ResourceName: 
"google_container_attached_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "annotations"}, }, }, }) diff --git a/google/services/containeraws/resource_container_aws_cluster.go b/google/services/containeraws/resource_container_aws_cluster.go index 609bd3e8f26..26ff0740eae 100644 --- a/google/services/containeraws/resource_container_aws_cluster.go +++ b/google/services/containeraws/resource_container_aws_cluster.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceContainerAwsCluster() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "authorization": { @@ -106,11 +111,13 @@ func ResourceContainerAwsCluster() *schema.Resource { Elem: ContainerAwsClusterNetworkingSchema(), }, - "annotations": { - Type: schema.TypeMap, + "binary_authorization": { + Type: schema.TypeList, + Computed: true, Optional: true, - Description: "Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.", - Elem: &schema.Schema{Type: schema.TypeString}, + Description: "Configuration options for the Binary Authorization feature.", + MaxItems: 1, + Elem: ContainerAwsClusterBinaryAuthorizationSchema(), }, "description": { @@ -119,6 +126,12 @@ func ResourceContainerAwsCluster() *schema.Resource { Description: "Optional. A human readable description of this cluster. Cannot be longer than 255 UTF-8 encoded bytes.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + "project": { Type: schema.TypeString, Computed: true, @@ -128,6 +141,13 @@ func ResourceContainerAwsCluster() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -529,6 +549,19 @@ func ContainerAwsClusterNetworkingSchema() *schema.Resource { } } +func ContainerAwsClusterBinaryAuthorizationSchema() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "evaluation_mode": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: "Mode of operation for Binary Authorization policy evaluation. Possible values: DISABLED, PROJECT_SINGLETON_POLICY_ENFORCE", + }, + }, + } +} + func ContainerAwsClusterWorkloadIdentityConfigSchema() *schema.Resource { return &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -561,16 +594,17 @@ func resourceContainerAwsClusterCreate(d *schema.ResourceData, meta interface{}) } obj := &containeraws.Cluster{ - Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), - AwsRegion: dcl.String(d.Get("aws_region").(string)), - ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), - Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), - Location: dcl.String(d.Get("location").(string)), - Name: dcl.String(d.Get("name").(string)), - Networking: expandContainerAwsClusterNetworking(d.Get("networking")), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), - Description: dcl.String(d.Get("description").(string)), - Project: dcl.String(project), + Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), + AwsRegion: dcl.String(d.Get("aws_region").(string)), + ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), + Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), + Location: dcl.String(d.Get("location").(string)), + Name: dcl.String(d.Get("name").(string)), + Networking: expandContainerAwsClusterNetworking(d.Get("networking")), + BinaryAuthorization: expandContainerAwsClusterBinaryAuthorization(d.Get("binary_authorization")), + Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Project: dcl.String(project), } id, err := obj.ID() @@ -618,16 +652,17 @@ func resourceContainerAwsClusterRead(d *schema.ResourceData, meta interface{}) e } obj := &containeraws.Cluster{ - Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), - AwsRegion: dcl.String(d.Get("aws_region").(string)), - ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), - Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), - Location: dcl.String(d.Get("location").(string)), - Name: dcl.String(d.Get("name").(string)), - Networking: expandContainerAwsClusterNetworking(d.Get("networking")), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), - Description: dcl.String(d.Get("description").(string)), - Project: dcl.String(project), + Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), + AwsRegion: dcl.String(d.Get("aws_region").(string)), + ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), + Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), + Location: dcl.String(d.Get("location").(string)), + Name: dcl.String(d.Get("name").(string)), + Networking: expandContainerAwsClusterNetworking(d.Get("networking")), + BinaryAuthorization: expandContainerAwsClusterBinaryAuthorization(d.Get("binary_authorization")), + Description: 
dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Project: dcl.String(project), } userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) @@ -673,15 +708,21 @@ func resourceContainerAwsClusterRead(d *schema.ResourceData, meta interface{}) e if err = d.Set("networking", flattenContainerAwsClusterNetworking(res.Networking)); err != nil { return fmt.Errorf("error setting networking in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) + if err = d.Set("binary_authorization", flattenContainerAwsClusterBinaryAuthorization(res.BinaryAuthorization)); err != nil { + return fmt.Errorf("error setting binary_authorization in state: %s", err) } if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenContainerAwsClusterAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -717,16 +758,17 @@ func resourceContainerAwsClusterUpdate(d *schema.ResourceData, meta interface{}) } obj := &containeraws.Cluster{ - Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), - AwsRegion: dcl.String(d.Get("aws_region").(string)), - ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), - Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), - Location: dcl.String(d.Get("location").(string)), - Name: dcl.String(d.Get("name").(string)), - Networking: expandContainerAwsClusterNetworking(d.Get("networking")), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), - Description: dcl.String(d.Get("description").(string)), - Project: dcl.String(project), + Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), + AwsRegion: dcl.String(d.Get("aws_region").(string)), + ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), + Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), + Location: dcl.String(d.Get("location").(string)), + Name: dcl.String(d.Get("name").(string)), + Networking: expandContainerAwsClusterNetworking(d.Get("networking")), + BinaryAuthorization: expandContainerAwsClusterBinaryAuthorization(d.Get("binary_authorization")), + Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Project: dcl.String(project), } directive := tpgdclresource.UpdateDirective userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) @@ -769,16 +811,17 @@ func resourceContainerAwsClusterDelete(d *schema.ResourceData, meta interface{}) } obj := &containeraws.Cluster{ - Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), - AwsRegion: dcl.String(d.Get("aws_region").(string)), - ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), - Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), - Location: dcl.String(d.Get("location").(string)), - Name: 
dcl.String(d.Get("name").(string)), - Networking: expandContainerAwsClusterNetworking(d.Get("networking")), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), - Description: dcl.String(d.Get("description").(string)), - Project: dcl.String(project), + Authorization: expandContainerAwsClusterAuthorization(d.Get("authorization")), + AwsRegion: dcl.String(d.Get("aws_region").(string)), + ControlPlane: expandContainerAwsClusterControlPlane(d.Get("control_plane")), + Fleet: expandContainerAwsClusterFleet(d.Get("fleet")), + Location: dcl.String(d.Get("location").(string)), + Name: dcl.String(d.Get("name").(string)), + Networking: expandContainerAwsClusterNetworking(d.Get("networking")), + BinaryAuthorization: expandContainerAwsClusterBinaryAuthorization(d.Get("binary_authorization")), + Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), + Project: dcl.String(project), } log.Printf("[DEBUG] Deleting Cluster %q", d.Id()) @@ -1219,6 +1262,32 @@ func flattenContainerAwsClusterNetworking(obj *containeraws.ClusterNetworking) i } +func expandContainerAwsClusterBinaryAuthorization(o interface{}) *containeraws.ClusterBinaryAuthorization { + if o == nil { + return nil + } + objArr := o.([]interface{}) + if len(objArr) == 0 || objArr[0] == nil { + return nil + } + obj := objArr[0].(map[string]interface{}) + return &containeraws.ClusterBinaryAuthorization{ + EvaluationMode: containeraws.ClusterBinaryAuthorizationEvaluationModeEnumRef(obj["evaluation_mode"].(string)), + } +} + +func flattenContainerAwsClusterBinaryAuthorization(obj *containeraws.ClusterBinaryAuthorization) interface{} { + if obj == nil || obj.Empty() { + return nil + } + transformed := map[string]interface{}{ + "evaluation_mode": obj.EvaluationMode, + } + + return []interface{}{transformed} + +} + func flattenContainerAwsClusterWorkloadIdentityConfig(obj *containeraws.ClusterWorkloadIdentityConfig) interface{} { if obj == nil || obj.Empty() { return nil @@ -1232,3 +1301,18 @@ func flattenContainerAwsClusterWorkloadIdentityConfig(obj *containeraws.ClusterW return []interface{}{transformed} } + +func flattenContainerAwsClusterAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/containeraws/resource_container_aws_cluster_generated_test.go b/google/services/containeraws/resource_container_aws_cluster_generated_test.go index d552d51325f..8979a1cd4c4 100644 --- a/google/services/containeraws/resource_container_aws_cluster_generated_test.go +++ b/google/services/containeraws/resource_container_aws_cluster_generated_test.go @@ -63,7 +63,7 @@ func TestAccContainerAwsCluster_BasicHandWritten(t *testing.T) { ResourceName: "google_container_aws_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "annotations"}, }, { Config: testAccContainerAwsCluster_BasicHandWrittenUpdate0(context), @@ -72,7 +72,7 @@ func TestAccContainerAwsCluster_BasicHandWritten(t *testing.T) { ResourceName: "google_container_aws_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", 
"annotations"}, }, }, }) @@ -107,7 +107,7 @@ func TestAccContainerAwsCluster_BasicEnumHandWritten(t *testing.T) { ResourceName: "google_container_aws_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "annotations"}, }, { Config: testAccContainerAwsCluster_BasicEnumHandWrittenUpdate0(context), @@ -116,7 +116,7 @@ func TestAccContainerAwsCluster_BasicEnumHandWritten(t *testing.T) { ResourceName: "google_container_aws_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "annotations"}, }, }, }) diff --git a/google/services/containeraws/resource_container_aws_node_pool.go b/google/services/containeraws/resource_container_aws_node_pool.go index 9a6cf33f4f3..0a6eaef8f5a 100644 --- a/google/services/containeraws/resource_container_aws_node_pool.go +++ b/google/services/containeraws/resource_container_aws_node_pool.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceContainerAwsNodePool() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "autoscaling": { @@ -112,11 +117,10 @@ func ResourceContainerAwsNodePool() *schema.Resource { Description: "The Kubernetes version to run on this node pool (e.g. `1.19.10-gke.1000`). You can list all supported versions on a given Google Cloud region by calling GetAwsServerConfig.", }, - "annotations": { + "effective_annotations": { Type: schema.TypeMap, - Optional: true, - Description: "Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", }, "management": { @@ -137,6 +141,13 @@ func ResourceContainerAwsNodePool() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -455,7 +466,7 @@ func resourceContainerAwsNodePoolCreate(d *schema.ResourceData, meta interface{} Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAwsNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -513,7 +524,7 @@ func resourceContainerAwsNodePoolRead(d *schema.ResourceData, meta interface{}) Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAwsNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -564,8 +575,8 @@ func resourceContainerAwsNodePoolRead(d *schema.ResourceData, meta interface{}) if err = d.Set("version", res.Version); err != nil { return fmt.Errorf("error setting version in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) } if err = d.Set("management", tpgresource.FlattenContainerAwsNodePoolManagement(res.Management, d, config)); err != nil { return fmt.Errorf("error setting management in state: %s", err) @@ -573,6 +584,9 @@ func resourceContainerAwsNodePoolRead(d *schema.ResourceData, meta interface{}) if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenContainerAwsNodePoolAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -610,7 +624,7 @@ func resourceContainerAwsNodePoolUpdate(d *schema.ResourceData, meta interface{} Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAwsNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -663,7 +677,7 @@ func resourceContainerAwsNodePoolDelete(d *schema.ResourceData, meta interface{} Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAwsNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -1040,3 +1054,18 @@ func flattenContainerAwsNodePoolManagement(obj *containeraws.NodePoolManagement) return []interface{}{transformed} } + +func flattenContainerAwsNodePoolAnnotations(v 
map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/containeraws/resource_container_aws_node_pool_generated_test.go b/google/services/containeraws/resource_container_aws_node_pool_generated_test.go index 40419613a40..cd818bffbc6 100644 --- a/google/services/containeraws/resource_container_aws_node_pool_generated_test.go +++ b/google/services/containeraws/resource_container_aws_node_pool_generated_test.go @@ -63,7 +63,7 @@ func TestAccContainerAwsNodePool_BasicHandWritten(t *testing.T) { ResourceName: "google_container_aws_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, { Config: testAccContainerAwsNodePool_BasicHandWrittenUpdate0(context), @@ -72,7 +72,7 @@ func TestAccContainerAwsNodePool_BasicHandWritten(t *testing.T) { ResourceName: "google_container_aws_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, }, }) @@ -107,7 +107,7 @@ func TestAccContainerAwsNodePool_BasicEnumHandWritten(t *testing.T) { ResourceName: "google_container_aws_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, { Config: testAccContainerAwsNodePool_BasicEnumHandWrittenUpdate0(context), @@ -116,7 +116,7 @@ func TestAccContainerAwsNodePool_BasicEnumHandWritten(t *testing.T) { ResourceName: "google_container_aws_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, }, }) diff --git a/google/services/containerazure/resource_container_azure_client.go b/google/services/containerazure/resource_container_azure_client.go index d63114a4ee3..549650fc60e 100644 --- a/google/services/containerazure/resource_container_azure_client.go +++ b/google/services/containerazure/resource_container_azure_client.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,9 @@ func ResourceContainerAzureClient() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "application_id": { diff --git a/google/services/containerazure/resource_container_azure_cluster.go 
b/google/services/containerazure/resource_container_azure_cluster.go index 00c30fa446c..a8b481a0a11 100644 --- a/google/services/containerazure/resource_container_azure_cluster.go +++ b/google/services/containerazure/resource_container_azure_cluster.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceContainerAzureCluster() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "authorization": { @@ -114,14 +119,6 @@ func ResourceContainerAzureCluster() *schema.Resource { Description: "The ARM ID of the resource group where the cluster resources are deployed. For example: `/subscriptions/*/resourceGroups/*`", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: "Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "azure_services_authentication": { Type: schema.TypeList, Optional: true, @@ -145,6 +142,13 @@ func ResourceContainerAzureCluster() *schema.Resource { Description: "Optional. A human readable description of this cluster. Cannot be longer than 255 UTF-8 encoded bytes.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + "project": { Type: schema.TypeString, Computed: true, @@ -154,6 +158,14 @@ func ResourceContainerAzureCluster() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: "Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -520,10 +532,10 @@ func resourceContainerAzureClusterCreate(d *schema.ResourceData, meta interface{ Name: dcl.String(d.Get("name").(string)), Networking: expandContainerAzureClusterNetworking(d.Get("networking")), ResourceGroupId: dcl.String(d.Get("resource_group_id").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureServicesAuthentication: expandContainerAzureClusterAzureServicesAuthentication(d.Get("azure_services_authentication")), Client: dcl.String(d.Get("client").(string)), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Project: dcl.String(project), } @@ -580,10 +592,10 @@ func resourceContainerAzureClusterRead(d *schema.ResourceData, meta interface{}) Name: dcl.String(d.Get("name").(string)), Networking: expandContainerAzureClusterNetworking(d.Get("networking")), ResourceGroupId: dcl.String(d.Get("resource_group_id").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureServicesAuthentication: expandContainerAzureClusterAzureServicesAuthentication(d.Get("azure_services_authentication")), Client: dcl.String(d.Get("client").(string)), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Project: dcl.String(project), } @@ -633,9 +645,6 @@ func resourceContainerAzureClusterRead(d *schema.ResourceData, meta interface{}) if err = d.Set("resource_group_id", res.ResourceGroupId); err != nil { return fmt.Errorf("error setting resource_group_id in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("azure_services_authentication", flattenContainerAzureClusterAzureServicesAuthentication(res.AzureServicesAuthentication)); err != nil { return fmt.Errorf("error setting azure_services_authentication in state: %s", err) } @@ -645,9 +654,15 @@ func resourceContainerAzureClusterRead(d *schema.ResourceData, meta interface{}) if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenContainerAzureClusterAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -691,10 +706,10 @@ func resourceContainerAzureClusterUpdate(d *schema.ResourceData, meta interface{ Name: dcl.String(d.Get("name").(string)), Networking: expandContainerAzureClusterNetworking(d.Get("networking")), ResourceGroupId: dcl.String(d.Get("resource_group_id").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureServicesAuthentication: expandContainerAzureClusterAzureServicesAuthentication(d.Get("azure_services_authentication")), Client: dcl.String(d.Get("client").(string)), Description: 
dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Project: dcl.String(project), } directive := tpgdclresource.UpdateDirective @@ -746,10 +761,10 @@ func resourceContainerAzureClusterDelete(d *schema.ResourceData, meta interface{ Name: dcl.String(d.Get("name").(string)), Networking: expandContainerAzureClusterNetworking(d.Get("networking")), ResourceGroupId: dcl.String(d.Get("resource_group_id").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureServicesAuthentication: expandContainerAzureClusterAzureServicesAuthentication(d.Get("azure_services_authentication")), Client: dcl.String(d.Get("client").(string)), Description: dcl.String(d.Get("description").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Project: dcl.String(project), } @@ -1211,3 +1226,18 @@ func flattenContainerAzureClusterWorkloadIdentityConfig(obj *containerazure.Clus return []interface{}{transformed} } + +func flattenContainerAzureClusterAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/containerazure/resource_container_azure_cluster_generated_test.go b/google/services/containerazure/resource_container_azure_cluster_generated_test.go index 05a8a6be2c3..9e779d6672d 100644 --- a/google/services/containerazure/resource_container_azure_cluster_generated_test.go +++ b/google/services/containerazure/resource_container_azure_cluster_generated_test.go @@ -59,7 +59,7 @@ func TestAccContainerAzureCluster_BasicHandWritten(t *testing.T) { ResourceName: "google_container_azure_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "annotations"}, }, { Config: testAccContainerAzureCluster_BasicHandWrittenUpdate0(context), @@ -68,7 +68,7 @@ func TestAccContainerAzureCluster_BasicHandWritten(t *testing.T) { ResourceName: "google_container_azure_cluster.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"fleet.0.project"}, + ImportStateVerifyIgnore: []string{"fleet.0.project", "annotations"}, }, }, }) diff --git a/google/services/containerazure/resource_container_azure_node_pool.go b/google/services/containerazure/resource_container_azure_node_pool.go index 166901e8f1b..1c4a5fe267c 100644 --- a/google/services/containerazure/resource_container_azure_node_pool.go +++ b/google/services/containerazure/resource_container_azure_node_pool.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceContainerAzureNodePool() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetAnnotationsDiff, + ), Schema: map[string]*schema.Schema{ "autoscaling": { @@ -112,13 +117,6 @@ func ResourceContainerAzureNodePool() *schema.Resource { Description: "The Kubernetes version (e.g. 
`1.19.10-gke.1000`) running on this node pool.", }, - "annotations": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "azure_availability_zone": { Type: schema.TypeString, Computed: true, @@ -127,6 +125,12 @@ func ResourceContainerAzureNodePool() *schema.Resource { Description: "Optional. The Azure availability zone of the nodes in this nodepool. When unspecified, it defaults to `1`.", }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: "All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.", + }, + "management": { Type: schema.TypeList, Computed: true, @@ -145,6 +149,13 @@ func ResourceContainerAzureNodePool() *schema.Resource { Description: "The project for the resource", }, + "annotations": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "create_time": { Type: schema.TypeString, Computed: true, @@ -339,8 +350,8 @@ func resourceContainerAzureNodePoolCreate(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureAvailabilityZone: dcl.StringOrNil(d.Get("azure_availability_zone").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAzureNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -398,8 +409,8 @@ func resourceContainerAzureNodePoolRead(d *schema.ResourceData, meta interface{} Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureAvailabilityZone: dcl.StringOrNil(d.Get("azure_availability_zone").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAzureNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -450,18 +461,21 @@ func resourceContainerAzureNodePoolRead(d *schema.ResourceData, meta interface{} if err = d.Set("version", res.Version); err != nil { return fmt.Errorf("error setting version in state: %s", err) } - if err = d.Set("annotations", res.Annotations); err != nil { - return fmt.Errorf("error setting annotations in state: %s", err) - } if err = d.Set("azure_availability_zone", res.AzureAvailabilityZone); err != nil { return fmt.Errorf("error setting azure_availability_zone in state: %s", err) } + if err = d.Set("effective_annotations", res.Annotations); err != nil { + return fmt.Errorf("error setting effective_annotations in state: %s", err) + } if err = d.Set("management", tpgresource.FlattenContainerAzureNodePoolManagement(res.Management, d, config)); err != nil { return fmt.Errorf("error setting management in state: %s", err) } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } + if err = d.Set("annotations", flattenContainerAzureNodePoolAnnotations(res.Annotations, d)); err != nil { + return fmt.Errorf("error setting annotations in state: %s", err) + } if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } @@ -499,8 +513,8 @@ func resourceContainerAzureNodePoolUpdate(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureAvailabilityZone: dcl.StringOrNil(d.Get("azure_availability_zone").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAzureNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -553,8 +567,8 @@ func resourceContainerAzureNodePoolDelete(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), SubnetId: dcl.String(d.Get("subnet_id").(string)), Version: dcl.String(d.Get("version").(string)), - Annotations: tpgresource.CheckStringMap(d.Get("annotations")), AzureAvailabilityZone: 
dcl.StringOrNil(d.Get("azure_availability_zone").(string)), + Annotations: tpgresource.CheckStringMap(d.Get("effective_annotations")), Management: expandContainerAzureNodePoolManagement(d.Get("management")), Project: dcl.String(project), } @@ -798,3 +812,18 @@ func flattenContainerAzureNodePoolManagement(obj *containerazure.NodePoolManagem return []interface{}{transformed} } + +func flattenContainerAzureNodePoolAnnotations(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("annotations").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/containerazure/resource_container_azure_node_pool_generated_test.go b/google/services/containerazure/resource_container_azure_node_pool_generated_test.go index 7d34a52a5aa..c6b10f5a853 100644 --- a/google/services/containerazure/resource_container_azure_node_pool_generated_test.go +++ b/google/services/containerazure/resource_container_azure_node_pool_generated_test.go @@ -59,7 +59,7 @@ func TestAccContainerAzureNodePool_BasicHandWritten(t *testing.T) { ResourceName: "google_container_azure_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, { Config: testAccContainerAzureNodePool_BasicHandWrittenUpdate0(context), @@ -68,7 +68,7 @@ func TestAccContainerAzureNodePool_BasicHandWritten(t *testing.T) { ResourceName: "google_container_azure_node_pool.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"management.#", "management.0.%", "management.0.auto_repair"}, + ImportStateVerifyIgnore: []string{"management.#", "management.0.%", "management.0.auto_repair", "annotations"}, }, }, }) diff --git a/google/services/corebilling/resource_billing_project_info.go b/google/services/corebilling/resource_billing_project_info.go index e328136ba47..7b4ca75bf6c 100644 --- a/google/services/corebilling/resource_billing_project_info.go +++ b/google/services/corebilling/resource_billing_project_info.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceCoreBillingProjectInfo() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "billing_account": { Type: schema.TypeString, @@ -296,8 +301,8 @@ func resourceCoreBillingProjectInfoDelete(d *schema.ResourceData, meta interface func resourceCoreBillingProjectInfoImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P.+)$", + "^(?P.+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/corebilling/resource_billing_project_info_generated_test.go b/google/services/corebilling/resource_billing_project_info_generated_test.go index 0f341461d9c..6c02c550472 100644 --- a/google/services/corebilling/resource_billing_project_info_generated_test.go 
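Across these files the import-ID patterns are tightened with `^` and `$` anchors (most recently for the billing project info resource just above). A standalone sketch of why anchoring matters, using only the standard `regexp` package rather than the provider's `ParseImportId` helper; the sample IDs are made up for illustration.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// An unanchored pattern happily matches a fragment of a longer,
	// malformed import ID; the anchored form only accepts a full match.
	unanchored := regexp.MustCompile(`projects/(?P<project>[^/]+)`)
	anchored := regexp.MustCompile(`^projects/(?P<project>[^/]+)$`)

	id := "foo/projects/my-project/extra"

	fmt.Println(unanchored.FindStringSubmatch(id)) // finds "projects/my-project" inside the bad ID
	fmt.Println(anchored.FindStringSubmatch(id))   // nil: the full ID does not fit the expected shape
	fmt.Println(anchored.FindStringSubmatch("projects/my-project")) // full match succeeds
}
```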
+++ b/google/services/corebilling/resource_billing_project_info_generated_test.go @@ -48,11 +48,6 @@ func TestAccCoreBillingProjectInfo_billingProjectInfoBasicExample(t *testing.T) { Config: testAccCoreBillingProjectInfo_billingProjectInfoBasicExample(context), }, - { - ResourceName: "google_billing_project_info.default", - ImportState: true, - ImportStateVerify: true, - }, }, }) } diff --git a/google/services/corebilling/resource_google_billing_project_info_test.go b/google/services/corebilling/resource_google_billing_project_info_test.go index 5c27f499629..3fbfba64242 100644 --- a/google/services/corebilling/resource_google_billing_project_info_test.go +++ b/google/services/corebilling/resource_google_billing_project_info_test.go @@ -31,6 +31,7 @@ func TestAccBillingProjectInfo_update(t *testing.T) { ResourceName: "google_billing_project_info.info", ImportState: true, ImportStateVerify: true, + ImportStateId: fmt.Sprintf("projects/%s", projectId), }, { Config: testAccBillingProjectInfo_basic(projectId, orgId, ""), @@ -39,6 +40,7 @@ func TestAccBillingProjectInfo_update(t *testing.T) { ResourceName: "google_billing_project_info.info", ImportState: true, ImportStateVerify: true, + ImportStateId: fmt.Sprintf("projects/%s", projectId), }, { Config: testAccBillingProjectInfo_basic(projectId, orgId, billingAccount), @@ -47,6 +49,7 @@ func TestAccBillingProjectInfo_update(t *testing.T) { ResourceName: "google_billing_project_info.info", ImportState: true, ImportStateVerify: true, + ImportStateId: fmt.Sprintf("projects/%s", projectId), }, }, }) diff --git a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile.go b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile.go index 243afa86f27..95966de222c 100644 --- a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile.go +++ b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceDatabaseMigrationServiceConnectionProfile() *schema.Resource { Delete: schema.DefaultTimeout(60 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "connection_profile_id": { Type: schema.TypeString, @@ -359,10 +365,14 @@ For more information, see https://cloud.google.com/sql/docs/mysql/instance-setti Description: `The connection profile display name.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The resource labels for connection profile to use to annotate any related underlying resources such as Compute Engine VMs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `The resource labels for connection profile to use to annotate any related underlying resources such as Compute Engine VMs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { Type: schema.TypeString, @@ -553,6 +563,12 @@ If this field is used then the 'clientCertificate' field is mandatory.`, Computed: true, Description: `The database provider.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "error": { Type: schema.TypeList, Computed: true, @@ -590,6 +606,13 @@ If this field is used then the 'clientCertificate' field is mandatory.`, Computed: true, Description: `The current connection profile state.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -615,12 +638,6 @@ func resourceDatabaseMigrationServiceConnectionProfileCreate(d *schema.ResourceD } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDatabaseMigrationServiceConnectionProfileLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } mysqlProp, err := expandDatabaseMigrationServiceConnectionProfileMysql(d.Get("mysql"), d, config) if err != nil { return err @@ -645,6 +662,12 @@ func resourceDatabaseMigrationServiceConnectionProfileCreate(d *schema.ResourceD } else if v, ok := d.GetOkExists("alloydb"); !tpgresource.IsEmptyValue(reflect.ValueOf(alloydbProp)) && (ok || !reflect.DeepEqual(v, alloydbProp)) { obj["alloydb"] = alloydbProp } + labelsProp, err := expandDatabaseMigrationServiceConnectionProfileEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DatabaseMigrationServiceBasePath}}projects/{{project}}/locations/{{location}}/connectionProfiles?connectionProfileId={{connection_profile_id}}") if err != nil { @@ -773,6 +796,12 @@ func resourceDatabaseMigrationServiceConnectionProfileRead(d *schema.ResourceDat if err := d.Set("alloydb", flattenDatabaseMigrationServiceConnectionProfileAlloydb(res["alloydb"], d, config)); err != nil { return fmt.Errorf("Error reading ConnectionProfile: %s", err) } + if err := d.Set("terraform_labels", flattenDatabaseMigrationServiceConnectionProfileTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectionProfile: %s", err) + } + if err := d.Set("effective_labels", flattenDatabaseMigrationServiceConnectionProfileEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectionProfile: %s", err) + } return nil } @@ -799,12 +828,6 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD } else 
if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDatabaseMigrationServiceConnectionProfileLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } mysqlProp, err := expandDatabaseMigrationServiceConnectionProfileMysql(d.Get("mysql"), d, config) if err != nil { return err @@ -829,6 +852,12 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD } else if v, ok := d.GetOkExists("alloydb"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, alloydbProp)) { obj["alloydb"] = alloydbProp } + labelsProp, err := expandDatabaseMigrationServiceConnectionProfileEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DatabaseMigrationServiceBasePath}}projects/{{project}}/locations/{{location}}/connectionProfiles/{{connection_profile_id}}") if err != nil { @@ -842,10 +871,6 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("mysql") { updateMask = append(updateMask, "mysql") } @@ -861,6 +886,10 @@ func resourceDatabaseMigrationServiceConnectionProfileUpdate(d *schema.ResourceD if d.HasChange("alloydb") { updateMask = append(updateMask, "alloydb") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -956,9 +985,9 @@ func resourceDatabaseMigrationServiceConnectionProfileDelete(d *schema.ResourceD func resourceDatabaseMigrationServiceConnectionProfileImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/connectionProfiles/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/connectionProfiles/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -986,7 +1015,18 @@ func flattenDatabaseMigrationServiceConnectionProfileCreateTime(v interface{}, d } func flattenDatabaseMigrationServiceConnectionProfileLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDatabaseMigrationServiceConnectionProfileState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1589,19 +1629,27 @@ func 
flattenDatabaseMigrationServiceConnectionProfileAlloydbSettingsPrimaryInsta return v } -func expandDatabaseMigrationServiceConnectionProfileDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandDatabaseMigrationServiceConnectionProfileLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenDatabaseMigrationServiceConnectionProfileTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenDatabaseMigrationServiceConnectionProfileEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandDatabaseMigrationServiceConnectionProfileDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandDatabaseMigrationServiceConnectionProfileMysql(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -2498,3 +2546,14 @@ func expandDatabaseMigrationServiceConnectionProfileAlloydbSettingsPrimaryInstan func expandDatabaseMigrationServiceConnectionProfileAlloydbSettingsPrimaryInstanceSettingsPrivateIp(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDatabaseMigrationServiceConnectionProfileEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_generated_test.go b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_generated_test.go index 3cb834c1382..81ad99bdc24 100644 --- a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_generated_test.go +++ b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_generated_test.go @@ -49,7 +49,7 @@ func TestAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceCo ResourceName: "google_database_migration_service_connection_profile.cloudsqlprofile", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password", "mysql.0.ssl.0.ca_certificate", "mysql.0.ssl.0.client_certificate", "mysql.0.ssl.0.client_key"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password", "mysql.0.ssl.0.ca_certificate", "mysql.0.ssl.0.client_certificate", "mysql.0.ssl.0.client_key", "labels", "terraform_labels"}, }, }, }) @@ -164,7 +164,7 @@ func TestAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceCo ResourceName: 
"google_database_migration_service_connection_profile.postgresprofile", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "postgresql.0.password", "postgresql.0.ssl.0.ca_certificate", "postgresql.0.ssl.0.client_certificate", "postgresql.0.ssl.0.client_key"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "postgresql.0.password", "postgresql.0.ssl.0.ca_certificate", "postgresql.0.ssl.0.client_certificate", "postgresql.0.ssl.0.client_key", "labels", "terraform_labels"}, }, }, }) @@ -221,93 +221,6 @@ resource "google_database_migration_service_connection_profile" "postgresprofile `, context) } -func TestAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydbExample(t *testing.T) { - t.Parallel() - - context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "profile-alloydb"), - "random_suffix": acctest.RandString(t, 10), - } - - acctest.VcrTest(t, resource.TestCase{ - PreCheck: func() { acctest.AccTestPreCheck(t) }, - ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), - CheckDestroy: testAccCheckDatabaseMigrationServiceConnectionProfileDestroyProducer(t), - Steps: []resource.TestStep{ - { - Config: testAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydbExample(context), - }, - { - ResourceName: "google_database_migration_service_connection_profile.alloydbprofile", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "alloydb.0.settings.0.initial_user.0.password"}, - }, - }, - }) -} - -func testAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydbExample(context map[string]interface{}) string { - return acctest.Nprintf(` -data "google_project" "project" { -} - -data "google_compute_network" "default" { - name = "%{network_name}" -} - -resource "google_compute_global_address" "private_ip_alloc" { - name = "tf-test-private-ip-alloc%{random_suffix}" - address_type = "INTERNAL" - purpose = "VPC_PEERING" - prefix_length = 16 - network = data.google_compute_network.default.id -} - -resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] -} - - -resource "google_database_migration_service_connection_profile" "alloydbprofile" { - location = "us-central1" - connection_profile_id = "tf-test-my-profileid%{random_suffix}" - display_name = "tf-test-my-profileid%{random_suffix}_display" - labels = { - foo = "bar" - } - alloydb { - cluster_id = "tf-test-dbmsalloycluster%{random_suffix}" - settings { - initial_user { - user = "alloyuser%{random_suffix}" - password = "alloypass%{random_suffix}" - } - vpc_network = data.google_compute_network.default.id - labels = { - alloyfoo = "alloybar" - } - primary_instance_settings { - id = "priminstid" - machine_config { - cpu_count = 2 - } - database_flags = { - } - labels = { - alloysinstfoo = "allowinstbar" - } - } - } - } - - depends_on = [google_service_networking_connection.vpc_connection] -} -`, context) -} - func testAccCheckDatabaseMigrationServiceConnectionProfileDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { diff --git 
a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_test.go b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_test.go index 05f0184461b..4f0fd6b48ec 100644 --- a/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_test.go +++ b/google/services/databasemigrationservice/resource_database_migration_service_connection_profile_test.go @@ -27,7 +27,7 @@ func TestAccDatabaseMigrationServiceConnectionProfile_update(t *testing.T) { ResourceName: "google_database_migration_service_connection_profile.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password", "labels", "terraform_labels"}, }, { Config: testAccDatabaseMigrationServiceConnectionProfile_update(suffix), @@ -36,7 +36,7 @@ func TestAccDatabaseMigrationServiceConnectionProfile_update(t *testing.T) { ResourceName: "google_database_migration_service_connection_profile.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "mysql.0.password", "labels", "terraform_labels"}, }, }, }) @@ -79,3 +79,70 @@ resource "google_database_migration_service_connection_profile" "default" { } `, context) } + +func TestAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydb(t *testing.T) { + t.Parallel() + + context := map[string]interface{}{ + "random_suffix": acctest.RandString(t, 10), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "vpc-network-1"), + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDatabaseMigrationServiceConnectionProfileDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydb(context), + }, + { + ResourceName: "google_database_migration_service_connection_profile.alloydbprofile", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "alloydb.0.settings.0.initial_user.0.password", "labels", "terraform_labels"}, + }, + }, + }) +} + +func testAccDatabaseMigrationServiceConnectionProfile_databaseMigrationServiceConnectionProfileAlloydb(context map[string]interface{}) string { + return acctest.Nprintf(` +data "google_compute_network" "default" { + name = "%{network_name}" +} + +resource "google_database_migration_service_connection_profile" "alloydbprofile" { + location = "us-central1" + connection_profile_id = "tf-test-my-profileid%{random_suffix}" + display_name = "tf-test-my-profileid%{random_suffix}_display" + labels = { + foo = "bar" + } + alloydb { + cluster_id = "tf-test-dbmsalloycluster%{random_suffix}" + settings { + initial_user { + user = "alloyuser%{random_suffix}" + password = "alloypass%{random_suffix}" + } + vpc_network = data.google_compute_network.default.id + labels = { + alloyfoo = "alloybar" + } + primary_instance_settings { + id = "priminstid" + machine_config { + cpu_count = 2 + } + database_flags = { + } + labels = { + alloysinstfoo = "allowinstbar" + } + } + } + } +} +`, 
context) +} diff --git a/google/services/datacatalog/resource_data_catalog_entry_group.go b/google/services/datacatalog/resource_data_catalog_entry_group.go index 9f6721f3691..0dcf00d7cbe 100644 --- a/google/services/datacatalog/resource_data_catalog_entry_group.go +++ b/google/services/datacatalog/resource_data_catalog_entry_group.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -49,6 +50,11 @@ func ResourceDataCatalogEntryGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "entry_group_id": { Type: schema.TypeString, diff --git a/google/services/datacatalog/resource_data_catalog_tag_template.go b/google/services/datacatalog/resource_data_catalog_tag_template.go index 89553a89f1e..664db492dd6 100644 --- a/google/services/datacatalog/resource_data_catalog_tag_template.go +++ b/google/services/datacatalog/resource_data_catalog_tag_template.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -109,6 +110,11 @@ func ResourceDataCatalogTagTemplate() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: map[string]*schema.Schema{ "fields": { Type: schema.TypeSet, diff --git a/google/services/datacatalog/resource_data_catalog_taxonomy.go b/google/services/datacatalog/resource_data_catalog_taxonomy.go index faf660e2d3d..31f8e1817c2 100644 --- a/google/services/datacatalog/resource_data_catalog_taxonomy.go +++ b/google/services/datacatalog/resource_data_catalog_taxonomy.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -49,6 +50,10 @@ func ResourceDataCatalogTaxonomy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dataflow/resource_dataflow_flex_template_job_migrate.go b/google/services/dataflow/resource_dataflow_flex_template_job_migrate.go new file mode 100644 index 00000000000..6ca5ed3366f --- /dev/null +++ b/google/services/dataflow/resource_dataflow_flex_template_job_migrate.go @@ -0,0 +1,3 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package dataflow diff --git a/google/services/dataflow/resource_dataflow_job.go b/google/services/dataflow/resource_dataflow_job.go index a5dcf237e47..161241deedd 100644 --- a/google/services/dataflow/resource_dataflow_job.go +++ b/google/services/dataflow/resource_dataflow_job.go @@ -21,7 +21,8 @@ import ( "google.golang.org/api/googleapi" ) -const resourceDataflowJobGoogleProvidedLabelPrefix = "labels.goog-dataflow-provided" +const resourceDataflowJobGoogleLabelPrefix = "goog-dataflow-provided" +const resourceDataflowJobGoogleProvidedLabelPrefix = "labels." + resourceDataflowJobGoogleLabelPrefix var DataflowTerminatingStatesMap = map[string]struct{}{ "JOB_STATE_CANCELLING": {}, @@ -62,11 +63,20 @@ func ResourceDataflowJob() *schema.Resource { Update: schema.DefaultTimeout(10 * time.Minute), }, CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, resourceDataflowJobTypeCustomizeDiff, ), Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceDataflowJobResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceDataflowJobStateUpgradeV0, + Version: 0, + }, + }, Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -119,11 +129,24 @@ func ResourceDataflowJob() *schema.Resource { }, "labels": { - Type: schema.TypeMap, - Optional: true, - Computed: true, - DiffSuppressFunc: resourceDataflowJobLabelDiffSuppress, - Description: `User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.`, + Type: schema.TypeMap, + Optional: true, + Description: `User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "transform_name_mapping": { @@ -239,12 +262,8 @@ func resourceDataflowJobTypeCustomizeDiff(_ context.Context, d *schema.ResourceD if field == "on_delete" { continue } - // Labels map will likely have suppressed changes, so we check each key instead of the parent field - if field == "labels" { - if err := resourceDataflowJobIterateMapForceNew(field, d); err != nil { - return err - } - } else if d.HasChange(field) { + + if field != "labels" && field != "terraform_labels" && d.HasChange(field) { if err := d.ForceNew(field); err != nil { return err } @@ -344,9 +363,15 @@ func resourceDataflowJobRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("project", project); err != nil { return fmt.Errorf("Error setting project: %s", err) } - if err := d.Set("labels", job.Labels); err != nil { + if err := tpgresource.SetLabels(job.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(job.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", job.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err := d.Set("kms_key_name", job.Environment.ServiceKmsKeyName); err != nil { return fmt.Errorf("Error setting kms_key_name: %s", err) } @@ -569,7 +594,7 @@ func resourceDataflowJobLaunchTemplate(config *transport_tpg.Config, project, re func resourceDataflowJobSetupEnv(d *schema.ResourceData, config *transport_tpg.Config) (dataflow.RuntimeEnvironment, error) { zone, _ := tpgresource.GetZone(d, config) - labels := tpgresource.ExpandStringMap(d, "labels") + labels := tpgresource.ExpandStringMap(d, "effective_labels") additionalExperiments := tpgresource.ConvertStringSet(d.Get("additional_experiments").(*schema.Set)) @@ -623,9 +648,8 @@ func resourceDataflowJobIsVirtualUpdate(d *schema.ResourceData, resourceSchema m if field == "on_delete" { continue } - // Labels map will likely have suppressed changes, so we check each key instead of the parent field - if (field == "labels" && resourceDataflowJobIterateMapHasChange(field, d)) || - (field != "labels" && d.HasChange(field)) { + + if field != "labels" && field != "terraform_labels" && d.HasChange(field) { return false } } diff --git a/google/services/dataflow/resource_dataflow_job_migrate.go b/google/services/dataflow/resource_dataflow_job_migrate.go new file mode 100644 index 00000000000..92d755af5be --- /dev/null +++ b/google/services/dataflow/resource_dataflow_job_migrate.go @@ -0,0 +1,182 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: MPL-2.0 +package dataflow + +import ( + "context" + + "github.com/hashicorp/terraform-provider-google/google/tpgresource" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" +) + +func resourceDataflowJobResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `A unique name for the resource, required by Dataflow.`, + }, + + "template_gcs_path": { + Type: schema.TypeString, + Required: true, + Description: `The Google Cloud Storage path to the Dataflow job template.`, + }, + + "temp_gcs_location": { + Type: schema.TypeString, + Required: true, + Description: `A writeable location on Google Cloud Storage for the Dataflow job to dump its temporary data.`, + }, + + "zone": { + Type: schema.TypeString, + Optional: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The zone in which the created job should run. If it is not provided, the provider zone is used.`, + }, + + "region": { + Type: schema.TypeString, + Optional: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The region in which the created job should run.`, + }, + + "max_workers": { + Type: schema.TypeInt, + Optional: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The number of workers permitted to work on the job. More workers may improve processing speed at additional cost.`, + }, + + "parameters": { + Type: schema.TypeMap, + Optional: true, + Description: `Key/Value pairs to be passed to the Dataflow job (as used in the template).`, + }, + + "labels": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + DiffSuppressFunc: resourceDataflowJobLabelDiffSuppress, + Description: `User labels to be specified for the job. Keys and values should follow the restrictions specified in the labeling restrictions page. NOTE: Google-provided Dataflow templates often provide default labels that begin with goog-dataflow-provided. Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply.`, + }, + + "transform_name_mapping": { + Type: schema.TypeMap, + Optional: true, + Description: `Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job.`, + }, + + "on_delete": { + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice([]string{"cancel", "drain"}, false), + Optional: true, + Default: "drain", + Description: `One of "drain" or "cancel". 
Specifies behavior of deletion during terraform destroy.`, + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + // ForceNew applies to both stream and batch jobs + ForceNew: true, + Description: `The project in which the resource belongs.`, + }, + + "state": { + Type: schema.TypeString, + Computed: true, + Description: `The current state of the resource, selected from the JobState enum.`, + }, + "type": { + Type: schema.TypeString, + Computed: true, + Description: `The type of this job, selected from the JobType enum.`, + }, + "service_account_email": { + Type: schema.TypeString, + Optional: true, + Description: `The Service Account email used to create the job.`, + }, + + "network": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The network to which VMs will be assigned. If it is not provided, "default" will be used.`, + }, + + "subnetwork": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The subnetwork to which VMs will be assigned. Should be of the form "regions/REGION/subnetworks/SUBNETWORK".`, + }, + + "machine_type": { + Type: schema.TypeString, + Optional: true, + Description: `The machine type to use for the job.`, + }, + + "kms_key_name": { + Type: schema.TypeString, + Optional: true, + Description: `The name for the Cloud KMS key for the job. Key format is: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY`, + }, + + "ip_configuration": { + Type: schema.TypeString, + Optional: true, + Description: `The configuration for VM IPs. Options are "WORKER_IP_PUBLIC" or "WORKER_IP_PRIVATE".`, + ValidateFunc: validation.StringInSlice([]string{"WORKER_IP_PUBLIC", "WORKER_IP_PRIVATE", ""}, false), + }, + + "additional_experiments": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Description: `List of experiments that should be used by the job. An example value is ["enable_stackdriver_agent_metrics"].`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + + "job_id": { + Type: schema.TypeString, + Computed: true, + Description: `The unique ID of this job.`, + }, + + "enable_streaming_engine": { + Type: schema.TypeBool, + Optional: true, + Description: `Indicates if the job should use the streaming engine feature.`, + }, + + "skip_wait_on_job_termination": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `If true, treat DRAINING and CANCELLING as terminal job states and do not wait for further changes before removing from terraform state and moving on. WARNING: this will lead to job name conflicts if you do not ensure that the job names are different, e.g. 
by embedding a release ID or by using a random_id.`, + }, + }, + UseJSONNumber: true, + } +} + +func ResourceDataflowJobStateUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + return tpgresource.LabelsStateUpgrade(rawState, resourceDataflowJobGoogleLabelPrefix) +} diff --git a/google/services/dataflow/resource_dataflow_job_test.go b/google/services/dataflow/resource_dataflow_job_test.go index 501c22efc83..75a174df8d2 100644 --- a/google/services/dataflow/resource_dataflow_job_test.go +++ b/google/services/dataflow/resource_dataflow_job_test.go @@ -248,7 +248,127 @@ func TestAccDataflowJob_withLabels(t *testing.T) { ResourceName: "google_dataflow_job.with_labels", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state"}, + ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state", "labels", "terraform_labels"}, + }, + }, + }) +} + +func TestAccDataflowJob_withProviderDefaultLabels(t *testing.T) { + // The test failed if VCR testing is enabled, because the cached provider config is used. + // With the cached provider config, any changes in the provider default labels will not be applied. + acctest.SkipIfVcr(t) + t.Parallel() + + randStr := acctest.RandString(t, 10) + bucket := "tf-test-dataflow-gcs-" + randStr + job := "tf-test-dataflow-job-" + randStr + zone := "us-central1-f" + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccCheckDataflowJobDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testAccDataflowJob_withProviderDefaultLabels(bucket, job), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.%", "2"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_key1", "default_value1"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_dataflow_job.with_labels", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state", "labels", "terraform_labels"}, + }, + { + Config: testAccDataflowJob_resourceLabelsOverridesProviderDefaultLabels(bucket, job), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.%", "3"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_expiration_ms", "3600000"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.%", "3"), 
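					// The three label attributes asserted in this step are intentionally
					// different views of the same job:
					//   - labels tracks only the keys declared in the resource block
					//     (non-authoritative, per the schema description above);
					//   - terraform_labels additionally folds in the provider-level
					//     default_labels, with the resource value winning on a conflict,
					//     which is why default_key1 resolves to "value1" in this step;
					//   - effective_labels reports every label present on the job in GCP.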
+ resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_key1", "value1"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_dataflow_job.with_labels", + ImportState: true, + ImportStateVerify: true, + // The labels field in the state is decided by the configuration. + // During importing, the configuration is unavailable, so the labels field in the state after importing is empty. + ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state", "labels", "terraform_labels"}, + }, + { + Config: testAccDataflowJob_moveResourceLabelToProviderDefaultLabels(bucket, job), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.%", "2"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_expiration_ms", "3600000"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_key1", "value1"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_dataflow_job.with_labels", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state", "labels", "terraform_labels"}, + }, + { + Config: testAccDataflowJob_resourceLabelsOverridesProviderDefaultLabels(bucket, job), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.%", "3"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_expiration_ms", "3600000"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "labels.default_key1", "value1"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.%", "3"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_key1", "value1"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.env", "foo"), + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "terraform_labels.default_expiration_ms", "3600000"), + + resource.TestCheckResourceAttr("google_dataflow_job.with_labels", "effective_labels.%", "3"), + ), + }, + { + ResourceName: "google_dataflow_job.with_labels", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"on_delete", "parameters", "skip_wait_on_job_termination", "state", "labels", "terraform_labels"}, + }, + { + Config: testAccDataflowJob_zone(bucket, job, zone), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckNoResourceAttr("google_dataflow_job.with_labels", 
"labels.%"), + resource.TestCheckNoResourceAttr("google_dataflow_job.with_labels", "effective_labels.%"), + ), + }, + { + ResourceName: "google_dataflow_job.with_labels", + ImportState: true, + ImportStateVerify: true, }, }, }) @@ -969,7 +1089,107 @@ resource "google_dataflow_job" "with_labels" { on_delete = "cancel" } `, bucket, job, labelKey, labelVal, testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) +} + +func testAccDataflowJob_withProviderDefaultLabels(bucket, job string) string { + return fmt.Sprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_storage_bucket" "temp" { + name = "%s" + location = "US" + force_destroy = true +} + +resource "google_dataflow_job" "with_labels" { + name = "%s" + + labels = { + env = "foo" + default_expiration_ms = 3600000 + } + + template_gcs_path = "%s" + temp_gcs_location = google_storage_bucket.temp.url + parameters = { + inputFile = "%s" + output = "${google_storage_bucket.temp.url}/output" + } + on_delete = "cancel" +} +`, bucket, job, testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) +} + +func testAccDataflowJob_resourceLabelsOverridesProviderDefaultLabels(bucket, job string) string { + return fmt.Sprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + } +} + +resource "google_storage_bucket" "temp" { + name = "%s" + location = "US" + force_destroy = true +} + +resource "google_dataflow_job" "with_labels" { + name = "%s" + + labels = { + env = "foo" + default_expiration_ms = 3600000 + default_key1 = "value1" + } + template_gcs_path = "%s" + temp_gcs_location = google_storage_bucket.temp.url + parameters = { + inputFile = "%s" + output = "${google_storage_bucket.temp.url}/output" + } + on_delete = "cancel" +} +`, bucket, job, testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) +} + +func testAccDataflowJob_moveResourceLabelToProviderDefaultLabels(bucket, job string) string { + return fmt.Sprintf(` +provider "google" { + default_labels = { + default_key1 = "default_value1" + env = "foo" + } +} + +resource "google_storage_bucket" "temp" { + name = "%s" + location = "US" + force_destroy = true +} + +resource "google_dataflow_job" "with_labels" { + name = "%s" + + labels = { + default_expiration_ms = 3600000 + default_key1 = "value1" + } + + template_gcs_path = "%s" + temp_gcs_location = google_storage_bucket.temp.url + parameters = { + inputFile = "%s" + output = "${google_storage_bucket.temp.url}/output" + } + on_delete = "cancel" +} +`, bucket, job, testDataflowJobTemplateWordCountUrl, testDataflowJobSampleFileUrl) } func testAccDataflowJob_kms(key_ring, crypto_key, bucket, job, zone string) string { diff --git a/google/services/datafusion/resource_data_fusion_instance.go b/google/services/datafusion/resource_data_fusion_instance.go index 35125a77ceb..9b2e79da081 100644 --- a/google/services/datafusion/resource_data_fusion_instance.go +++ b/google/services/datafusion/resource_data_fusion_instance.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -70,6 +71,12 @@ func ResourceDataFusionInstance() *schema.Resource { Delete: schema.DefaultTimeout(50 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + tpgresource.DefaultProviderRegion, + ), + Schema: 
map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -192,7 +199,11 @@ Users will need to either manually update their state file to include these diff Type: schema.TypeMap, Optional: true, Description: `The resource labels for instance to use to annotate any related underlying resources, -such as Compute Engine VMs.`, +such as Compute Engine VMs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "network_config": { @@ -268,6 +279,12 @@ able to access the public internet.`, Computed: true, Description: `The time the instance was created in RFC3339 UTC "Zulu" format, accurate to nanoseconds.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "gcs_bucket": { Type: schema.TypeString, Computed: true, @@ -304,6 +321,13 @@ able to access the public internet.`, Computed: true, Description: `The name of the tenant project.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -364,12 +388,6 @@ func resourceDataFusionInstanceCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_rbac"); !tpgresource.IsEmptyValue(reflect.ValueOf(enableRbacProp)) && (ok || !reflect.DeepEqual(v, enableRbacProp)) { obj["enableRbac"] = enableRbacProp } - labelsProp, err := expandDataFusionInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } optionsProp, err := expandDataFusionInstanceOptions(d.Get("options"), d, config) if err != nil { return err @@ -430,6 +448,12 @@ func resourceDataFusionInstanceCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("accelerators"); !tpgresource.IsEmptyValue(reflect.ValueOf(acceleratorsProp)) && (ok || !reflect.DeepEqual(v, acceleratorsProp)) { obj["accelerators"] = acceleratorsProp } + labelsProp, err := expandDataFusionInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataFusionBasePath}}projects/{{project}}/locations/{{region}}/instances?instanceId={{name}}") if err != nil { @@ -625,6 +649,12 @@ func resourceDataFusionInstanceRead(d *schema.ResourceData, meta interface{}) er if err := d.Set("accelerators", flattenDataFusionInstanceAccelerators(res["accelerators"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenDataFusionInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := 
d.Set("effective_labels", flattenDataFusionInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -663,12 +693,6 @@ func resourceDataFusionInstanceUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_rbac"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, enableRbacProp)) { obj["enableRbac"] = enableRbacProp } - labelsProp, err := expandDataFusionInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } versionProp, err := expandDataFusionInstanceVersion(d.Get("version"), d, config) if err != nil { return err @@ -687,6 +711,12 @@ func resourceDataFusionInstanceUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("accelerators"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, acceleratorsProp)) { obj["accelerators"] = acceleratorsProp } + labelsProp, err := expandDataFusionInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataFusionBasePath}}projects/{{project}}/locations/{{region}}/instances/{{name}}") if err != nil { @@ -804,10 +834,10 @@ func resourceDataFusionInstanceDelete(d *schema.ResourceData, meta interface{}) func resourceDataFusionInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -850,7 +880,18 @@ func flattenDataFusionInstanceEnableRbac(v interface{}, d *schema.ResourceData, } func flattenDataFusionInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDataFusionInstanceOptions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1003,6 +1044,25 @@ func flattenDataFusionInstanceAcceleratorsState(v interface{}, d *schema.Resourc return v } +func flattenDataFusionInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenDataFusionInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandDataFusionInstanceName(v interface{}, d 
tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{region}}/instances/{{name}}") } @@ -1027,17 +1087,6 @@ func expandDataFusionInstanceEnableRbac(v interface{}, d tpgresource.TerraformRe return v, nil } -func expandDataFusionInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandDataFusionInstanceOptions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil @@ -1196,3 +1245,14 @@ func expandDataFusionInstanceAcceleratorsAcceleratorType(v interface{}, d tpgres func expandDataFusionInstanceAcceleratorsState(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDataFusionInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/datafusion/resource_data_fusion_instance_generated_test.go b/google/services/datafusion/resource_data_fusion_instance_generated_test.go index a29dcf48126..5f59752e78f 100644 --- a/google/services/datafusion/resource_data_fusion_instance_generated_test.go +++ b/google/services/datafusion/resource_data_fusion_instance_generated_test.go @@ -50,7 +50,7 @@ func TestAccDataFusionInstance_dataFusionInstanceBasicExample(t *testing.T) { ResourceName: "google_data_fusion_instance.basic_instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) @@ -87,7 +87,7 @@ func TestAccDataFusionInstance_dataFusionInstanceFullExample(t *testing.T) { ResourceName: "google_data_fusion_instance.extended_instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) @@ -158,7 +158,7 @@ func TestAccDataFusionInstance_dataFusionInstanceCmekExample(t *testing.T) { ResourceName: "google_data_fusion_instance.cmek", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) @@ -221,7 +221,7 @@ func TestAccDataFusionInstance_dataFusionInstanceEnterpriseExample(t *testing.T) ResourceName: "google_data_fusion_instance.enterprise_instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) @@ -258,7 +258,7 @@ func TestAccDataFusionInstance_dataFusionInstanceEventExample(t *testing.T) { ResourceName: "google_data_fusion_instance.event", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) @@ -302,7 +302,7 @@ func 
TestAccDataFusionInstance_dataFusionInstanceZoneExample(t *testing.T) { ResourceName: "google_data_fusion_instance.zone", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region"}, + ImportStateVerifyIgnore: []string{"region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datafusion/resource_data_fusion_instance_test.go b/google/services/datafusion/resource_data_fusion_instance_test.go index 338c64e7d12..b2517a8fa05 100644 --- a/google/services/datafusion/resource_data_fusion_instance_test.go +++ b/google/services/datafusion/resource_data_fusion_instance_test.go @@ -23,17 +23,19 @@ func TestAccDataFusionInstance_update(t *testing.T) { Config: testAccDataFusionInstance_basic(instanceName), }, { - ResourceName: "google_data_fusion_instance.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_data_fusion_instance.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDataFusionInstance_updated(instanceName), }, { - ResourceName: "google_data_fusion_instance.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_data_fusion_instance.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -99,17 +101,19 @@ func TestAccDataFusionInstanceEnterprise_update(t *testing.T) { Config: testAccDataFusionInstanceEnterprise_basic(instanceName), }, { - ResourceName: "google_data_fusion_instance.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_data_fusion_instance.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDataFusionInstanceEnterprise_updated(instanceName), }, { - ResourceName: "google_data_fusion_instance.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_data_fusion_instance.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datapipeline/resource_data_pipeline_pipeline.go b/google/services/datapipeline/resource_data_pipeline_pipeline.go index 5988444c2f9..12ead18b1a3 100644 --- a/google/services/datapipeline/resource_data_pipeline_pipeline.go +++ b/google/services/datapipeline/resource_data_pipeline_pipeline.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceDataPipelinePipeline() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -825,10 +830,10 @@ func resourceDataPipelinePipelineDelete(d *schema.ResourceData, meta interface{} func resourceDataPipelinePipelineImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/pipelines/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/pipelines/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + 
"^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dataplex/resource_dataplex_asset.go b/google/services/dataplex/resource_dataplex_asset.go index 18eac95df41..3bdf96aecfb 100644 --- a/google/services/dataplex/resource_dataplex_asset.go +++ b/google/services/dataplex/resource_dataplex_asset.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceDataplexAsset() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "dataplex_zone": { @@ -107,11 +112,10 @@ func ResourceDataplexAsset() *schema.Resource { Description: "Optional. User friendly display name.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: "Optional. User defined labels for the asset.", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "project": { @@ -136,6 +140,13 @@ func ResourceDataplexAsset() *schema.Resource { Elem: DataplexAssetDiscoveryStatusSchema(), }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. User defined labels for the asset.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "resource_status": { Type: schema.TypeList, Computed: true, @@ -156,6 +167,12 @@ func ResourceDataplexAsset() *schema.Resource { Description: "Output only. Current state of the asset. 
Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -433,7 +450,7 @@ func resourceDataplexAssetCreate(d *schema.ResourceData, meta interface{}) error ResourceSpec: expandDataplexAssetResourceSpec(d.Get("resource_spec")), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -490,7 +507,7 @@ func resourceDataplexAssetRead(d *schema.ResourceData, meta interface{}) error { ResourceSpec: expandDataplexAssetResourceSpec(d.Get("resource_spec")), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -540,8 +557,8 @@ func resourceDataplexAssetRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("display_name", res.DisplayName); err != nil { return fmt.Errorf("error setting display_name in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) @@ -552,6 +569,9 @@ func resourceDataplexAssetRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("discovery_status", flattenDataplexAssetDiscoveryStatus(res.DiscoveryStatus)); err != nil { return fmt.Errorf("error setting discovery_status in state: %s", err) } + if err = d.Set("labels", flattenDataplexAssetLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("resource_status", flattenDataplexAssetResourceStatus(res.ResourceStatus)); err != nil { return fmt.Errorf("error setting resource_status in state: %s", err) } @@ -561,6 +581,9 @@ func resourceDataplexAssetRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("state", res.State); err != nil { return fmt.Errorf("error setting state in state: %s", err) } + if err = d.Set("terraform_labels", flattenDataplexAssetTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -586,7 +609,7 @@ func resourceDataplexAssetUpdate(d *schema.ResourceData, meta interface{}) error ResourceSpec: expandDataplexAssetResourceSpec(d.Get("resource_spec")), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } directive := tpgdclresource.UpdateDirective @@ -638,7 +661,7 @@ func resourceDataplexAssetDelete(d *schema.ResourceData, meta interface{}) error ResourceSpec: 
expandDataplexAssetResourceSpec(d.Get("resource_spec")), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -873,3 +896,33 @@ func flattenDataplexAssetSecurityStatus(obj *dataplex.AssetSecurityStatus) inter return []interface{}{transformed} } + +func flattenDataplexAssetLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenDataplexAssetTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/dataplex/resource_dataplex_asset_generated_test.go b/google/services/dataplex/resource_dataplex_asset_generated_test.go index 01d3ae3ece5..dadd822a476 100644 --- a/google/services/dataplex/resource_dataplex_asset_generated_test.go +++ b/google/services/dataplex/resource_dataplex_asset_generated_test.go @@ -54,7 +54,7 @@ func TestAccDataplexAsset_BasicAssetHandWritten(t *testing.T) { ResourceName: "google_dataplex_asset.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"resource_spec.0.name"}, + ImportStateVerifyIgnore: []string{"resource_spec.0.name", "labels", "terraform_labels"}, }, { Config: testAccDataplexAsset_BasicAssetHandWrittenUpdate0(context), @@ -63,7 +63,7 @@ func TestAccDataplexAsset_BasicAssetHandWritten(t *testing.T) { ResourceName: "google_dataplex_asset.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"resource_spec.0.name"}, + ImportStateVerifyIgnore: []string{"resource_spec.0.name", "labels", "terraform_labels"}, }, }, }) @@ -125,6 +125,12 @@ resource "google_dataplex_asset" "primary" { name = "projects/%{project_name}/buckets/tf-test-bucket%{random_suffix}" type = "STORAGE_BUCKET" } + + labels = { + env = "foo" + my-asset = "exists" + } + project = "%{project_name}" depends_on = [ @@ -190,6 +196,12 @@ resource "google_dataplex_asset" "primary" { name = "projects/%{project_name}/buckets/tf-test-bucket%{random_suffix}" type = "STORAGE_BUCKET" } + + labels = { + env = "foo" + my-asset = "exists" + } + project = "%{project_name}" depends_on = [ diff --git a/google/services/dataplex/resource_dataplex_datascan.go b/google/services/dataplex/resource_dataplex_datascan.go index a94a287f7e1..fd8aef68cdf 100644 --- a/google/services/dataplex/resource_dataplex_datascan.go +++ b/google/services/dataplex/resource_dataplex_datascan.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceDataplexDatascan() *schema.Resource { Delete: schema.DefaultTimeout(5 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "data": { Type: schema.TypeList, @@ -492,518 +498,25 @@ 
Sampling is not applied if 'sampling_percent' is not specified, 0 or 100.`, Description: `User friendly display name.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels for the scan. A list of key->value pairs.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels for the scan. A list of key->value pairs. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "create_time": { Type: schema.TypeString, Computed: true, Description: `The time when the scan was created.`, }, - "data_profile_result": { - Type: schema.TypeList, - Computed: true, - Deprecated: "`data_profile_result` is deprecated and will be removed in a future major release.", - Description: `The result of the data profile scan.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "row_count": { - Type: schema.TypeString, - Optional: true, - Description: `The count of rows scanned.`, - }, - "profile": { - Type: schema.TypeList, - Computed: true, - Description: `The profile information per field.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "fields": { - Type: schema.TypeList, - Optional: true, - Description: `List of fields with structural and profile information for each field.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "mode": { - Type: schema.TypeString, - Optional: true, - Description: `The mode of the field. Possible values include: -1. REQUIRED, if it is a required field. -2. NULLABLE, if it is an optional field. -3. REPEATED, if it is a repeated field.`, - }, - "name": { - Type: schema.TypeString, - Optional: true, - Description: `The name of the field.`, - }, - "profile": { - Type: schema.TypeList, - Optional: true, - Description: `Profile information for the corresponding field.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "distinct_ratio": { - Type: schema.TypeInt, - Optional: true, - Description: `Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.`, - }, - "top_n_values": { - Type: schema.TypeList, - Optional: true, - Description: `The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "count": { - Type: schema.TypeString, - Optional: true, - Description: `Count of the corresponding value in the scanned data.`, - }, - "value": { - Type: schema.TypeString, - Optional: true, - Description: `String value of a top N non-null value.`, - }, - }, - }, - }, - "double_profile": { - Type: schema.TypeList, - Computed: true, - Description: `Double type field information.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "average": { - Type: schema.TypeInt, - Optional: true, - Description: `Average of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - "max": { - Type: schema.TypeString, - Optional: true, - Description: `Maximum of non-null values in the scanned data. 
NaN, if the field has a NaN.`, - }, - "min": { - Type: schema.TypeString, - Optional: true, - Description: `Minimum of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - "quartiles": { - Type: schema.TypeString, - Optional: true, - Description: `A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.`, - }, - "standard_deviation": { - Type: schema.TypeInt, - Optional: true, - Description: `Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - }, - }, - }, - "integer_profile": { - Type: schema.TypeList, - Computed: true, - Description: `Integer type field information.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "average": { - Type: schema.TypeInt, - Optional: true, - Description: `Average of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - "max": { - Type: schema.TypeString, - Optional: true, - Description: `Maximum of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - "min": { - Type: schema.TypeString, - Optional: true, - Description: `Minimum of non-null values in the scanned data. NaN, if the field has a NaN.`, - }, - "quartiles": { - Type: schema.TypeString, - Optional: true, - Description: `A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.`, - }, - "standard_deviation": { - Type: schema.TypeInt, - Optional: true, - Description: `Standard deviation of non-null values in the scanned data. 
NaN, if the field has a NaN.`, - }, - }, - }, - }, - "null_ratio": { - Type: schema.TypeInt, - Computed: true, - Description: `Ratio of rows with null value against total scanned rows.`, - }, - "string_profile": { - Type: schema.TypeList, - Computed: true, - Description: `String type field information.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "average_length": { - Type: schema.TypeInt, - Optional: true, - Description: `Average length of non-null values in the scanned data.`, - }, - "max_length": { - Type: schema.TypeString, - Optional: true, - Description: `Maximum length of non-null values in the scanned data.`, - }, - "min_length": { - Type: schema.TypeString, - Optional: true, - Description: `Minimum length of non-null values in the scanned data.`, - }, - }, - }, - }, - }, - }, - }, - "type": { - Type: schema.TypeString, - Optional: true, - Description: `The field data type.`, - }, - }, - }, - }, - }, - }, - }, - "scanned_data": { - Type: schema.TypeList, - Computed: true, - Description: `The data scanned for this result.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "incremental_field": { - Type: schema.TypeList, - Optional: true, - Description: `The range denoted by values of an incremental field`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "end": { - Type: schema.TypeString, - Optional: true, - Description: `Value that marks the end of the range.`, - }, - "field": { - Type: schema.TypeString, - Optional: true, - Description: `The field that contains values which monotonically increases over time (e.g. a timestamp column).`, - }, - "start": { - Type: schema.TypeString, - Optional: true, - Description: `Value that marks the start of the range.`, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, - "data_quality_result": { - Type: schema.TypeList, + "effective_labels": { + Type: schema.TypeMap, Computed: true, - Deprecated: "`data_quality_result` is deprecated and will be removed in a future major release.", - Description: `The result of the data quality scan.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "dimensions": { - Type: schema.TypeList, - Optional: true, - Description: `A list of results at the dimension level.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "passed": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether the dimension passed or failed.`, - }, - }, - }, - }, - "passed": { - Type: schema.TypeBool, - Computed: true, - Description: `Overall data quality result -- true if all rules passed.`, - }, - "row_count": { - Type: schema.TypeString, - Computed: true, - Description: `The count of rows processed.`, - }, - "rules": { - Type: schema.TypeList, - Computed: true, - Description: `A list of all the rules in a job, and their results.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "evaluated_count": { - Type: schema.TypeString, - Computed: true, - Description: `The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. -Evaluated count can be configured to either -1. include all rows (default) - with null rows automatically failing rule evaluation, or -2. exclude null rows from the evaluatedCount, by setting ignore_nulls = true.`, - }, - "failing_rows_query": { - Type: schema.TypeString, - Computed: true, - Description: `The query to find rows that did not pass this rule. 
Only applies to ColumnMap and RowCondition rules.`, - }, - "null_count": { - Type: schema.TypeString, - Computed: true, - Description: `The number of rows with null values in the specified column.`, - }, - "pass_ratio": { - Type: schema.TypeInt, - Computed: true, - Description: `The ratio of passedCount / evaluatedCount. This field is only valid for ColumnMap type rules.`, - }, - "passed": { - Type: schema.TypeBool, - Computed: true, - Description: `Whether the rule passed or failed.`, - }, - "passed_count": { - Type: schema.TypeString, - Computed: true, - Description: `The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules.`, - }, - "rule": { - Type: schema.TypeList, - Computed: true, - Description: `The rule specified in the DataQualitySpec, as is.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "column": { - Type: schema.TypeString, - Optional: true, - Description: `The unnested column which this rule is evaluated against.`, - }, - "dimension": { - Type: schema.TypeString, - Optional: true, - Description: `The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are ["COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"]`, - }, - "ignore_null": { - Type: schema.TypeBool, - Optional: true, - Description: `Rows with null values will automatically fail a rule, unless ignoreNull is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules.`, - }, - "threshold": { - Type: schema.TypeInt, - Optional: true, - Description: `The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0).`, - }, - "non_null_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnMap rule which evaluates whether each column value is null.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{}, - }, - }, - "range_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnMap rule which evaluates whether each column value lies between a specified range.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "max_value": { - Type: schema.TypeString, - Optional: true, - Description: `The maximum column value allowed for a row to pass this validation. At least one of minValue and maxValue need to be provided.`, - }, - "min_value": { - Type: schema.TypeString, - Optional: true, - Description: `The minimum column value allowed for a row to pass this validation. At least one of minValue and maxValue need to be provided.`, - }, - "strict_max_enabled": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. -Only relevant if a maxValue has been defined. Default = false.`, - Default: false, - }, - "strict_min_enabled": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. -Only relevant if a minValue has been defined. 
Default = false.`, - Default: false, - }, - }, - }, - }, - "regex_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnMap rule which evaluates whether each column value matches a specified regex.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "regex": { - Type: schema.TypeString, - Optional: true, - Description: `A regular expression the column value is expected to match.`, - }, - }, - }, - }, - "row_condition_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `Table rule which evaluates whether each row passes the specified condition.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "sql_expression": { - Type: schema.TypeString, - Optional: true, - Description: `The SQL expression.`, - }, - }, - }, - }, - "set_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnMap rule which evaluates whether each column value is contained by a specified set.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "values": { - Type: schema.TypeList, - Optional: true, - Description: `Expected values for the column value.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - }, - }, - }, - "statistic_range_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "max_value": { - Type: schema.TypeString, - Optional: true, - Description: `The maximum column statistic value allowed for a row to pass this validation. -At least one of minValue and maxValue need to be provided.`, - }, - "min_value": { - Type: schema.TypeString, - Optional: true, - Description: `The minimum column statistic value allowed for a row to pass this validation. -At least one of minValue and maxValue need to be provided.`, - }, - "statistic": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidateEnum([]string{"STATISTIC_UNDEFINED", "MEAN", "MIN", "MAX", ""}), - Description: `column statistics. Possible values: ["STATISTIC_UNDEFINED", "MEAN", "MIN", "MAX"]`, - }, - "strict_max_enabled": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. -Only relevant if a maxValue has been defined. Default = false.`, - }, - "strict_min_enabled": { - Type: schema.TypeBool, - Optional: true, - Description: `Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. -Only relevant if a minValue has been defined. 
Default = false.`, - }, - }, - }, - }, - "table_condition_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `Table rule which evaluates whether the provided expression is true.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "sql_expression": { - Type: schema.TypeString, - Optional: true, - Description: `The SQL expression.`, - }, - }, - }, - }, - "uniqueness_expectation": { - Type: schema.TypeList, - Computed: true, - Description: `ColumnAggregate rule which evaluates whether the column has duplicates.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{}, - }, - }, - }, - }, - }, - }, - }, - }, - "scanned_data": { - Type: schema.TypeList, - Computed: true, - Description: `The data scanned for this result.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "incremental_field": { - Type: schema.TypeList, - Optional: true, - Description: `The range denoted by values of an incremental field`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "end": { - Type: schema.TypeString, - Optional: true, - Description: `Value that marks the end of the range.`, - }, - "field": { - Type: schema.TypeString, - Optional: true, - Description: `The field that contains values which monotonically increases over time (e.g. a timestamp column).`, - }, - "start": { - Type: schema.TypeString, - Optional: true, - Description: `Value that marks the start of the range.`, - }, - }, - }, - }, - }, - }, - }, - }, - }, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "execution_status": { Type: schema.TypeList, @@ -1034,6 +547,13 @@ Only relevant if a minValue has been defined. 
Default = false.`, Computed: true, Description: `Current state of the DataScan.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "type": { Type: schema.TypeString, Computed: true, @@ -1080,12 +600,6 @@ func resourceDataplexDatascanCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDataplexDatascanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } dataProp, err := expandDataplexDatascanData(d.Get("data"), d, config) if err != nil { return err @@ -1110,6 +624,12 @@ func resourceDataplexDatascanCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("data_profile_spec"); ok || !reflect.DeepEqual(v, dataProfileSpecProp) { obj["dataProfileSpec"] = dataProfileSpecProp } + labelsProp, err := expandDataplexDatascanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataplexBasePath}}projects/{{project}}/locations/{{location}}/dataScans?dataScanId={{data_scan_id}}") if err != nil { @@ -1247,10 +767,10 @@ func resourceDataplexDatascanRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("data_profile_spec", flattenDataplexDatascanDataProfileSpec(res["dataProfileSpec"], d, config)); err != nil { return fmt.Errorf("Error reading Datascan: %s", err) } - if err := d.Set("data_quality_result", flattenDataplexDatascanDataQualityResult(res["dataQualityResult"], d, config)); err != nil { + if err := d.Set("terraform_labels", flattenDataplexDatascanTerraformLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Datascan: %s", err) } - if err := d.Set("data_profile_result", flattenDataplexDatascanDataProfileResult(res["dataProfileResult"], d, config)); err != nil { + if err := d.Set("effective_labels", flattenDataplexDatascanEffectiveLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Datascan: %s", err) } @@ -1285,12 +805,6 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDataplexDatascanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } executionSpecProp, err := expandDataplexDatascanExecutionSpec(d.Get("execution_spec"), d, config) if err != nil { return err @@ -1309,6 +823,12 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := 
d.GetOkExists("data_profile_spec"); ok || !reflect.DeepEqual(v, dataProfileSpecProp) { obj["dataProfileSpec"] = dataProfileSpecProp } + labelsProp, err := expandDataplexDatascanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataplexBasePath}}projects/{{project}}/locations/{{location}}/dataScans/{{data_scan_id}}") if err != nil { @@ -1326,10 +846,6 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("execution_spec") { updateMask = append(updateMask, "executionSpec") } @@ -1341,6 +857,10 @@ func resourceDataplexDatascanUpdate(d *schema.ResourceData, meta interface{}) er if d.HasChange("data_profile_spec") { updateMask = append(updateMask, "dataProfileSpec") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -1436,10 +956,10 @@ func resourceDataplexDatascanDelete(d *schema.ResourceData, meta interface{}) er func resourceDataplexDatascanImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/dataScans/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/dataScans/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1471,7 +991,18 @@ func flattenDataplexDatascanDisplayName(v interface{}, d *schema.ResourceData, c } func flattenDataplexDatascanLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDataplexDatascanState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1957,389 +1488,25 @@ func flattenDataplexDatascanDataProfileSpecExcludeFieldsFieldNames(v interface{} return v } -func flattenDataplexDatascanDataQualityResult(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["passed"] = - flattenDataplexDatascanDataQualityResultPassed(original["passed"], d, config) - transformed["dimensions"] = - flattenDataplexDatascanDataQualityResultDimensions(original["dimensions"], d, config) - transformed["rules"] = - flattenDataplexDatascanDataQualityResultRules(original["rules"], d, config) - transformed["row_count"] = - flattenDataplexDatascanDataQualityResultRowCount(original["rowCount"], d, config) - transformed["scanned_data"] = - 
flattenDataplexDatascanDataQualityResultScannedData(original["scannedData"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultPassed(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultDimensions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "passed": flattenDataplexDatascanDataQualityResultDimensionsPassed(original["passed"], d, config), - }) - } - return transformed -} -func flattenDataplexDatascanDataQualityResultDimensionsPassed(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRules(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenDataplexDatascanTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return v } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "rule": flattenDataplexDatascanDataQualityResultRulesRule(original["rule"], d, config), - "passed": flattenDataplexDatascanDataQualityResultRulesPassed(original["passed"], d, config), - "evaluated_count": flattenDataplexDatascanDataQualityResultRulesEvaluatedCount(original["evaluatedCount"], d, config), - "passed_count": flattenDataplexDatascanDataQualityResultRulesPassedCount(original["passedCount"], d, config), - "null_count": flattenDataplexDatascanDataQualityResultRulesNullCount(original["nullCount"], d, config), - "pass_ratio": flattenDataplexDatascanDataQualityResultRulesPassRatio(original["passRatio"], d, config), - "failing_rows_query": flattenDataplexDatascanDataQualityResultRulesFailingRowsQuery(original["failingRowsQuery"], d, config), - }) - } - return transformed -} -func flattenDataplexDatascanDataQualityResultRulesRule(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["column"] = - flattenDataplexDatascanDataQualityResultRulesRuleColumn(original["column"], d, config) - transformed["ignore_null"] = - flattenDataplexDatascanDataQualityResultRulesRuleIgnoreNull(original["ignoreNull"], d, config) - transformed["dimension"] = - flattenDataplexDatascanDataQualityResultRulesRuleDimension(original["dimension"], d, config) - transformed["threshold"] = - flattenDataplexDatascanDataQualityResultRulesRuleThreshold(original["threshold"], d, config) - transformed["range_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectation(original["rangeExpectation"], d, config) - transformed["non_null_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleNonNullExpectation(original["nonNullExpectation"], d, config) - 
transformed["set_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleSetExpectation(original["setExpectation"], d, config) - transformed["regex_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleRegexExpectation(original["regexExpectation"], d, config) - transformed["uniqueness_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleUniquenessExpectation(original["uniquenessExpectation"], d, config) - transformed["statistic_range_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectation(original["statisticRangeExpectation"], d, config) - transformed["row_condition_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleRowConditionExpectation(original["rowConditionExpectation"], d, config) - transformed["table_condition_expectation"] = - flattenDataplexDatascanDataQualityResultRulesRuleTableConditionExpectation(original["tableConditionExpectation"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleColumn(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleIgnoreNull(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleDimension(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleThreshold(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal - } - } - - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["min_value"] = - flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationMinValue(original["minValue"], d, config) - transformed["max_value"] = - flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationMaxValue(original["maxValue"], d, config) - transformed["strict_min_enabled"] = - flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationStrictMinEnabled(original["strictMinEnabled"], d, config) - transformed["strict_max_enabled"] = - flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationStrictMaxEnabled(original["strictMaxEnabled"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationMinValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationMaxValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationStrictMinEnabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenDataplexDatascanDataQualityResultRulesRuleRangeExpectationStrictMaxEnabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleNonNullExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - transformed := make(map[string]interface{}) - return []interface{}{transformed} -} - -func flattenDataplexDatascanDataQualityResultRulesRuleSetExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["values"] = - flattenDataplexDatascanDataQualityResultRulesRuleSetExpectationValues(original["values"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleSetExpectationValues(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleRegexExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["regex"] = - flattenDataplexDatascanDataQualityResultRulesRuleRegexExpectationRegex(original["regex"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleRegexExpectationRegex(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleUniquenessExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - transformed := make(map[string]interface{}) - return []interface{}{transformed} -} - -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["statistic"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStatistic(original["statistic"], d, config) - transformed["min_value"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationMinValue(original["minValue"], d, config) - transformed["max_value"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationMaxValue(original["maxValue"], d, config) - transformed["strict_min_enabled"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStrictMinEnabled(original["strictMinEnabled"], d, config) - transformed["strict_max_enabled"] = - flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStrictMaxEnabled(original["strictMaxEnabled"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStatistic(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationMinValue(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationMaxValue(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStrictMinEnabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleStatisticRangeExpectationStrictMaxEnabled(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesRuleRowConditionExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["sql_expression"] = - flattenDataplexDatascanDataQualityResultRulesRuleRowConditionExpectationSqlExpression(original["sqlExpression"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleRowConditionExpectationSqlExpression(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} -func flattenDataplexDatascanDataQualityResultRulesRuleTableConditionExpectation(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } transformed := make(map[string]interface{}) - transformed["sql_expression"] = - flattenDataplexDatascanDataQualityResultRulesRuleTableConditionExpectationSqlExpression(original["sqlExpression"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultRulesRuleTableConditionExpectationSqlExpression(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesPassed(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesEvaluatedCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesPassedCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesNullCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultRulesPassRatio(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // Handles the string fixed64 format - if strVal, ok := v.(string); ok { - if intVal, err := tpgresource.StringToFixed64(strVal); err == nil { - return intVal + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] } } - // number values are represented as float64 - if floatVal, ok := v.(float64); ok { - intVal := int(floatVal) - return intVal - } - - return v // let terraform core handle it otherwise -} - -func flattenDataplexDatascanDataQualityResultRulesFailingRowsQuery(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenDataplexDatascanDataQualityResultRowCount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultScannedData(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["incremental_field"] = - flattenDataplexDatascanDataQualityResultScannedDataIncrementalField(original["incrementalField"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultScannedDataIncrementalField(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["field"] = - flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldField(original["field"], d, config) - transformed["start"] = - flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldStart(original["start"], d, config) - transformed["end"] = - flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldEnd(original["end"], d, config) - return []interface{}{transformed} -} -func flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldField(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldStart(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + return transformed } -func flattenDataplexDatascanDataQualityResultScannedDataIncrementalFieldEnd(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { +func flattenDataplexDatascanEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } -func flattenDataplexDatascanDataProfileResult(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - // We want to ignore read on this field, but cannot because it is nested - return nil -} - func expandDataplexDatascanDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -2348,17 +1515,6 @@ func expandDataplexDatascanDisplayName(v interface{}, d tpgresource.TerraformRes return v, nil } -func expandDataplexDatascanLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandDataplexDatascanData(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -3106,3 +2262,14 @@ func expandDataplexDatascanDataProfileSpecExcludeFields(v interface{}, d tpgreso func expandDataplexDatascanDataProfileSpecExcludeFieldsFieldNames(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDataplexDatascanEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return 
map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/dataplex/resource_dataplex_datascan_generated_test.go b/google/services/dataplex/resource_dataplex_datascan_generated_test.go index 32ba8541f6d..aadcd0f31aa 100644 --- a/google/services/dataplex/resource_dataplex_datascan_generated_test.go +++ b/google/services/dataplex/resource_dataplex_datascan_generated_test.go @@ -51,7 +51,7 @@ func TestAccDataplexDatascan_dataplexDatascanBasicProfileExample(t *testing.T) { ResourceName: "google_dataplex_datascan.basic_profile", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "data_scan_id"}, + ImportStateVerifyIgnore: []string{"location", "data_scan_id", "labels", "terraform_labels"}, }, }, }) @@ -100,7 +100,7 @@ func TestAccDataplexDatascan_dataplexDatascanFullProfileExample(t *testing.T) { ResourceName: "google_dataplex_datascan.full_profile", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "data_scan_id"}, + ImportStateVerifyIgnore: []string{"location", "data_scan_id", "labels", "terraform_labels"}, }, }, }) @@ -182,7 +182,7 @@ func TestAccDataplexDatascan_dataplexDatascanBasicQualityExample(t *testing.T) { ResourceName: "google_dataplex_datascan.basic_quality", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "data_scan_id"}, + ImportStateVerifyIgnore: []string{"location", "data_scan_id", "labels", "terraform_labels"}, }, }, }) @@ -240,7 +240,7 @@ func TestAccDataplexDatascan_dataplexDatascanFullQualityExample(t *testing.T) { ResourceName: "google_dataplex_datascan.full_quality", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "data_scan_id"}, + ImportStateVerifyIgnore: []string{"location", "data_scan_id", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/dataplex/resource_dataplex_lake.go b/google/services/dataplex/resource_dataplex_lake.go index 66d7e14bae0..082b3f24cf0 100644 --- a/google/services/dataplex/resource_dataplex_lake.go +++ b/google/services/dataplex/resource_dataplex_lake.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceDataplexLake() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -78,11 +83,10 @@ func ResourceDataplexLake() *schema.Resource { Description: "Optional. User friendly display name.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: "Optional. User-defined labels for the lake.", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "metastore": { @@ -115,6 +119,13 @@ func ResourceDataplexLake() *schema.Resource { Description: "Output only. The time when the lake was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. 
User-defined labels for the lake.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "metastore_status": { Type: schema.TypeList, Computed: true, @@ -134,6 +145,12 @@ func ResourceDataplexLake() *schema.Resource { Description: "Output only. Current state of the lake. Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -227,7 +244,7 @@ func resourceDataplexLakeCreate(d *schema.ResourceData, meta interface{}) error Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Metastore: expandDataplexLakeMetastore(d.Get("metastore")), Project: dcl.String(project), } @@ -281,7 +298,7 @@ func resourceDataplexLakeRead(d *schema.ResourceData, meta interface{}) error { Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Metastore: expandDataplexLakeMetastore(d.Get("metastore")), Project: dcl.String(project), } @@ -320,8 +337,8 @@ func resourceDataplexLakeRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("display_name", res.DisplayName); err != nil { return fmt.Errorf("error setting display_name in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("metastore", flattenDataplexLakeMetastore(res.Metastore)); err != nil { return fmt.Errorf("error setting metastore in state: %s", err) @@ -335,6 +352,9 @@ func resourceDataplexLakeRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenDataplexLakeLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("metastore_status", flattenDataplexLakeMetastoreStatus(res.MetastoreStatus)); err != nil { return fmt.Errorf("error setting metastore_status in state: %s", err) } @@ -344,6 +364,9 @@ func resourceDataplexLakeRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("state", res.State); err != nil { return fmt.Errorf("error setting state in state: %s", err) } + if err = d.Set("terraform_labels", flattenDataplexLakeTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -365,7 +388,7 @@ func resourceDataplexLakeUpdate(d *schema.ResourceData, meta interface{}) error Name: 
dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Metastore: expandDataplexLakeMetastore(d.Get("metastore")), Project: dcl.String(project), } @@ -414,7 +437,7 @@ func resourceDataplexLakeDelete(d *schema.ResourceData, meta interface{}) error Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Metastore: expandDataplexLakeMetastore(d.Get("metastore")), Project: dcl.String(project), } @@ -519,3 +542,33 @@ func flattenDataplexLakeMetastoreStatus(obj *dataplex.LakeMetastoreStatus) inter return []interface{}{transformed} } + +func flattenDataplexLakeLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenDataplexLakeTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/dataplex/resource_dataplex_lake_generated_test.go b/google/services/dataplex/resource_dataplex_lake_generated_test.go index 4d4eafb4641..55b8636fb55 100644 --- a/google/services/dataplex/resource_dataplex_lake_generated_test.go +++ b/google/services/dataplex/resource_dataplex_lake_generated_test.go @@ -51,17 +51,19 @@ func TestAccDataplexLake_BasicLake(t *testing.T) { Config: testAccDataplexLake_BasicLake(context), }, { - ResourceName: "google_dataplex_lake.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dataplex_lake.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDataplexLake_BasicLakeUpdate0(context), }, { - ResourceName: "google_dataplex_lake.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dataplex_lake.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -74,12 +76,11 @@ resource "google_dataplex_lake" "primary" { name = "tf-test-lake%{random_suffix}" description = "Lake for DCL" display_name = "Lake for DCL" + project = "%{project_name}" labels = { my-lake = "exists" } - - project = "%{project_name}" } @@ -93,12 +94,11 @@ resource "google_dataplex_lake" "primary" { name = "tf-test-lake%{random_suffix}" description = "Updated description for lake" display_name = "Lake for DCL" + project = "%{project_name}" labels = { my-lake = "exists" } - - project = "%{project_name}" } diff --git a/google/services/dataplex/resource_dataplex_task.go b/google/services/dataplex/resource_dataplex_task.go index c22bdfebf6b..5e7e2a09571 100644 --- a/google/services/dataplex/resource_dataplex_task.go +++ b/google/services/dataplex/resource_dataplex_task.go @@ -24,6 +24,7 @@ import ( "strings" "time" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceDataplexTask() *schema.Resource { Delete: schema.DefaultTimeout(5 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "execution_spec": { Type: schema.TypeList, @@ -133,10 +139,14 @@ func ResourceDataplexTask() *schema.Resource { Description: `User friendly display name.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels for the task.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels for the task. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "lake": { Type: schema.TypeString, @@ -448,6 +458,12 @@ func ResourceDataplexTask() *schema.Resource { Computed: true, Description: `The time when the task was created.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "execution_status": { Type: schema.TypeList, Computed: true, @@ -526,6 +542,13 @@ func ResourceDataplexTask() *schema.Resource { Computed: true, Description: `Current state of the task.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -567,12 +590,6 @@ func resourceDataplexTaskCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDataplexTaskLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } triggerSpecProp, err := expandDataplexTaskTriggerSpec(d.Get("trigger_spec"), d, config) if err != nil { return err @@ -597,6 +614,12 @@ func resourceDataplexTaskCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("notebook"); !tpgresource.IsEmptyValue(reflect.ValueOf(notebookProp)) && (ok || !reflect.DeepEqual(v, notebookProp)) { obj["notebook"] = notebookProp } + labelsProp, err := expandDataplexTaskEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataplexBasePath}}projects/{{project}}/locations/{{location}}/lakes/{{lake}}/tasks?task_id={{task_id}}") if err != nil { @@ -731,6 +754,12 
@@ func resourceDataplexTaskRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("notebook", flattenDataplexTaskNotebook(res["notebook"], d, config)); err != nil { return fmt.Errorf("Error reading Task: %s", err) } + if err := d.Set("terraform_labels", flattenDataplexTaskTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Task: %s", err) + } + if err := d.Set("effective_labels", flattenDataplexTaskEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Task: %s", err) + } return nil } @@ -763,12 +792,6 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandDataplexTaskLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } triggerSpecProp, err := expandDataplexTaskTriggerSpec(d.Get("trigger_spec"), d, config) if err != nil { return err @@ -793,6 +816,12 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("notebook"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, notebookProp)) { obj["notebook"] = notebookProp } + labelsProp, err := expandDataplexTaskEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataplexBasePath}}projects/{{project}}/locations/{{location}}/lakes/{{lake}}/tasks?task_id={{task_id}}") if err != nil { @@ -810,10 +839,6 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("trigger_spec") { updateMask = append(updateMask, "triggerSpec") } @@ -829,6 +854,10 @@ func resourceDataplexTaskUpdate(d *schema.ResourceData, meta interface{}) error if d.HasChange("notebook") { updateMask = append(updateMask, "notebook") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -924,9 +953,9 @@ func resourceDataplexTaskDelete(d *schema.ResourceData, meta interface{}) error func resourceDataplexTaskImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/lakes/(?P<lake>[^/]+)/tasks/(?P<task_id>[^/]+)", - "(?P<project>[^/]+)/(?P<location>[^/]+)/(?P<lake>[^/]+)/(?P<task_id>[^/]+)", - "(?P<location>[^/]+)/(?P<lake>[^/]+)/(?P<task_id>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/lakes/(?P<lake>[^/]+)/tasks/(?P<task_id>[^/]+)$", + "^(?P<project>[^/]+)/(?P<location>[^/]+)/(?P<lake>[^/]+)/(?P<task_id>[^/]+)$", + "^(?P<location>[^/]+)/(?P<lake>[^/]+)/(?P<task_id>[^/]+)$", }, d, config); err != nil { return nil, err } @@ -970,7 +999,18 @@ func flattenDataplexTaskState(v interface{}, d *schema.ResourceData, config *tra } func
flattenDataplexTaskLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDataplexTaskTriggerSpec(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1511,6 +1551,25 @@ func flattenDataplexTaskNotebookArchiveUris(v interface{}, d *schema.ResourceDat return v } +func flattenDataplexTaskTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenDataplexTaskEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandDataplexTaskDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1519,17 +1578,6 @@ func expandDataplexTaskDisplayName(v interface{}, d tpgresource.TerraformResourc return v, nil } -func expandDataplexTaskLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandDataplexTaskTriggerSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -2168,3 +2216,14 @@ func expandDataplexTaskNotebookFileUris(v interface{}, d tpgresource.TerraformRe func expandDataplexTaskNotebookArchiveUris(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDataplexTaskEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/dataplex/resource_dataplex_task_generated_test.go b/google/services/dataplex/resource_dataplex_task_generated_test.go index 61005f7d1b6..5b4a32c2d78 100644 --- a/google/services/dataplex/resource_dataplex_task_generated_test.go +++ b/google/services/dataplex/resource_dataplex_task_generated_test.go @@ -51,7 +51,7 @@ func TestAccDataplexTask_dataplexTaskBasicExample(t *testing.T) { ResourceName: "google_dataplex_task.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "lake", "task_id"}, + ImportStateVerifyIgnore: []string{"location", "lake", "task_id", "labels", "terraform_labels"}, }, }, }) @@ -128,7 +128,7 @@ func TestAccDataplexTask_dataplexTaskSparkExample(t *testing.T) { ResourceName: "google_dataplex_task.example_spark", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "lake", "task_id"}, + ImportStateVerifyIgnore: []string{"location", "lake", "task_id", "labels", 
"terraform_labels"}, }, }, }) @@ -220,7 +220,7 @@ func TestAccDataplexTask_dataplexTaskNotebookExample(t *testing.T) { ResourceName: "google_dataplex_task.example_notebook", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location", "lake", "task_id"}, + ImportStateVerifyIgnore: []string{"location", "lake", "task_id", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/dataplex/resource_dataplex_zone.go b/google/services/dataplex/resource_dataplex_zone.go index 49e2caf9245..a9ef60f6a30 100644 --- a/google/services/dataplex/resource_dataplex_zone.go +++ b/google/services/dataplex/resource_dataplex_zone.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceDataplexZone() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "discovery_spec": { @@ -109,11 +114,10 @@ func ResourceDataplexZone() *schema.Resource { Description: "Optional. User friendly display name.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: "Optional. User defined labels for the zone.", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "project": { @@ -138,12 +142,25 @@ func ResourceDataplexZone() *schema.Resource { Description: "Output only. The time when the zone was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. User defined labels for the zone.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "state": { Type: schema.TypeString, Computed: true, Description: "Output only. Current state of the zone. 
Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -311,7 +328,7 @@ func resourceDataplexZoneCreate(d *schema.ResourceData, meta interface{}) error Type: dataplex.ZoneTypeEnumRef(d.Get("type").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -368,7 +385,7 @@ func resourceDataplexZoneRead(d *schema.ResourceData, meta interface{}) error { Type: dataplex.ZoneTypeEnumRef(d.Get("type").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -418,8 +435,8 @@ func resourceDataplexZoneRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("display_name", res.DisplayName); err != nil { return fmt.Errorf("error setting display_name in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) @@ -430,9 +447,15 @@ func resourceDataplexZoneRead(d *schema.ResourceData, meta interface{}) error { if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenDataplexZoneLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("state", res.State); err != nil { return fmt.Errorf("error setting state in state: %s", err) } + if err = d.Set("terraform_labels", flattenDataplexZoneTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -458,7 +481,7 @@ func resourceDataplexZoneUpdate(d *schema.ResourceData, meta interface{}) error Type: dataplex.ZoneTypeEnumRef(d.Get("type").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } directive := tpgdclresource.UpdateDirective @@ -510,7 +533,7 @@ func resourceDataplexZoneDelete(d *schema.ResourceData, meta interface{}) error Type: dataplex.ZoneTypeEnumRef(d.Get("type").(string)), Description: dcl.String(d.Get("description").(string)), DisplayName: dcl.String(d.Get("display_name").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -695,3 +718,33 @@ func flattenDataplexZoneAssetStatus(obj *dataplex.ZoneAssetStatus) interface{} { return 
[]interface{}{transformed} } + +func flattenDataplexZoneLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenDataplexZoneTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/dataplex/resource_dataplex_zone_generated_test.go b/google/services/dataplex/resource_dataplex_zone_generated_test.go index d9d326014d5..5b76dde15dd 100644 --- a/google/services/dataplex/resource_dataplex_zone_generated_test.go +++ b/google/services/dataplex/resource_dataplex_zone_generated_test.go @@ -51,17 +51,19 @@ func TestAccDataplexZone_BasicZone(t *testing.T) { Config: testAccDataplexZone_BasicZone(context), }, { - ResourceName: "google_dataplex_zone.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dataplex_zone.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDataplexZone_BasicZoneUpdate0(context), }, { - ResourceName: "google_dataplex_zone.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dataplex_zone.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -85,8 +87,8 @@ resource "google_dataplex_zone" "primary" { type = "RAW" description = "Zone for DCL" display_name = "Zone for DCL" - labels = {} project = "%{project_name}" + labels = {} } resource "google_dataplex_lake" "basic" { @@ -94,12 +96,11 @@ resource "google_dataplex_lake" "basic" { name = "tf-test-lake%{random_suffix}" description = "Lake for DCL" display_name = "Lake for DCL" + project = "%{project_name}" labels = { my-lake = "exists" } - - project = "%{project_name}" } @@ -124,12 +125,11 @@ resource "google_dataplex_zone" "primary" { type = "RAW" description = "Zone for DCL Updated" display_name = "Zone for DCL" + project = "%{project_name}" labels = { updated_label = "exists" } - - project = "%{project_name}" } resource "google_dataplex_lake" "basic" { @@ -137,12 +137,11 @@ resource "google_dataplex_lake" "basic" { name = "tf-test-lake%{random_suffix}" description = "Lake for DCL" display_name = "Lake for DCL" + project = "%{project_name}" labels = { my-lake = "exists" } - - project = "%{project_name}" } diff --git a/google/services/dataproc/resource_dataproc_autoscaling_policy.go b/google/services/dataproc/resource_dataproc_autoscaling_policy.go index d69334e8fa9..692d68d1a12 100644 --- a/google/services/dataproc/resource_dataproc_autoscaling_policy.go +++ b/google/services/dataproc/resource_dataproc_autoscaling_policy.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceDataprocAutoscalingPolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: 
map[string]*schema.Schema{ "policy_id": { Type: schema.TypeString, @@ -505,9 +510,9 @@ func resourceDataprocAutoscalingPolicyDelete(d *schema.ResourceData, meta interf func resourceDataprocAutoscalingPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/autoscalingPolicies/(?P<policy_id>[^/]+)", - "(?P<project>[^/]+)/(?P<location>[^/]+)/(?P<policy_id>[^/]+)", - "(?P<location>[^/]+)/(?P<policy_id>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/autoscalingPolicies/(?P<policy_id>[^/]+)$", + "^(?P<project>[^/]+)/(?P<location>[^/]+)/(?P<policy_id>[^/]+)$", + "^(?P<location>[^/]+)/(?P<policy_id>[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dataproc/resource_dataproc_cluster.go b/google/services/dataproc/resource_dataproc_cluster.go index 2194cd2fd53..a864c1f3ab1 100644 --- a/google/services/dataproc/resource_dataproc_cluster.go +++ b/google/services/dataproc/resource_dataproc_cluster.go @@ -11,6 +11,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -117,7 +118,8 @@ var ( } ) -const resourceDataprocGoogleProvidedLabelPrefix = "labels.goog-dataproc" +const resourceDataprocGoogleLabelPrefix = "goog-dataproc" +const resourceDataprocGoogleProvidedLabelPrefix = "labels." + resourceDataprocGoogleLabelPrefix func resourceDataprocLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { if strings.HasPrefix(k, resourceDataprocGoogleProvidedLabelPrefix) && new == "" { @@ -163,6 +165,20 @@ func ResourceDataprocCluster() *schema.Resource { Delete: schema.DefaultTimeout(45 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), + + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceDataprocClusterResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceDataprocClusterStateUpgradeV0, + Version: 0, + }, + }, + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -219,10 +235,24 @@ func ResourceDataprocCluster() *schema.Resource { Type: schema.TypeMap, Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, - // GCP automatically adds labels - DiffSuppressFunc: resourceDataprocLabelDiffSuppress, - Computed: true, - Description: `The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself including goog-dataproc-cluster-name which is the name of the cluster.`, + Description: `The list of the labels (key/value pairs) configured on the resource and to be applied to instances in the cluster. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration.
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, }, "virtual_cluster_config": { @@ -1325,8 +1355,8 @@ func resourceDataprocClusterCreate(d *schema.ResourceData, meta interface{}) err return err } - if _, ok := d.GetOk("labels"); ok { - cluster.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + cluster.Labels = tpgresource.ExpandEffectiveLabels(d) } // Checking here caters for the case where the user does not specify cluster_config @@ -1970,8 +2000,8 @@ func resourceDataprocClusterUpdate(d *schema.ResourceData, meta interface{}) err updMask := []string{} - if d.HasChange("labels") { - v := d.Get("labels") + if d.HasChange("effective_labels") { + v := d.Get("effective_labels") m := make(map[string]string) for k, val := range v.(map[string]interface{}) { m[k] = val.(string) @@ -2084,10 +2114,19 @@ func resourceDataprocClusterRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("region", region); err != nil { return fmt.Errorf("Error setting region: %s", err) } - if err := d.Set("labels", cluster.Labels); err != nil { + + if err := tpgresource.SetLabels(cluster.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(cluster.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + + if err := d.Set("effective_labels", cluster.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } + var cfg []map[string]interface{} cfg, err = flattenClusterConfig(d, cluster.Config) diff --git a/google/services/dataproc/resource_dataproc_cluster_migrate.go b/google/services/dataproc/resource_dataproc_cluster_migrate.go new file mode 100644 index 00000000000..ef7714efeea --- /dev/null +++ b/google/services/dataproc/resource_dataproc_cluster_migrate.go @@ -0,0 +1,983 @@ +// Copyright (c) HashiCorp, Inc. 
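The `labels` / `terraform_labels` / `effective_labels` split added above follows the same pattern across the dataplex and dataproc resources in this diff: `effective_labels` mirrors everything the API reports, `terraform_labels` is the configured map plus provider-level default labels, and `labels` is read back as only the keys present in configuration. The standalone Go sketch below illustrates both directions of that mapping; the helper names and the default-label handling are illustrative assumptions, not the provider's actual implementation.

```go
// Standalone sketch of the label model used in this diff; helper names and the
// default-label source are illustrative, not the provider's real API.
package main

import "fmt"

// mergeLabels models plan/apply-time behaviour: provider default_labels plus the
// labels written in configuration become terraform_labels / effective_labels.
func mergeLabels(providerDefaults, configured map[string]string) map[string]string {
	merged := make(map[string]string)
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range configured {
		merged[k] = v // assumed: resource-level labels win on conflict
	}
	return merged
}

// projectLabels models read-time behaviour: only keys present in configuration are
// copied back into the non-authoritative `labels` field, mirroring the
// flatten*Labels helpers added in this diff.
func projectLabels(apiLabels, configured map[string]string) map[string]string {
	out := make(map[string]string)
	for k := range configured {
		out[k] = apiLabels[k]
	}
	return out
}

func main() {
	configured := map[string]string{"key1": "value1"}
	defaults := map[string]string{"team": "data-eng"} // assumed provider default_labels

	effective := mergeLabels(defaults, configured)
	// The service may add its own system labels on top, e.g. goog-dataproc-cluster-name.
	effective["goog-dataproc-cluster-name"] = "tf-test-dproc"

	fmt.Println(effective)                            // everything present on the resource
	fmt.Println(projectLabels(effective, configured)) // map[key1:value1]
}
```

This projection is also why the generated import tests in this diff add `labels` and `terraform_labels` to `ImportStateVerifyIgnore`: a freshly imported resource has no configuration to project the API's labels onto.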
+// SPDX-License-Identifier: MPL-2.0 +package dataproc + +import ( + "context" + "fmt" + "regexp" + "time" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + + "github.com/hashicorp/terraform-provider-google/google/tpgresource" +) + +func resourceDataprocClusterResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the cluster, unique within the project and zone.`, + ValidateFunc: func(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if len(value) > 55 { + errors = append(errors, fmt.Errorf( + "%q cannot be longer than 55 characters", k)) + } + if !regexp.MustCompile("^[a-z0-9-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q can only contain lowercase letters, numbers and hyphens", k)) + } + if !regexp.MustCompile("^[a-z]").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must start with a letter", k)) + } + if !regexp.MustCompile("[a-z0-9]$").MatchString(value) { + errors = append(errors, fmt.Errorf( + "%q must end with a number or a letter", k)) + } + return + }, + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the cluster will exist. If it is not provided, the provider project is used.`, + }, + + "region": { + Type: schema.TypeString, + Optional: true, + Default: "global", + ForceNew: true, + Description: `The region in which the cluster and associated nodes will be created in. Defaults to global.`, + }, + + "graceful_decommission_timeout": { + Type: schema.TypeString, + Optional: true, + Default: "0s", + Description: `The timeout duration which allows graceful decomissioning when you change the number of worker nodes directly through a terraform apply`, + }, + + "labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + // GCP automatically adds labels + DiffSuppressFunc: resourceDataprocLabelDiffSuppress, + Computed: true, + Description: `The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself including goog-dataproc-cluster-name which is the name of the cluster.`, + }, + + "virtual_cluster_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The virtual cluster config is used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster. Dataproc may set default values, and values may change when clusters are updated. Exactly one of config or virtualClusterConfig must be specified.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "staging_bucket": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: virtualClusterConfigKeys, + ForceNew: true, + Description: `A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. 
If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket.`, + }, + + "auxiliary_services_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + AtLeastOneOf: virtualClusterConfigKeys, + Description: `Auxiliary services configuration for a Cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "metastore_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + AtLeastOneOf: auxiliaryServicesConfigKeys, + Description: `The Hive Metastore configuration for this workload.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "dataproc_metastore_service": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + AtLeastOneOf: auxiliaryServicesMetastoreConfigKeys, + Description: `The Hive Metastore configuration for this workload.`, + }, + }, + }, + }, + + "spark_history_server_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + AtLeastOneOf: auxiliaryServicesConfigKeys, + Description: `The Spark History Server configuration for the workload.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "dataproc_cluster": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + AtLeastOneOf: auxiliaryServicesSparkHistoryServerConfigKeys, + Description: `Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload.`, + }, + }, + }, + }, + }, + }, + }, + + "kubernetes_cluster_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + AtLeastOneOf: virtualClusterConfigKeys, + Description: `The configuration for running the Dataproc cluster on Kubernetes.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "kubernetes_namespace": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + AtLeastOneOf: kubernetesClusterConfigKeys, + Description: `A namespace within the Kubernetes cluster to deploy into. If this namespace does not exist, it is created. If it exists, Dataproc verifies that another Dataproc VirtualCluster is not installed into it. If not specified, the name of the Dataproc Cluster is used.`, + }, + + "kubernetes_software_config": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Description: `The software configuration for this Dataproc cluster running on Kubernetes.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "component_version": { + Type: schema.TypeMap, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The components that should be installed in this Dataproc cluster. The key must be a string from the KubernetesComponent enumeration. The value is the version of the software to be installed.`, + }, + + "properties": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + DiffSuppressFunc: resourceDataprocPropertyDiffSuppress, + Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: `The properties to set on daemon config files. 
Property keys are specified in prefix:property format, for example spark:spark.kubernetes.container.image.`, + }, + }, + }, + }, + + "gke_cluster_config": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Description: `The configuration for running the Dataproc cluster on GKE.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "gke_cluster_target": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + AtLeastOneOf: gkeClusterConfigKeys, + Description: `A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'`, + }, + + "node_pool_target": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: gkeClusterConfigKeys, + MinItems: 1, + Description: `GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "node_pool": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{nodePool}'`, + }, + + "roles": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + Required: true, + Description: `The roles associated with the GKE node pool.`, + }, + + "node_pool_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Input only. The configuration for the GKE node pool.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The node pool configuration.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "machine_type": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `The name of a Compute Engine machine type.`, + }, + + "local_ssd_count": { + Type: schema.TypeInt, + ForceNew: true, + Optional: true, + Description: `The minimum number of nodes in the node pool. Must be >= 0 and <= maxNodeCount.`, + }, + + "preemptible": { + Type: schema.TypeBool, + ForceNew: true, + Optional: true, + Description: `Whether the nodes are created as preemptible VM instances. Preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).`, + }, + + "min_cpu_platform": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Description: `Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. 
Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".`, + }, + + "spot": { + Type: schema.TypeBool, + ForceNew: true, + Optional: true, + Description: `Spot flag for enabling Spot VM, which is a rebrand of the existing preemptible flag.`, + }, + }, + }, + }, + + "locations": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + Required: true, + Description: `The list of Compute Engine zones where node pool nodes associated with a Dataproc on GKE virtual cluster will be located.`, + }, + + "autoscaling": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "min_node_count": { + Type: schema.TypeInt, + ForceNew: true, + Optional: true, + Description: `The minimum number of nodes in the node pool. Must be >= 0 and <= maxNodeCount.`, + }, + + "max_node_count": { + Type: schema.TypeInt, + ForceNew: true, + Optional: true, + Description: `The maximum number of nodes in the node pool. Must be >= minNodeCount, and must be > 0.`, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + + "cluster_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `Allows you to configure various aspects of the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "staging_bucket": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + ForceNew: true, + Description: `The Cloud Storage staging bucket used to stage files, such as Hadoop jars, between client machines and the cluster. Note: If you don't explicitly specify a staging_bucket then GCP will auto create / assign one for you. However, you are not guaranteed an auto generated bucket which is solely dedicated to your cluster; it may be shared with other clusters in the same region/zone also choosing to use the auto generation option.`, + }, + // If the user does not specify a staging bucket, GCP will allocate one automatically. + // The staging_bucket field provides a way for the user to supply their own + // staging bucket. The bucket field is purely a computed field which details + // the definitive bucket allocated and in use (either the user supplied one via + // staging_bucket, or the GCP generated one) + "bucket": { + Type: schema.TypeString, + Computed: true, + Description: ` The name of the cloud storage bucket ultimately used to house the staging data for the cluster. If staging_bucket is specified, it will contain this value, otherwise it will be the auto generated name.`, + }, + + "temp_bucket": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: clusterConfigKeys, + ForceNew: true, + Description: `The Cloud Storage temp bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. 
Note: If you don't explicitly specify a temp_bucket then GCP will auto create / assign one for you.`, + }, + + "gce_cluster_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `Common config settings for resources of Google Compute Engine cluster instances, applicable to all instances in the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "zone": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + Description: `The GCP zone where your data is stored and used (i.e. where the master and the worker nodes will be created in). If region is set to 'global' (default) then zone is mandatory, otherwise GCP is able to make use of Auto Zone Placement to determine this automatically for you. Note: This setting additionally determines and restricts which computing resources are available for use with other configs such as cluster_config.master_config.machine_type and cluster_config.worker_config.machine_type.`, + }, + + "network": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + ConflictsWith: []string{"cluster_config.0.gce_cluster_config.0.subnetwork"}, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine network to the cluster will be part of. Conflicts with subnetwork. If neither is specified, this defaults to the "default" network.`, + }, + + "subnetwork": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + ConflictsWith: []string{"cluster_config.0.gce_cluster_config.0.network"}, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name or self_link of the Google Compute Engine subnetwork the cluster will be part of. Conflicts with network.`, + }, + + "tags": { + Type: schema.TypeSet, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `The list of instance tags applied to instances in the cluster. Tags are used to identify valid sources or targets for network firewalls.`, + }, + + "service_account": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + Description: `The service account to be used by the Node VMs. If not specified, the "default" service account is used.`, + }, + + "service_account_scopes": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + Description: `The set of Google API scopes to be made available on all of the node VMs under the service_account specified. These can be either FQDNs, or scope aliases.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + StateFunc: func(v interface{}) string { + return tpgresource.CanonicalizeServiceScope(v.(string)) + }, + }, + Set: tpgresource.StringScopeHashcode, + }, + + "internal_ip_only": { + Type: schema.TypeBool, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + ForceNew: true, + Default: false, + Description: `By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. If set to true, all instances in the cluster will only have internal IP addresses. 
Note: Private Google Access (also known as privateIpGoogleAccess) must be enabled on the subnetwork that the cluster will be launched in.`, + }, + + "metadata": { + Type: schema.TypeMap, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + Elem: &schema.Schema{Type: schema.TypeString}, + ForceNew: true, + Description: `A map of the Compute Engine metadata entries to add to all instances`, + }, + + "shielded_instance_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `Shielded Instance Config for clusters using Compute Engine Shielded VMs.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_secure_boot": { + Type: schema.TypeBool, + Optional: true, + Default: false, + AtLeastOneOf: schieldedInstanceConfigKeys, + ForceNew: true, + Description: `Defines whether instances have Secure Boot enabled.`, + }, + "enable_vtpm": { + Type: schema.TypeBool, + Optional: true, + Default: false, + AtLeastOneOf: schieldedInstanceConfigKeys, + ForceNew: true, + Description: `Defines whether instances have the vTPM enabled.`, + }, + "enable_integrity_monitoring": { + Type: schema.TypeBool, + Optional: true, + Default: false, + AtLeastOneOf: schieldedInstanceConfigKeys, + ForceNew: true, + Description: `Defines whether instances have integrity monitoring enabled.`, + }, + }, + }, + }, + + "reservation_affinity": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `Reservation Affinity for consuming Zonal reservation.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "consume_reservation_type": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: reservationAffinityKeys, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"NO_RESERVATION", "ANY_RESERVATION", "SPECIFIC_RESERVATION"}, false), + Description: `Type of reservation to consume.`, + }, + "key": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: reservationAffinityKeys, + ForceNew: true, + Description: `Corresponds to the label key of reservation resource.`, + }, + "values": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + AtLeastOneOf: reservationAffinityKeys, + ForceNew: true, + Description: `Corresponds to the label values of reservation resource.`, + }, + }, + }, + }, + + "node_group_affinity": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: gceClusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `Node Group Affinity for sole-tenant clusters.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "node_group_uri": { + Type: schema.TypeString, + ForceNew: true, + Required: true, + Description: `The URI of a sole-tenant that the cluster will be created on.`, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + }, + }, + }, + }, + }, + }, + }, + + "master_config": instanceConfigSchema("master_config"), + "worker_config": instanceConfigSchema("worker_config"), + // preemptible_worker_config has a slightly different config + "preemptible_worker_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `The Google Compute Engine config settings for the additional (aka preemptible) instances in a cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "num_instances": { + Type: schema.TypeInt, + Optional: true, + 
Computed: true, + Description: `Specifies the number of preemptible nodes to create. Defaults to 0.`, + AtLeastOneOf: []string{ + "cluster_config.0.preemptible_worker_config.0.num_instances", + "cluster_config.0.preemptible_worker_config.0.preemptibility", + "cluster_config.0.preemptible_worker_config.0.disk_config", + }, + }, + + // API does not honour this if set ... + // It always uses whatever is specified for the worker_config + // "machine_type": { ... } + // "min_cpu_platform": { ... } + "preemptibility": { + Type: schema.TypeString, + Optional: true, + Description: `Specifies the preemptibility of the secondary nodes. Defaults to PREEMPTIBLE.`, + AtLeastOneOf: []string{ + "cluster_config.0.preemptible_worker_config.0.num_instances", + "cluster_config.0.preemptible_worker_config.0.preemptibility", + "cluster_config.0.preemptible_worker_config.0.disk_config", + }, + ForceNew: true, + ValidateFunc: validation.StringInSlice([]string{"PREEMPTIBILITY_UNSPECIFIED", "NON_PREEMPTIBLE", "PREEMPTIBLE", "SPOT"}, false), + Default: "PREEMPTIBLE", + }, + + "disk_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + Description: `Disk Config`, + AtLeastOneOf: []string{ + "cluster_config.0.preemptible_worker_config.0.num_instances", + "cluster_config.0.preemptible_worker_config.0.preemptibility", + "cluster_config.0.preemptible_worker_config.0.disk_config", + }, + MaxItems: 1, + + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "num_local_ssds": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + AtLeastOneOf: preemptibleWorkerDiskConfigKeys, + ForceNew: true, + Description: `The amount of local SSD disks that will be attached to each preemptible worker node. Defaults to 0.`, + }, + + "boot_disk_size_gb": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + AtLeastOneOf: preemptibleWorkerDiskConfigKeys, + ForceNew: true, + ValidateFunc: validation.IntAtLeast(10), + Description: `Size of the primary disk attached to each preemptible worker node, specified in GB. The smallest allowed disk size is 10GB. GCP will default to a predetermined computed value if not set (currently 500GB). Note: If SSDs are not attached, it also contains the HDFS data blocks and Hadoop working directories.`, + }, + + "boot_disk_type": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: preemptibleWorkerDiskConfigKeys, + ForceNew: true, + Default: "pd-standard", + Description: `The disk type of the primary disk attached to each preemptible worker node. Such as "pd-ssd" or "pd-standard". 
Defaults to "pd-standard".`, + }, + }, + }, + }, + + "instance_names": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `List of preemptible instance names which have been assigned to the cluster.`, + }, + }, + }, + }, + + "security_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `Security related configuration.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "kerberos_config": { + Type: schema.TypeList, + Required: true, + Description: "Kerberos related configuration", + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cross_realm_trust_admin_server": { + Type: schema.TypeString, + Optional: true, + Description: `The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.`, + }, + "cross_realm_trust_kdc": { + Type: schema.TypeString, + Optional: true, + Description: `The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.`, + }, + "cross_realm_trust_realm": { + Type: schema.TypeString, + Optional: true, + Description: `The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.`, + }, + "cross_realm_trust_shared_password_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster +Kerberos realm and the remote trusted realm, in a cross realm trust relationship.`, + }, + "enable_kerberos": { + Type: schema.TypeBool, + Optional: true, + Description: `Flag to indicate whether to Kerberize the cluster.`, + }, + "kdc_db_key_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.`, + }, + "key_password_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.`, + }, + "keystore_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.`, + }, + "keystore_password_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of a KMS encrypted file containing +the password to the user provided keystore. For the self-signed certificate, this password is generated +by Dataproc`, + }, + "kms_key_uri": { + Type: schema.TypeString, + Required: true, + Description: `The uri of the KMS key used to encrypt various sensitive files.`, + }, + "realm": { + Type: schema.TypeString, + Optional: true, + Description: `The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.`, + }, + "root_principal_password_uri": { + Type: schema.TypeString, + Required: true, + Description: `The cloud Storage URI of a KMS encrypted file containing the root principal password.`, + }, + "tgt_lifetime_hours": { + Type: schema.TypeInt, + Optional: true, + Description: `The lifetime of the ticket granting ticket, in hours.`, + }, + "truststore_password_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. 
For the self-signed certificate, this password is generated by Dataproc.`, + }, + "truststore_uri": { + Type: schema.TypeString, + Optional: true, + Description: `The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.`, + }, + }, + }, + }, + }, + }, + }, + + "software_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + Computed: true, + MaxItems: 1, + Description: `The config settings for software inside the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "image_version": { + Type: schema.TypeString, + Optional: true, + Computed: true, + AtLeastOneOf: clusterSoftwareConfigKeys, + ForceNew: true, + DiffSuppressFunc: dataprocImageVersionDiffSuppress, + Description: `The Cloud Dataproc image version to use for the cluster - this controls the sets of software versions installed onto the nodes when you create clusters. If not specified, defaults to the latest version.`, + }, + "override_properties": { + Type: schema.TypeMap, + Optional: true, + AtLeastOneOf: clusterSoftwareConfigKeys, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A list of override and additional properties (key/value pairs) used to modify various aspects of the common configuration files used when creating a cluster.`, + }, + + "properties": { + Type: schema.TypeMap, + Computed: true, + Description: `A list of the properties used to set the daemon config files. This will include any values supplied by the user via cluster_config.software_config.override_properties`, + }, + + // We have two versions of the properties field here because by default + // dataproc will set a number of default properties for you out of the + // box. If you want to override one or more, if we only had one field, + // you would need to add in all these values as well otherwise you would + // get a diff. To make this easier, 'properties' simply contains the computed + // values (including overrides) for all properties, whilst override_properties + // is only for properties the user specifically wants to override. If nothing + // is overridden, this will be empty. + + "optional_components": { + Type: schema.TypeSet, + Optional: true, + AtLeastOneOf: clusterSoftwareConfigKeys, + Description: `The set of optional components to activate on the cluster.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + + "initialization_action": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + ForceNew: true, + Description: `Commands to execute on each node after config is completed. You can specify multiple versions of these.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "script": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The script to be executed during initialization of the cluster. The script must be a GCS file with a gs:// prefix.`, + }, + + "timeout_sec": { + Type: schema.TypeInt, + Optional: true, + Default: 300, + ForceNew: true, + Description: `The maximum duration (in seconds) which script is allowed to take to execute its action. 
GCP will default to a predetermined computed value if not set (currently 300).`, + }, + }, + }, + }, + "encryption_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + MaxItems: 1, + Description: `The Customer managed encryption keys settings for the cluster.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "kms_key_name": { + Type: schema.TypeString, + Required: true, + Description: `The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.`, + }, + }, + }, + }, + "autoscaling_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + MaxItems: 1, + Description: `The autoscaling policy config associated with the cluster.`, + DiffSuppressFunc: tpgresource.EmptyOrUnsetBlockDiffSuppress, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "policy_uri": { + Type: schema.TypeString, + Required: true, + Description: `The autoscaling policy used by the cluster.`, + DiffSuppressFunc: tpgresource.LocationDiffSuppress, + }, + }, + }, + }, + "metastore_config": { + Type: schema.TypeList, + Optional: true, + AtLeastOneOf: clusterConfigKeys, + MaxItems: 1, + Description: `Specifies a Metastore configuration.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dataproc_metastore_service": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `Resource name of an existing Dataproc Metastore service.`, + }, + }, + }, + }, + "lifecycle_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + AtLeastOneOf: clusterConfigKeys, + Description: `The settings for auto deletion cluster schedule.`, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "idle_delete_ttl": { + Type: schema.TypeString, + Optional: true, + Description: `The duration to keep the cluster alive while idling (no jobs running). After this TTL, the cluster will be deleted. Valid range: [10m, 14d].`, + AtLeastOneOf: []string{ + "cluster_config.0.lifecycle_config.0.idle_delete_ttl", + "cluster_config.0.lifecycle_config.0.auto_delete_time", + }, + }, + "idle_start_time": { + Type: schema.TypeString, + Computed: true, + Description: `Time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness.`, + }, + // the API also has the auto_delete_ttl option in its request, however, + // the value is not returned in the response, rather the auto_delete_time + // after calculating ttl with the update time is returned, thus, for now + // we will only allow auto_delete_time to updated. + "auto_delete_time": { + Type: schema.TypeString, + Optional: true, + Description: `The time when cluster will be auto-deleted. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, + DiffSuppressFunc: tpgresource.TimestampDiffSuppress(time.RFC3339Nano), + AtLeastOneOf: []string{ + "cluster_config.0.lifecycle_config.0.idle_delete_ttl", + "cluster_config.0.lifecycle_config.0.auto_delete_time", + }, + }, + }, + }, + }, + "endpoint_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Description: `The config settings for port access on the cluster. 
Structure defined below.`, + AtLeastOneOf: clusterConfigKeys, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_http_port_access": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + Description: `The flag to enable http access to specific ports on the cluster from external sources (aka Component Gateway). Defaults to false.`, + }, + "http_ports": { + Type: schema.TypeMap, + Computed: true, + Description: `The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.`, + }, + }, + }, + }, + + "dataproc_metric_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Description: `The config for Dataproc metrics.`, + AtLeastOneOf: clusterConfigKeys, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "metrics": { + Type: schema.TypeList, + Required: true, + Description: `Metrics sources to enable.`, + Elem: metricsSchema(), + }, + }, + }, + }, + }, + }, + }, + }, + UseJSONNumber: true, + } +} + +func ResourceDataprocClusterStateUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + return tpgresource.LabelsStateUpgrade(rawState, resourceDataprocGoogleLabelPrefix) +} diff --git a/google/services/dataproc/resource_dataproc_cluster_test.go b/google/services/dataproc/resource_dataproc_cluster_test.go index fede6e80cf1..9a44cebd6ff 100644 --- a/google/services/dataproc/resource_dataproc_cluster_test.go +++ b/google/services/dataproc/resource_dataproc_cluster_test.go @@ -685,14 +685,46 @@ func TestAccDataprocCluster_withLabels(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), CheckDestroy: testAccCheckDataprocClusterDestroy(t), Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withoutLabels(rnd), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckNoResourceAttr("google_dataproc_cluster.with_labels", "labels.%"), + // We don't provide any, but GCP adds three and goog-dataproc-autozone is added internally, so expect 4. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "4"), + ), + }, { Config: testAccDataprocCluster_withLabels(rnd), Check: resource.ComposeTestCheckFunc( testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), - // We only provide one, but GCP adds three and we added goog-dataproc-autozone internally, so expect 5. - resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key1", "value1"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key1", "value1"), + ), + }, + { + Config: testAccDataprocCluster_withLabelsUpdate(rnd), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + // We only provide two, so expect 2. 
+ resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key2", "value2"), + ), + }, + { + Config: testAccDataprocCluster_withoutLabels(rnd), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckNoResourceAttr("google_dataproc_cluster.with_labels", "labels.%"), + // We don't provide any, but GCP adds three and goog-dataproc-autozone is added internally, so expect 4. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "4"), ), }, }, @@ -1129,6 +1161,7 @@ resource "google_container_cluster" "primary" { workload_identity_config { workload_pool = "${data.google_project.project.project_id}.svc.id.goog" } + deletion_protection = false } resource "google_project_iam_binding" "workloadidentity" { @@ -1392,7 +1425,7 @@ resource "google_compute_node_group" "nodes" { name = "test-nodegroup-%s" zone = "us-central1-f" - size = 3 + initial_size = 3 node_template = google_compute_node_template.nodetmpl.self_link } @@ -1718,6 +1751,28 @@ resource "google_dataproc_cluster" "with_labels" { `, rnd) } +func testAccDataprocCluster_withLabelsUpdate(rnd string) string { + return fmt.Sprintf(` +resource "google_dataproc_cluster" "with_labels" { + name = "tf-test-dproc-%s" + region = "us-central1" + + labels = { + key2 = "value2" + } +} +`, rnd) +} + +func testAccDataprocCluster_withoutLabels(rnd string) string { + return fmt.Sprintf(` +resource "google_dataproc_cluster" "with_labels" { + name = "tf-test-dproc-%s" + region = "us-central1" +} +`, rnd) +} + func testAccDataprocCluster_withEndpointConfig(rnd string) string { return fmt.Sprintf(` resource "google_dataproc_cluster" "with_endpoint_config" { diff --git a/google/services/dataproc/resource_dataproc_cluster_upgrade_test.go b/google/services/dataproc/resource_dataproc_cluster_upgrade_test.go new file mode 100644 index 00000000000..30bbb5d3a19 --- /dev/null +++ b/google/services/dataproc/resource_dataproc_cluster_upgrade_test.go @@ -0,0 +1,214 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package dataproc_test + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + + "google.golang.org/api/dataproc/v1" + + "github.com/hashicorp/terraform-provider-google/google/acctest" +) + +// Tests schema version migration by creating a cluster with an old version of the provider (4.65.0) +// and then updating it with the current version the provider. 
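Before the migration tests themselves, a compact illustration of how the three label attributes relate may help when reading the `labels.%` / `effective_labels.%` expectations asserted below. This is a minimal, self-contained sketch and not provider code: the helper name and the concrete `goog-dataproc-*` key names are assumptions, chosen only to match the counts the tests state (one user label, three GCP-added labels, plus `goog-dataproc-autozone` added internally).

```go
// Illustrative only: mirrors the label bookkeeping asserted in the tests below.
// The function name and the system label keys are assumptions, not the
// provider's implementation.
package main

import "fmt"

// splitLabels models the three label views tracked in state:
//   - labels:          only the keys present in the Terraform configuration
//   - terraformLabels: configuration labels merged with provider default labels
//   - effectiveLabels: everything the API reports, including system labels
func splitLabels(configured, providerDefaults, server map[string]string) (labels, terraformLabels, effectiveLabels map[string]string) {
	labels = map[string]string{}
	for k := range configured {
		labels[k] = server[k] // value as stored on the resource
	}
	terraformLabels = map[string]string{}
	for k, v := range providerDefaults {
		terraformLabels[k] = v
	}
	for k, v := range labels {
		terraformLabels[k] = v
	}
	effectiveLabels = server
	return
}

func main() {
	configured := map[string]string{"key1": "value1"}
	// The tests note that GCP adds three labels and goog-dataproc-autozone is
	// added internally; the concrete key names here are guesses.
	server := map[string]string{
		"key1":                       "value1",
		"goog-dataproc-cluster-name": "tf-test-dproc-x",
		"goog-dataproc-cluster-uuid": "some-uuid",
		"goog-dataproc-location":     "us-central1",
		"goog-dataproc-autozone":     "enabled",
	}
	l, tl, el := splitLabels(configured, nil, server)
	fmt.Println(len(l), len(tl), len(el)) // 1 1 5 — matching labels.% and effective_labels.%
}
```

With no labels configured, the same sketch yields `labels.% = 0` and `effective_labels.% = 4`, which is the count the `withoutLabels` steps check for.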
+func TestAccDataprocClusterLabelsMigration_withoutLabels_withoutChanges(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + rnd := acctest.RandString(t, 10) + var cluster dataproc.Cluster + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.65.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withoutLabels(rnd), + ExternalProviders: oldVersion, + }, + { + Config: testAccDataprocCluster_withoutLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckNoResourceAttr("google_dataproc_cluster.with_labels", "labels.%"), + // GCP adds three and goog-dataproc-autozone is added internally, so expect 4. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "4"), + ), + }, + { + Config: testAccDataprocCluster_withLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key1", "value1"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key1", "value1"), + ), + }, + }, + }) +} + +func TestAccDataprocClusterLabelsMigration_withLabels_withoutChanges(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + rnd := acctest.RandString(t, 10) + var cluster dataproc.Cluster + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.65.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withLabels(rnd), + ExternalProviders: oldVersion, + }, + { + Config: testAccDataprocCluster_withLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key1", "value1"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. 
+ resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key1", "value1"), + ), + }, + { + Config: testAccDataprocCluster_withLabelsUpdate(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + // We only provide one, so expect 1. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key2", "value2"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccDataprocClusterLabelsMigration_withUpdate(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + rnd := acctest.RandString(t, 10) + var cluster dataproc.Cluster + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.65.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withoutLabels(rnd), + ExternalProviders: oldVersion, + }, + { + Config: testAccDataprocCluster_withLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key1", "value1"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key1", "value1"), + ), + }, + { + Config: testAccDataprocCluster_withoutLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckNoResourceAttr("google_dataproc_cluster.with_labels", "labels.%"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 4. 
+ resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "4"), + ), + }, + }, + }) +} + +func TestAccDataprocClusterLabelsMigration_withRemoval(t *testing.T) { + acctest.SkipIfVcr(t) + t.Parallel() + + rnd := acctest.RandString(t, 10) + var cluster dataproc.Cluster + oldVersion := map[string]resource.ExternalProvider{ + "google": { + VersionConstraint: "4.65.0", // a version that doesn't separate user defined labels and system labels + Source: "registry.terraform.io/hashicorp/google", + }, + } + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + CheckDestroy: testAccCheckDataprocClusterDestroy(t), + Steps: []resource.TestStep{ + { + Config: testAccDataprocCluster_withLabels(rnd), + ExternalProviders: oldVersion, + }, + { + Config: testAccDataprocCluster_withoutLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckNoResourceAttr("google_dataproc_cluster.with_labels", "labels.%"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 4. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "4"), + ), + }, + { + Config: testAccDataprocCluster_withLabels(rnd), + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + Check: resource.ComposeTestCheckFunc( + testAccCheckDataprocClusterExists(t, "google_dataproc_cluster.with_labels", &cluster), + + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.%", "1"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "labels.key1", "value1"), + // We only provide one, but GCP adds three and goog-dataproc-autozone is added internally, so expect 5. + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.%", "5"), + resource.TestCheckResourceAttr("google_dataproc_cluster.with_labels", "effective_labels.key1", "value1"), + ), + }, + }, + }) +} diff --git a/google/services/dataproc/resource_dataproc_job.go b/google/services/dataproc/resource_dataproc_job.go index 095d1c167be..99dbe19e1bb 100644 --- a/google/services/dataproc/resource_dataproc_job.go +++ b/google/services/dataproc/resource_dataproc_job.go @@ -12,6 +12,7 @@ import ( transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/terraform-provider-google/google/verify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "google.golang.org/api/dataproc/v1" @@ -31,6 +32,11 @@ func ResourceDataprocJob() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "project": { Type: schema.TypeString, @@ -144,10 +150,28 @@ func ResourceDataprocJob() *schema.Resource { }, "labels": { + Type: schema.TypeMap, + Description: `Optional. The labels to associate with this job. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "terraform_labels": { Type: schema.TypeMap, - Description: "Optional. The labels to associate with this job.", - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, }, @@ -231,8 +255,8 @@ func resourceDataprocJobCreate(d *schema.ResourceData, meta interface{}) error { submitReq.Job.Scheduling = expandJobScheduling(config) } - if _, ok := d.GetOk("labels"); ok { - submitReq.Job.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + submitReq.Job.Labels = tpgresource.ExpandEffectiveLabels(d) } if v, ok := d.GetOk("pyspark_config"); ok { @@ -312,9 +336,15 @@ func resourceDataprocJobRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("force_delete", d.Get("force_delete")); err != nil { return fmt.Errorf("Error setting force_delete: %s", err) } - if err := d.Set("labels", job.Labels); err != nil { + if err := tpgresource.SetLabels(job.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(job.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", job.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if err := d.Set("driver_output_resource_uri", job.DriverOutputResourceUri); err != nil { return fmt.Errorf("Error setting driver_output_resource_uri: %s", err) } diff --git a/google/services/dataproc/resource_dataproc_workflow_template.go b/google/services/dataproc/resource_dataproc_workflow_template.go index cd4ee0e9295..301ca01e8b6 100644 --- a/google/services/dataproc/resource_dataproc_workflow_template.go +++ b/google/services/dataproc/resource_dataproc_workflow_template.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,10 @@ func ResourceDataprocWorkflowTemplate() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "jobs": { @@ -88,12 +93,11 @@ func ResourceDataprocWorkflowTemplate() *schema.Resource { Description: "Optional. Timeout duration for the DAG of jobs, expressed in seconds (see [JSON representation of duration](https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes (\"600s\") to 24 hours (\"86400s\"). The timer begins when the first job is submitted. 
If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a [managed cluster](/dataproc/docs/concepts/workflows/using-workflows#configuring_or_selecting_a_cluster), the cluster is deleted.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, ForceNew: true, - Description: "Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label **keys** must contain 1 to 63 characters, and must conform to [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt). Label **values** may be empty, but, if present, must contain 1 to 63 characters, and must conform to [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.", - Elem: &schema.Schema{Type: schema.TypeString}, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "parameters": { @@ -128,6 +132,20 @@ func ResourceDataprocWorkflowTemplate() *schema.Resource { Description: "Output only. The time template was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: "Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label **keys** must contain 1 to 63 characters, and must conform to [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt). Label **values** may be empty, but, if present, must contain 1 to 63 characters, and must conform to [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "update_time": { Type: schema.TypeString, Computed: true, @@ -2090,7 +2108,7 @@ func resourceDataprocWorkflowTemplateCreate(d *schema.ResourceData, meta interfa Name: dcl.String(d.Get("name").(string)), Placement: expandDataprocWorkflowTemplatePlacement(d.Get("placement")), DagTimeout: dcl.String(d.Get("dag_timeout").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Parameters: expandDataprocWorkflowTemplateParametersArray(d.Get("parameters")), Project: dcl.String(project), Version: dcl.Int64OrNil(int64(d.Get("version").(int))), @@ -2146,7 +2164,7 @@ func resourceDataprocWorkflowTemplateRead(d *schema.ResourceData, meta interface Name: dcl.String(d.Get("name").(string)), Placement: expandDataprocWorkflowTemplatePlacement(d.Get("placement")), DagTimeout: dcl.String(d.Get("dag_timeout").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Parameters: expandDataprocWorkflowTemplateParametersArray(d.Get("parameters")), Project: dcl.String(project), Version: dcl.Int64OrNil(int64(d.Get("version").(int))), @@ -2189,8 +2207,8 @@ func resourceDataprocWorkflowTemplateRead(d *schema.ResourceData, meta interface if err = d.Set("dag_timeout", res.DagTimeout); err != nil { return fmt.Errorf("error setting dag_timeout in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("parameters", flattenDataprocWorkflowTemplateParametersArray(res.Parameters)); err != nil { return fmt.Errorf("error setting parameters in state: %s", err) @@ -2204,6 +2222,12 @@ func resourceDataprocWorkflowTemplateRead(d *schema.ResourceData, meta interface if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenDataprocWorkflowTemplateLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } + if err = d.Set("terraform_labels", flattenDataprocWorkflowTemplateTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("update_time", res.UpdateTime); err != nil { return fmt.Errorf("error setting update_time in state: %s", err) } @@ -2224,7 +2248,7 @@ func resourceDataprocWorkflowTemplateDelete(d *schema.ResourceData, meta interfa Name: dcl.String(d.Get("name").(string)), Placement: expandDataprocWorkflowTemplatePlacement(d.Get("placement")), DagTimeout: dcl.String(d.Get("dag_timeout").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Parameters: expandDataprocWorkflowTemplateParametersArray(d.Get("parameters")), Project: dcl.String(project), Version: dcl.Int64OrNil(int64(d.Get("version").(int))), @@ -4082,6 +4106,37 @@ func flattenDataprocWorkflowTemplateParametersValidationValues(obj *dataproc.Wor return 
[]interface{}{transformed} } + +func flattenDataprocWorkflowTemplateLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenDataprocWorkflowTemplateTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + func flattenDataprocWorkflowTemplatePlacementManagedClusterConfigSoftwareConfigOptionalComponentsArray(obj []dataproc.WorkflowTemplatePlacementManagedClusterConfigSoftwareConfigOptionalComponentsEnum) interface{} { if obj == nil { return nil diff --git a/google/services/dataproc/resource_dataproc_workflow_template_test.go b/google/services/dataproc/resource_dataproc_workflow_template_test.go index e72bbc13617..ecc073d6f30 100644 --- a/google/services/dataproc/resource_dataproc_workflow_template_test.go +++ b/google/services/dataproc/resource_dataproc_workflow_template_test.go @@ -38,7 +38,11 @@ func TestAccDataprocWorkflowTemplate_basic(t *testing.T) { { ImportState: true, ImportStateVerify: true, - ResourceName: "google_dataproc_workflow_template.template", + // The "labels" field in the state are decided by the configuration. + // During importing, as the configuration is unavailable, the "labels" field in the state will be empty. + // So add the "labels" to the ImportStateVerifyIgnore list. + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, + ResourceName: "google_dataproc_workflow_template.template", }, }, }) @@ -125,6 +129,11 @@ resource "google_dataproc_workflow_template" "template" { query_file_uri = "someuri" } } + + labels = { + env = "foo" + somekey = "somevalue" + } } `, context) } diff --git a/google/services/dataprocmetastore/data_source_dataproc_metastore_service.go b/google/services/dataprocmetastore/data_source_dataproc_metastore_service.go index 52099820f84..f685a5f4fa9 100644 --- a/google/services/dataprocmetastore/data_source_dataproc_metastore_service.go +++ b/google/services/dataprocmetastore/data_source_dataproc_metastore_service.go @@ -29,5 +29,17 @@ func dataSourceDataprocMetastoreServiceRead(d *schema.ResourceData, meta interfa return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceDataprocMetastoreServiceRead(d, meta) + err = resourceDataprocMetastoreServiceRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/dataprocmetastore/data_source_dataproc_metastore_service_test.go b/google/services/dataprocmetastore/data_source_dataproc_metastore_service_test.go index f41bf381034..d119d87f518 100644 --- a/google/services/dataprocmetastore/data_source_dataproc_metastore_service_test.go +++ b/google/services/dataprocmetastore/data_source_dataproc_metastore_service_test.go @@ -39,6 +39,10 @@ resource "google_dataproc_metastore_service" "my_metastore" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } data "google_dataproc_metastore_service" "my_metastore" { diff --git 
a/google/services/dataprocmetastore/iam_dataproc_metastore_service_generated_test.go b/google/services/dataprocmetastore/iam_dataproc_metastore_service_generated_test.go index cea93b75cd8..55e75f53111 100644 --- a/google/services/dataprocmetastore/iam_dataproc_metastore_service_generated_test.go +++ b/google/services/dataprocmetastore/iam_dataproc_metastore_service_generated_test.go @@ -139,6 +139,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } resource "google_dataproc_metastore_service_iam_member" "foo" { @@ -167,6 +171,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } data "google_iam_policy" "foo" { @@ -210,6 +218,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } data "google_iam_policy" "foo" { @@ -240,6 +252,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } resource "google_dataproc_metastore_service_iam_binding" "foo" { @@ -268,6 +284,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } resource "google_dataproc_metastore_service_iam_binding" "foo" { diff --git a/google/services/dataprocmetastore/resource_dataproc_metastore_service.go b/google/services/dataprocmetastore/resource_dataproc_metastore_service.go index 089600dd2d5..79b0ef5ee6f 100644 --- a/google/services/dataprocmetastore/resource_dataproc_metastore_service.go +++ b/google/services/dataprocmetastore/resource_dataproc_metastore_service.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceDataprocMetastoreService() *schema.Resource { Delete: schema.DefaultTimeout(60 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "service_id": { Type: schema.TypeString, @@ -146,10 +152,13 @@ The mappings override system defaults (some keys cannot be overridden)`, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels for the metastore service.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels for the metastore service. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { Type: schema.TypeString, @@ -293,6 +302,12 @@ There must be at least one IP address available in the subnet's primary range. 
T Computed: true, Description: `A Cloud Storage URI (starting with gs://) that specifies where artifacts related to the metastore service are stored.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "endpoint_uri": { Type: schema.TypeString, Computed: true, @@ -313,6 +328,13 @@ There must be at least one IP address available in the subnet's primary range. T Computed: true, Description: `Additional information about the current state of the metastore service, if available.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -337,12 +359,6 @@ func resourceDataprocMetastoreServiceCreate(d *schema.ResourceData, meta interfa } obj := make(map[string]interface{}) - labelsProp, err := expandDataprocMetastoreServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } networkProp, err := expandDataprocMetastoreServiceNetwork(d.Get("network"), d, config) if err != nil { return err @@ -409,6 +425,12 @@ func resourceDataprocMetastoreServiceCreate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("telemetry_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(telemetryConfigProp)) && (ok || !reflect.DeepEqual(v, telemetryConfigProp)) { obj["telemetryConfig"] = telemetryConfigProp } + labelsProp, err := expandDataprocMetastoreServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataprocMetastoreBasePath}}projects/{{project}}/locations/{{location}}/services?serviceId={{service_id}}") if err != nil { @@ -558,6 +580,12 @@ func resourceDataprocMetastoreServiceRead(d *schema.ResourceData, meta interface if err := d.Set("telemetry_config", flattenDataprocMetastoreServiceTelemetryConfig(res["telemetryConfig"], d, config)); err != nil { return fmt.Errorf("Error reading Service: %s", err) } + if err := d.Set("terraform_labels", flattenDataprocMetastoreServiceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Service: %s", err) + } + if err := d.Set("effective_labels", flattenDataprocMetastoreServiceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Service: %s", err) + } return nil } @@ -578,12 +606,6 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandDataprocMetastoreServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } portProp, err := 
expandDataprocMetastoreServicePort(d.Get("port"), d, config) if err != nil { return err @@ -626,6 +648,12 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("telemetry_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, telemetryConfigProp)) { obj["telemetryConfig"] = telemetryConfigProp } + labelsProp, err := expandDataprocMetastoreServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DataprocMetastoreBasePath}}projects/{{project}}/locations/{{location}}/services/{{service_id}}") if err != nil { @@ -635,10 +663,6 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa log.Printf("[DEBUG] Updating Service %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("port") { updateMask = append(updateMask, "port") } @@ -666,6 +690,10 @@ func resourceDataprocMetastoreServiceUpdate(d *schema.ResourceData, meta interfa if d.HasChange("telemetry_config") { updateMask = append(updateMask, "telemetryConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -761,9 +789,9 @@ func resourceDataprocMetastoreServiceDelete(d *schema.ResourceData, meta interfa func resourceDataprocMetastoreServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -783,7 +811,18 @@ func flattenDataprocMetastoreServiceName(v interface{}, d *schema.ResourceData, } func flattenDataprocMetastoreServiceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDataprocMetastoreServiceNetwork(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1039,15 +1078,23 @@ func flattenDataprocMetastoreServiceTelemetryConfigLogFormat(v interface{}, d *s return v } -func expandDataprocMetastoreServiceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenDataprocMetastoreServiceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := 
d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenDataprocMetastoreServiceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandDataprocMetastoreServiceNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -1351,3 +1398,14 @@ func expandDataprocMetastoreServiceTelemetryConfig(v interface{}, d tpgresource. func expandDataprocMetastoreServiceTelemetryConfigLogFormat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDataprocMetastoreServiceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/dataprocmetastore/resource_dataproc_metastore_service_generated_test.go b/google/services/dataprocmetastore/resource_dataproc_metastore_service_generated_test.go index e14b72aabac..c919bf59274 100644 --- a/google/services/dataprocmetastore/resource_dataproc_metastore_service_generated_test.go +++ b/google/services/dataprocmetastore/resource_dataproc_metastore_service_generated_test.go @@ -49,7 +49,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceBasicExample(t *tes ResourceName: "google_dataproc_metastore_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -71,6 +71,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } `, context) } @@ -94,7 +98,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceCmekTestExample(t * ResourceName: "google_dataproc_metastore_service.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -165,7 +169,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceTelemetryExample(t ResourceName: "google_dataproc_metastore_service.telemetry", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -209,7 +213,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceDpms2Example(t *tes ResourceName: "google_dataproc_metastore_service.dpms2", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -255,7 +259,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceDpms2ScalingFactorE ResourceName: "google_dataproc_metastore_service.dpms2_scaling_factor", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) 
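The same flatten pattern recurs for every resource touched in this diff: `labels` and `terraform_labels` are re-derived by intersecting the keys already tracked in state with the full label map returned by the API, while `effective_labels` stores the API map verbatim. A condensed, hedged sketch of that shared shape follows; the names are generic placeholders rather than any one resource's generated functions.

```go
// Condensed sketch of the per-resource flatten helpers in this diff.
// Function and variable names are illustrative, not provider code.
package main

import "fmt"

// userFacingLabels mirrors flatten<Resource>Labels / ...TerraformLabels:
// keep only the keys already present in the given state field, taking the
// values from the full label map the API returned.
func userFacingLabels(apiLabels, tracked map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	transformed := make(map[string]interface{})
	for k := range tracked {
		transformed[k] = apiLabels[k]
	}
	return transformed
}

// effectiveLabels mirrors flatten<Resource>EffectiveLabels: the API map is
// stored as-is, system labels included.
func effectiveLabels(apiLabels map[string]interface{}) map[string]interface{} {
	return apiLabels
}

func main() {
	api := map[string]interface{}{"env": "test", "goog-managed": "yes"}
	cfg := map[string]interface{}{"env": "test"}
	fmt.Println(userFacingLabels(api, cfg)) // map[env:test]
	fmt.Println(effectiveLabels(api))       // map[env:test goog-managed:yes]
}
```

On the write path the diff is the mirror image of this: Create and Update expand `effective_labels` into the request's `labels` field and key the update mask off `d.HasChange("effective_labels")`, so provider default labels and user labels reach the API together.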
@@ -301,7 +305,7 @@ func TestAccDataprocMetastoreService_dataprocMetastoreServiceDpms2ScalingFactorL ResourceName: "google_dataproc_metastore_service.dpms2_scaling_factor_lt1", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"service_id", "location"}, + ImportStateVerifyIgnore: []string{"service_id", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datastore/resource_datastore_index.go b/google/services/datastore/resource_datastore_index.go index 59a2283dc83..0df6f65cddf 100644 --- a/google/services/datastore/resource_datastore_index.go +++ b/google/services/datastore/resource_datastore_index.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceDatastoreIndex() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "kind": { Type: schema.TypeString, @@ -310,9 +315,9 @@ func resourceDatastoreIndexDelete(d *schema.ResourceData, meta interface{}) erro func resourceDatastoreIndexImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/indexes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/indexes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/datastream/resource_datastream_connection_profile.go b/google/services/datastream/resource_datastream_connection_profile.go index 6f99d42a04d..3c6dfd09ff8 100644 --- a/google/services/datastream/resource_datastream_connection_profile.go +++ b/google/services/datastream/resource_datastream_connection_profile.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceDatastreamConnectionProfile() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "connection_profile_id": { Type: schema.TypeString, @@ -140,10 +146,13 @@ func ResourceDatastreamConnectionProfile() *schema.Resource { ExactlyOneOf: []string{"oracle_profile", "gcs_profile", "mysql_profile", "bigquery_profile", "postgresql_profile"}, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "mysql_profile": { Type: schema.TypeList, @@ -328,11 +337,24 @@ If this field is used then the 'client_certificate' and the }, ConflictsWith: []string{"forward_ssh_connectivity"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, Description: `The resource's name.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -352,12 +374,6 @@ func resourceDatastreamConnectionProfileCreate(d *schema.ResourceData, meta inte } obj := make(map[string]interface{}) - labelsProp, err := expandDatastreamConnectionProfileLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } displayNameProp, err := expandDatastreamConnectionProfileDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -406,6 +422,12 @@ func resourceDatastreamConnectionProfileCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("private_connectivity"); !tpgresource.IsEmptyValue(reflect.ValueOf(privateConnectivityProp)) && (ok || !reflect.DeepEqual(v, privateConnectivityProp)) { obj["privateConnectivity"] = privateConnectivityProp } + labelsProp, err := expandDatastreamConnectionProfileEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DatastreamBasePath}}projects/{{project}}/locations/{{location}}/connectionProfiles?connectionProfileId={{connection_profile_id}}") if err != nil { @@ -545,6 +567,12 @@ func resourceDatastreamConnectionProfileRead(d *schema.ResourceData, meta interf if err := d.Set("private_connectivity", flattenDatastreamConnectionProfilePrivateConnectivity(res["privateConnectivity"], d, config)); err != nil { return fmt.Errorf("Error reading ConnectionProfile: %s", err) } + if err := d.Set("terraform_labels", flattenDatastreamConnectionProfileTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectionProfile: %s", err) + } + if err := d.Set("effective_labels", flattenDatastreamConnectionProfileEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectionProfile: %s", err) + } return nil } @@ -565,12 +593,6 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandDatastreamConnectionProfileLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || 
!reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } displayNameProp, err := expandDatastreamConnectionProfileDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -619,6 +641,12 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("private_connectivity"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, privateConnectivityProp)) { obj["privateConnectivity"] = privateConnectivityProp } + labelsProp, err := expandDatastreamConnectionProfileEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DatastreamBasePath}}projects/{{project}}/locations/{{location}}/connectionProfiles/{{connection_profile_id}}") if err != nil { @@ -628,10 +656,6 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte log.Printf("[DEBUG] Updating ConnectionProfile %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("display_name") { updateMask = append(updateMask, "displayName") } @@ -663,6 +687,10 @@ func resourceDatastreamConnectionProfileUpdate(d *schema.ResourceData, meta inte if d.HasChange("private_connectivity") { updateMask = append(updateMask, "privateConnectivity") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -758,9 +786,9 @@ func resourceDatastreamConnectionProfileDelete(d *schema.ResourceData, meta inte func resourceDatastreamConnectionProfileImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/connectionProfiles/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/connectionProfiles/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -780,7 +808,18 @@ func flattenDatastreamConnectionProfileName(v interface{}, d *schema.ResourceDat } func flattenDatastreamConnectionProfileLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDatastreamConnectionProfileDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1101,15 +1140,23 @@ func flattenDatastreamConnectionProfilePrivateConnectivityPrivateConnection(v in return v } -func expandDatastreamConnectionProfileLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenDatastreamConnectionProfileTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - 
return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenDatastreamConnectionProfileEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandDatastreamConnectionProfileDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -1547,3 +1594,14 @@ func expandDatastreamConnectionProfilePrivateConnectivity(v interface{}, d tpgre func expandDatastreamConnectionProfilePrivateConnectivityPrivateConnection(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDatastreamConnectionProfileEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/datastream/resource_datastream_connection_profile_generated_test.go b/google/services/datastream/resource_datastream_connection_profile_generated_test.go index 7df4dff2c24..2423616e597 100644 --- a/google/services/datastream/resource_datastream_connection_profile_generated_test.go +++ b/google/services/datastream/resource_datastream_connection_profile_generated_test.go @@ -49,7 +49,7 @@ func TestAccDatastreamConnectionProfile_datastreamConnectionProfileBasicExample( ResourceName: "google_datastream_connection_profile.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -89,7 +89,7 @@ func TestAccDatastreamConnectionProfile_datastreamConnectionProfileBigqueryPriva ResourceName: "google_datastream_connection_profile.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -149,7 +149,7 @@ func TestAccDatastreamConnectionProfile_datastreamConnectionProfileFullExample(t ResourceName: "google_datastream_connection_profile.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "forward_ssh_connectivity.0.password"}, + ImportStateVerifyIgnore: []string{"connection_profile_id", "location", "forward_ssh_connectivity.0.password", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datastream/resource_datastream_private_connection.go b/google/services/datastream/resource_datastream_private_connection.go index fc1accc015b..f4d4c26e291 100644 --- a/google/services/datastream/resource_datastream_private_connection.go +++ b/google/services/datastream/resource_datastream_private_connection.go @@ -24,6 +24,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -78,6 +79,11 @@ func ResourceDatastreamPrivateConnection() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -123,10 +129,20 @@ Format: projects/{project}/global/{networks}/{name}`, }, }, "labels": { + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Labels. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, ForceNew: true, - Description: `Labels.`, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "error": { @@ -159,6 +175,13 @@ Format: projects/{project}/global/{networks}/{name}`, Computed: true, Description: `State of the PrivateConnection.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -178,12 +201,6 @@ func resourceDatastreamPrivateConnectionCreate(d *schema.ResourceData, meta inte } obj := make(map[string]interface{}) - labelsProp, err := expandDatastreamPrivateConnectionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } displayNameProp, err := expandDatastreamPrivateConnectionDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -196,6 +213,12 @@ func resourceDatastreamPrivateConnectionCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("vpc_peering_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(vpcPeeringConfigProp)) && (ok || !reflect.DeepEqual(v, vpcPeeringConfigProp)) { obj["vpcPeeringConfig"] = vpcPeeringConfigProp } + labelsProp, err := expandDatastreamPrivateConnectionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DatastreamBasePath}}projects/{{project}}/locations/{{location}}/privateConnections?privateConnectionId={{private_connection_id}}") if err != nil { @@ -327,6 +350,12 @@ func resourceDatastreamPrivateConnectionRead(d *schema.ResourceData, meta interf if err := d.Set("vpc_peering_config", flattenDatastreamPrivateConnectionVpcPeeringConfig(res["vpcPeeringConfig"], d, config)); err != nil { return fmt.Errorf("Error reading PrivateConnection: %s", err) } + if err := d.Set("terraform_labels", flattenDatastreamPrivateConnectionTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading PrivateConnection: %s", err) + } + if err := 
d.Set("effective_labels", flattenDatastreamPrivateConnectionEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading PrivateConnection: %s", err) + } return nil } @@ -387,9 +416,9 @@ func resourceDatastreamPrivateConnectionDelete(d *schema.ResourceData, meta inte func resourceDatastreamPrivateConnectionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/privateConnections/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/privateConnections/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -413,7 +442,18 @@ func flattenDatastreamPrivateConnectionName(v interface{}, d *schema.ResourceDat } func flattenDatastreamPrivateConnectionLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDatastreamPrivateConnectionDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -470,15 +510,23 @@ func flattenDatastreamPrivateConnectionVpcPeeringConfigSubnet(v interface{}, d * return v } -func expandDatastreamPrivateConnectionLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenDatastreamPrivateConnectionTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenDatastreamPrivateConnectionEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandDatastreamPrivateConnectionDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -518,3 +566,14 @@ func expandDatastreamPrivateConnectionVpcPeeringConfigVpc(v interface{}, d tpgre func expandDatastreamPrivateConnectionVpcPeeringConfigSubnet(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandDatastreamPrivateConnectionEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/datastream/resource_datastream_private_connection_generated_test.go b/google/services/datastream/resource_datastream_private_connection_generated_test.go index fe5bfc6cbb5..db47ff23818 100644 --- a/google/services/datastream/resource_datastream_private_connection_generated_test.go +++ 
b/google/services/datastream/resource_datastream_private_connection_generated_test.go @@ -49,7 +49,7 @@ func TestAccDatastreamPrivateConnection_datastreamPrivateConnectionFullExample(t ResourceName: "google_datastream_private_connection.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"private_connection_id", "location"}, + ImportStateVerifyIgnore: []string{"private_connection_id", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datastream/resource_datastream_stream.go b/google/services/datastream/resource_datastream_stream.go index dfc15204af8..f18fcfdef92 100644 --- a/google/services/datastream/resource_datastream_stream.go +++ b/google/services/datastream/resource_datastream_stream.go @@ -120,6 +120,8 @@ func ResourceDatastreamStream() *schema.Resource { CustomizeDiff: customdiff.All( resourceDatastreamStreamCustomDiff, + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -1266,9 +1268,18 @@ https://www.postgresql.org/docs/current/datatype.html`, will be encrypted using an internal Stream-specific encryption key provisioned through KMS.`, }, "labels": { + Type: schema.TypeMap, + Optional: true, + Description: `Labels. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: `Labels.`, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { @@ -1281,6 +1292,13 @@ will be encrypted using an internal Stream-specific encryption key provisioned t Computed: true, Description: `The state of the stream.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "desired_state": { Type: schema.TypeString, Optional: true, @@ -1306,12 +1324,6 @@ func resourceDatastreamStreamCreate(d *schema.ResourceData, meta interface{}) er } obj := make(map[string]interface{}) - labelsProp, err := expandDatastreamStreamLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } displayNameProp, err := expandDatastreamStreamDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -1348,6 +1360,12 @@ func resourceDatastreamStreamCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("customer_managed_encryption_key"); !tpgresource.IsEmptyValue(reflect.ValueOf(customerManagedEncryptionKeyProp)) && (ok || !reflect.DeepEqual(v, customerManagedEncryptionKeyProp)) { obj["customerManagedEncryptionKey"] = customerManagedEncryptionKeyProp } + labelsProp, err := expandDatastreamStreamEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) 
{ + obj["labels"] = labelsProp + } obj, err = resourceDatastreamStreamEncoder(d, meta, obj) if err != nil { @@ -1506,6 +1524,12 @@ func resourceDatastreamStreamRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("customer_managed_encryption_key", flattenDatastreamStreamCustomerManagedEncryptionKey(res["customerManagedEncryptionKey"], d, config)); err != nil { return fmt.Errorf("Error reading Stream: %s", err) } + if err := d.Set("terraform_labels", flattenDatastreamStreamTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Stream: %s", err) + } + if err := d.Set("effective_labels", flattenDatastreamStreamEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Stream: %s", err) + } return nil } @@ -1526,12 +1550,6 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandDatastreamStreamLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } displayNameProp, err := expandDatastreamStreamDisplayName(d.Get("display_name"), d, config) if err != nil { return err @@ -1562,6 +1580,12 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("backfill_none"); ok || !reflect.DeepEqual(v, backfillNoneProp) { obj["backfillNone"] = backfillNoneProp } + labelsProp, err := expandDatastreamStreamEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceDatastreamStreamEncoder(d, meta, obj) if err != nil { @@ -1576,10 +1600,6 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er log.Printf("[DEBUG] Updating Stream %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("display_name") { updateMask = append(updateMask, "displayName") } @@ -1599,6 +1619,10 @@ func resourceDatastreamStreamUpdate(d *schema.ResourceData, meta interface{}) er if d.HasChange("backfill_none") { updateMask = append(updateMask, "backfillNone") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -1713,9 +1737,9 @@ func resourceDatastreamStreamDelete(d *schema.ResourceData, meta interface{}) er func resourceDatastreamStreamImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/streams/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/streams/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1743,7 +1767,18 @@ func flattenDatastreamStreamName(v interface{}, d *schema.ResourceData, config * } func flattenDatastreamStreamLabels(v 
interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDatastreamStreamDisplayName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -3578,15 +3613,23 @@ func flattenDatastreamStreamCustomerManagedEncryptionKey(v interface{}, d *schem return v } -func expandDatastreamStreamLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenDatastreamStreamTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenDatastreamStreamEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandDatastreamStreamDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -5878,6 +5921,17 @@ func expandDatastreamStreamCustomerManagedEncryptionKey(v interface{}, d tpgreso return v, nil } +func expandDatastreamStreamEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceDatastreamStreamEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { if d.HasChange("desired_state") { obj["state"] = d.Get("desired_state") diff --git a/google/services/datastream/resource_datastream_stream_generated_test.go b/google/services/datastream/resource_datastream_stream_generated_test.go index a59d11417fc..7a08f5bd0ef 100644 --- a/google/services/datastream/resource_datastream_stream_generated_test.go +++ b/google/services/datastream/resource_datastream_stream_generated_test.go @@ -55,7 +55,7 @@ func TestAccDatastreamStream_datastreamStreamBasicExample(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -218,7 +218,7 @@ func TestAccDatastreamStream_datastreamStreamFullExample(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -448,7 +448,7 @@ func TestAccDatastreamStream_datastreamStreamPostgresqlBigqueryDatasetIdExample( ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location"}, + 
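// Illustrative sketch, separate from the generated code above: the rewritten label
// flatteners no longer mirror the API response verbatim. "labels" and "terraform_labels"
// are narrowed to the keys already tracked in configuration/state, so labels attached by
// other clients or services stop producing diffs, while "effective_labels" still exposes
// the full server-side map. A dependency-free version of that filtering step:
package main

import "fmt"

// filterToKnownKeys keeps only the keys the configuration (or provider defaults) already
// manages, mirroring the shape of the flatten*Labels/flatten*TerraformLabels helpers.
func filterToKnownKeys(apiLabels, known map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	transformed := make(map[string]interface{})
	for k := range known {
		transformed[k] = apiLabels[k]
	}
	return transformed
}

func main() {
	api := map[string]interface{}{"env": "prod", "added-by-another-service": "x"}
	cfg := map[string]interface{}{"env": "prod"}

	fmt.Println(filterToKnownKeys(api, cfg)) // map[env:prod] -> what "labels" reports
	fmt.Println(api)                         // full map      -> what "effective_labels" reports
}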
ImportStateVerifyIgnore: []string{"stream_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -591,7 +591,7 @@ func TestAccDatastreamStream_datastreamStreamBigqueryExample(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/datastream/resource_datastream_stream_test.go b/google/services/datastream/resource_datastream_stream_test.go index 92ddb2b6e77..ec9eea0e99c 100644 --- a/google/services/datastream/resource_datastream_stream_test.go +++ b/google/services/datastream/resource_datastream_stream_test.go @@ -35,7 +35,7 @@ func TestAccDatastreamStream_update(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state", "labels", "terraform_labels"}, }, { Config: testAccDatastreamStream_datastreamStreamBasicUpdate(context, "RUNNING", true), @@ -45,7 +45,7 @@ func TestAccDatastreamStream_update(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state", "labels", "terraform_labels"}, }, { Config: testAccDatastreamStream_datastreamStreamBasicUpdate(context, "PAUSED", true), @@ -55,7 +55,7 @@ func TestAccDatastreamStream_update(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state", "labels", "terraform_labels"}, }, { Config: testAccDatastreamStream_datastreamStreamBasicUpdate(context, "RUNNING", true), @@ -65,7 +65,7 @@ func TestAccDatastreamStream_update(t *testing.T) { ResourceName: "google_datastream_stream.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state"}, + ImportStateVerifyIgnore: []string{"stream_id", "location", "desired_state", "labels", "terraform_labels"}, }, { // Disable prevent_destroy diff --git a/google/services/deploymentmanager/resource_deployment_manager_deployment.go b/google/services/deploymentmanager/resource_deployment_manager_deployment.go index f93c964ae97..8f2b6921b40 100644 --- a/google/services/deploymentmanager/resource_deployment_manager_deployment.go +++ b/google/services/deploymentmanager/resource_deployment_manager_deployment.go @@ -76,6 +76,7 @@ func ResourceDeploymentManagerDeployment() *schema.Resource { CustomizeDiff: customdiff.All( customDiffDeploymentManagerDeployment, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -591,9 +592,9 @@ func resourceDeploymentManagerDeploymentDelete(d *schema.ResourceData, meta inte func resourceDeploymentManagerDeploymentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/deployments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/deployments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + 
"^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dialogflow/resource_dialogflow_agent.go b/google/services/dialogflow/resource_dialogflow_agent.go index 4b672e4ccfc..582abcf71d1 100644 --- a/google/services/dialogflow/resource_dialogflow_agent.go +++ b/google/services/dialogflow/resource_dialogflow_agent.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -48,6 +49,10 @@ func ResourceDialogflowAgent() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "default_language_code": { Type: schema.TypeString, @@ -508,7 +513,7 @@ func resourceDialogflowAgentDelete(d *schema.ResourceData, meta interface{}) err func resourceDialogflowAgentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dialogflow/resource_dialogflow_entity_type.go b/google/services/dialogflow/resource_dialogflow_entity_type.go index 4926a45cda2..5cbf5477ce3 100644 --- a/google/services/dialogflow/resource_dialogflow_entity_type.go +++ b/google/services/dialogflow/resource_dialogflow_entity_type.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceDialogflowEntityType() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dialogflow/resource_dialogflow_fulfillment.go b/google/services/dialogflow/resource_dialogflow_fulfillment.go index 6ab02aa5088..a5a9716f3c3 100644 --- a/google/services/dialogflow/resource_dialogflow_fulfillment.go +++ b/google/services/dialogflow/resource_dialogflow_fulfillment.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceDialogflowFulfillment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dialogflow/resource_dialogflow_intent.go b/google/services/dialogflow/resource_dialogflow_intent.go index 47d2b032449..0bc33444e2c 100644 --- a/google/services/dialogflow/resource_dialogflow_intent.go +++ b/google/services/dialogflow/resource_dialogflow_intent.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceDialogflowIntent() *schema.Resource { Delete: schema.DefaultTimeout(20 
* time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_agent.go b/google/services/dialogflowcx/resource_dialogflow_cx_agent.go index 2adc40e7eae..cb135836888 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_agent.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_agent.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -48,6 +49,10 @@ func ResourceDialogflowCXAgent() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "default_language_code": { Type: schema.TypeString, @@ -536,9 +541,9 @@ func resourceDialogflowCXAgentDelete(d *schema.ResourceData, meta interface{}) e func resourceDialogflowCXAgentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/agents/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/agents/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_environment.go b/google/services/dialogflowcx/resource_dialogflow_cx_environment.go index 820df76ab11..5afb94aebab 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_environment.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_environment.go @@ -10,6 +10,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -34,6 +35,10 @@ func ResourceDialogflowCXEnvironment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_intent.go b/google/services/dialogflowcx/resource_dialogflow_cx_intent.go index fabf3f26825..a48a97d3c3d 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_intent.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_intent.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -49,6 +50,10 @@ func ResourceDialogflowCXIntent() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -73,7 +78,11 @@ Adding training phrases to fallback intent is useful in the case of requests tha Optional: true, Description: `The key/value metadata to label an intent. Labels can contain lowercase letters, digits and the symbols '-' and '_'. 
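// Illustrative sketch, separate from the generated code above: the CustomizeDiff wiring
// being added across these resources. customdiff.All and schema.CustomizeDiffFunc are the
// real terraform-plugin-sdk/v2 helpers; the two stub functions below only stand in for
// tpgresource.SetLabelsDiff and tpgresource.DefaultProviderProject, whose internals are
// not shown in this diff.
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func setLabelsDiffStub(_ context.Context, _ *schema.ResourceDiff, _ interface{}) error {
	return nil // stand-in: the real helper computes terraform_labels/effective_labels at plan time
}

func defaultProviderProjectStub(_ context.Context, _ *schema.ResourceDiff, _ interface{}) error {
	return nil // stand-in: the real helper fills in the provider-level default project
}

// exampleResource shows the composition pattern: customdiff.All runs each function in
// order during plan, which is how both behaviours attach to a single resource.
func exampleResource() *schema.Resource {
	return &schema.Resource{
		CustomizeDiff: customdiff.All(
			setLabelsDiffStub,
			defaultProviderProjectStub,
		),
		Schema: map[string]*schema.Schema{},
	}
}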
International characters are allowed, including letters from unicase alphabets. Keys must start with a letter. Keys and values can be no longer than 63 characters and no more than 128 bytes. Prefix "sys-" is reserved for Dialogflow defined labels. Currently allowed Dialogflow defined labels include: * sys-head * sys-contextual The above labels do not require value. "sys-head" means the intent is a head intent. "sys.contextual" means the intent is a contextual intent. -An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "language_code": { @@ -173,12 +182,25 @@ Part.text is set to a part of the phrase that you want to annotate, and the para }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, Description: `The unique identifier of the intent. Format: projects//locations//agents//intents/.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, UseJSONNumber: true, } @@ -222,18 +244,18 @@ func resourceDialogflowCXIntentCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("is_fallback"); !tpgresource.IsEmptyValue(reflect.ValueOf(isFallbackProp)) && (ok || !reflect.DeepEqual(v, isFallbackProp)) { obj["isFallback"] = isFallbackProp } - labelsProp, err := expandDialogflowCXIntentLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandDialogflowCXIntentDescription(d.Get("description"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } + labelsProp, err := expandDialogflowCXIntentEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } languageCodeProp, err := expandDialogflowCXIntentLanguageCode(d.Get("language_code"), d, config) if err != nil { return err @@ -364,6 +386,12 @@ func resourceDialogflowCXIntentRead(d *schema.ResourceData, meta interface{}) er if err := d.Set("description", flattenDialogflowCXIntentDescription(res["description"], d, config)); err != nil { return fmt.Errorf("Error reading Intent: %s", err) } + if err := d.Set("terraform_labels", flattenDialogflowCXIntentTerraformLabels(res["labels"], d, config)); err != nil { + return 
fmt.Errorf("Error reading Intent: %s", err) + } + if err := d.Set("effective_labels", flattenDialogflowCXIntentEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Intent: %s", err) + } if err := d.Set("language_code", flattenDialogflowCXIntentLanguageCode(res["languageCode"], d, config)); err != nil { return fmt.Errorf("Error reading Intent: %s", err) } @@ -411,18 +439,18 @@ func resourceDialogflowCXIntentUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("is_fallback"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, isFallbackProp)) { obj["isFallback"] = isFallbackProp } - labelsProp, err := expandDialogflowCXIntentLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandDialogflowCXIntentDescription(d.Get("description"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } + labelsProp, err := expandDialogflowCXIntentEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DialogflowCXBasePath}}{{parent}}/intents/{{name}}") if err != nil { @@ -452,13 +480,13 @@ func resourceDialogflowCXIntentUpdate(d *schema.ResourceData, meta interface{}) updateMask = append(updateMask, "isFallback") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("description") { updateMask = append(updateMask, "description") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -718,13 +746,43 @@ func flattenDialogflowCXIntentIsFallback(v interface{}, d *schema.ResourceData, } func flattenDialogflowCXIntentLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDialogflowCXIntentDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } +func flattenDialogflowCXIntentTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenDialogflowCXIntentEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func flattenDialogflowCXIntentLanguageCode(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { return v } @@ -881,7 +939,11 @@ func expandDialogflowCXIntentIsFallback(v interface{}, d tpgresource.TerraformRe return v, nil } -func expandDialogflowCXIntentLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandDialogflowCXIntentDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandDialogflowCXIntentEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -892,10 +954,6 @@ func expandDialogflowCXIntentLabels(v interface{}, d tpgresource.TerraformResour return m, nil } -func expandDialogflowCXIntentDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - func expandDialogflowCXIntentLanguageCode(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_intent_generated_test.go b/google/services/dialogflowcx/resource_dialogflow_cx_intent_generated_test.go index b8d27ddc323..71c0686c58b 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_intent_generated_test.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_intent_generated_test.go @@ -49,7 +49,7 @@ func TestAccDialogflowCXIntent_dialogflowcxIntentFullExample(t *testing.T) { ResourceName: "google_dialogflow_cx_intent.basic_intent", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"parent"}, + ImportStateVerifyIgnore: []string{"parent", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_security_settings.go b/google/services/dialogflowcx/resource_dialogflow_cx_security_settings.go index 9a01c20a586..818b6dcc42e 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_security_settings.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_security_settings.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceDialogflowCXSecuritySettings() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -579,9 +584,9 @@ func resourceDialogflowCXSecuritySettingsDelete(d *schema.ResourceData, meta int func resourceDialogflowCXSecuritySettingsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/securitySettings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/securitySettings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_test_case.go b/google/services/dialogflowcx/resource_dialogflow_cx_test_case.go index c3cf37732a7..e75f15f41c0 100644 --- 
a/google/services/dialogflowcx/resource_dialogflow_cx_test_case.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_test_case.go @@ -911,7 +911,7 @@ func resourceDialogflowCXTestCaseDelete(d *schema.ResourceData, meta interface{} func resourceDialogflowCXTestCaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/testCases/(?P[^/]+)", + "^(?P.+)/testCases/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dialogflowcx/resource_dialogflow_cx_version.go b/google/services/dialogflowcx/resource_dialogflow_cx_version.go index 57fdc6ac7b1..7ebf9e29845 100644 --- a/google/services/dialogflowcx/resource_dialogflow_cx_version.go +++ b/google/services/dialogflowcx/resource_dialogflow_cx_version.go @@ -10,6 +10,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -35,6 +36,10 @@ func ResourceDialogflowCXVersion() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/dialogflowcx/resource_dialogflowcx_intent_test.go b/google/services/dialogflowcx/resource_dialogflowcx_intent_test.go index ff031e5d723..a29d7a87887 100644 --- a/google/services/dialogflowcx/resource_dialogflowcx_intent_test.go +++ b/google/services/dialogflowcx/resource_dialogflowcx_intent_test.go @@ -27,17 +27,19 @@ func TestAccDialogflowCXIntent_update(t *testing.T) { Config: testAccDialogflowCXIntent_basic(context), }, { - ResourceName: "google_dialogflow_cx_intent.my_intent", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dialogflow_cx_intent.my_intent", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDialogflowCXIntent_full(context), }, { - ResourceName: "google_dialogflow_cx_intent.my_intent", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dialogflow_cx_intent.my_intent", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/dns/data_source_dns_keys.go b/google/services/dns/data_source_dns_keys.go index a48b30f7c99..41849645004 100644 --- a/google/services/dns/data_source_dns_keys.go +++ b/google/services/dns/data_source_dns_keys.go @@ -18,7 +18,6 @@ import ( "github.com/hashicorp/terraform-provider-google/google/fwmodels" "github.com/hashicorp/terraform-provider-google/google/fwresource" "github.com/hashicorp/terraform-provider-google/google/fwtransport" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" ) // Ensure the implementation satisfies the expected interfaces @@ -181,9 +180,7 @@ func (d *GoogleDnsKeysDataSource) Read(ctx context.Context, req datasource.ReadR clientResp, err := d.client.DnsKeys.List(data.Project.ValueString(), data.ManagedZone.ValueString()).Do() if err != nil { - if !transport_tpg.IsGoogleApiErrorWithCode(err, 404) { - resp.Diagnostics.AddError(fmt.Sprintf("Error when reading or editing dataSourceDnsKeys"), err.Error()) - } + resp.Diagnostics.AddError(fmt.Sprintf("Error when reading or editing 
dataSourceDnsKeys"), err.Error()) // Save data into Terraform state resp.Diagnostics.Append(resp.State.Set(ctx, &data)...) return diff --git a/google/services/dns/resource_dns_managed_zone.go b/google/services/dns/resource_dns_managed_zone.go index 93ee306228b..14fdc72c2a7 100644 --- a/google/services/dns/resource_dns_managed_zone.go +++ b/google/services/dns/resource_dns_managed_zone.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "google.golang.org/api/dns/v1" @@ -51,6 +52,11 @@ func ResourceDNSManagedZone() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dns_name": { Type: schema.TypeString, @@ -193,10 +199,14 @@ one target is given.`, }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this ManagedZone.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this ManagedZone. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "peering_config": { Type: schema.TypeList, @@ -293,6 +303,12 @@ while private zones are visible only to Virtual Private Cloud resources. Default Description: `The time that this resource was created on the server. 
This is in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "managed_zone_id": { Type: schema.TypeInt, Computed: true, @@ -307,6 +323,13 @@ defined by the server`, Type: schema.TypeString, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "force_destroy": { Type: schema.TypeBool, Optional: true, @@ -391,12 +414,6 @@ func resourceDNSManagedZoneCreate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(nameProp)) && (ok || !reflect.DeepEqual(v, nameProp)) { obj["name"] = nameProp } - labelsProp, err := expandDNSManagedZoneLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } visibilityProp, err := expandDNSManagedZoneVisibility(d.Get("visibility"), d, config) if err != nil { return err @@ -427,6 +444,12 @@ func resourceDNSManagedZoneCreate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("cloud_logging_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(cloudLoggingConfigProp)) && (ok || !reflect.DeepEqual(v, cloudLoggingConfigProp)) { obj["cloudLoggingConfig"] = cloudLoggingConfigProp } + labelsProp, err := expandDNSManagedZoneEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{DNSBasePath}}projects/{{project}}/managedZones") if err != nil { @@ -557,6 +580,12 @@ func resourceDNSManagedZoneRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("cloud_logging_config", flattenDNSManagedZoneCloudLoggingConfig(res["cloudLoggingConfig"], d, config)); err != nil { return fmt.Errorf("Error reading ManagedZone: %s", err) } + if err := d.Set("terraform_labels", flattenDNSManagedZoneTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ManagedZone: %s", err) + } + if err := d.Set("effective_labels", flattenDNSManagedZoneEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ManagedZone: %s", err) + } return nil } @@ -601,12 +630,6 @@ func resourceDNSManagedZoneUpdate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, nameProp)) { obj["name"] = nameProp } - labelsProp, err := expandDNSManagedZoneLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } visibilityProp, err := expandDNSManagedZoneVisibility(d.Get("visibility"), d, config) if err != nil { return err @@ -637,6 +660,12 @@ func 
resourceDNSManagedZoneUpdate(d *schema.ResourceData, meta interface{}) erro } else if v, ok := d.GetOkExists("cloud_logging_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, cloudLoggingConfigProp)) { obj["cloudLoggingConfig"] = cloudLoggingConfigProp } + labelsProp, err := expandDNSManagedZoneEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceDNSManagedZoneUpdateEncoder(d, meta, obj) if err != nil { @@ -792,9 +821,9 @@ func resourceDNSManagedZoneDelete(d *schema.ResourceData, meta interface{}) erro func resourceDNSManagedZoneImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/managedZones/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/managedZones/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -933,7 +962,18 @@ func flattenDNSManagedZoneCreationTime(v interface{}, d *schema.ResourceData, co } func flattenDNSManagedZoneLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenDNSManagedZoneVisibility(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1109,6 +1149,25 @@ func flattenDNSManagedZoneCloudLoggingConfigEnableLogging(v interface{}, d *sche return v } +func flattenDNSManagedZoneTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenDNSManagedZoneEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandDNSManagedZoneDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1232,17 +1291,6 @@ func expandDNSManagedZoneName(v interface{}, d tpgresource.TerraformResourceData return v, nil } -func expandDNSManagedZoneLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandDNSManagedZoneVisibility(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1470,6 +1518,17 @@ func expandDNSManagedZoneCloudLoggingConfigEnableLogging(v interface{}, d tpgres return v, nil } +func expandDNSManagedZoneEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := 
make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceDNSManagedZoneUpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { // The upstream update method (https://cloud.google.com/dns/docs/reference/v1/managedZones/update) // requires the full ManagedZones object, therefore, we need to keep some input only values in the struct diff --git a/google/services/dns/resource_dns_managed_zone_generated_test.go b/google/services/dns/resource_dns_managed_zone_generated_test.go index f4d1a978072..56c866e466d 100644 --- a/google/services/dns/resource_dns_managed_zone_generated_test.go +++ b/google/services/dns/resource_dns_managed_zone_generated_test.go @@ -50,7 +50,7 @@ func TestAccDNSManagedZone_dnsManagedZoneQuickstartExample(t *testing.T) { ResourceName: "google_dns_managed_zone.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"force_destroy"}, + ImportStateVerifyIgnore: []string{"force_destroy", "labels", "terraform_labels"}, }, }, }) @@ -131,9 +131,10 @@ func TestAccDNSManagedZone_dnsRecordSetBasicExample(t *testing.T) { Config: testAccDNSManagedZone_dnsRecordSetBasicExample(context), }, { - ResourceName: "google_dns_managed_zone.parent-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.parent-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -178,9 +179,10 @@ func TestAccDNSManagedZone_dnsManagedZoneBasicExample(t *testing.T) { Config: testAccDNSManagedZone_dnsManagedZoneBasicExample(context), }, { - ResourceName: "google_dns_managed_zone.example-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.example-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -219,9 +221,10 @@ func TestAccDNSManagedZone_dnsManagedZonePrivateExample(t *testing.T) { Config: testAccDNSManagedZone_dnsManagedZonePrivateExample(context), }, { - ResourceName: "google_dns_managed_zone.private-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.private-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -279,9 +282,10 @@ func TestAccDNSManagedZone_dnsManagedZonePrivateMultiprojectExample(t *testing.T Config: testAccDNSManagedZone_dnsManagedZonePrivateMultiprojectExample(context), }, { - ResourceName: "google_dns_managed_zone.private-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.private-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -414,7 +418,8 @@ func TestAccDNSManagedZone_dnsManagedZonePrivateGkeExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -426,9 +431,10 @@ func TestAccDNSManagedZone_dnsManagedZonePrivateGkeExample(t *testing.T) { Config: testAccDNSManagedZone_dnsManagedZonePrivateGkeExample(context), }, { - ResourceName: "google_dns_managed_zone.private-zone-gke", - ImportState: true, - ImportStateVerify: true, + ResourceName: 
"google_dns_managed_zone.private-zone-gke", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -502,6 +508,7 @@ resource "google_container_cluster" "cluster-1" { cluster_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[1].range_name } + deletion_protection = "%{deletion_protection}" } `, context) } @@ -522,9 +529,10 @@ func TestAccDNSManagedZone_dnsManagedZonePrivatePeeringExample(t *testing.T) { Config: testAccDNSManagedZone_dnsManagedZonePrivatePeeringExample(context), }, { - ResourceName: "google_dns_managed_zone.peering-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.peering-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -580,9 +588,10 @@ func TestAccDNSManagedZone_dnsManagedZoneCloudLoggingExample(t *testing.T) { Config: testAccDNSManagedZone_dnsManagedZoneCloudLoggingExample(context), }, { - ResourceName: "google_dns_managed_zone.cloud-logging-enabled-zone", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.cloud-logging-enabled-zone", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/dns/resource_dns_managed_zone_test.go b/google/services/dns/resource_dns_managed_zone_test.go index 838ae32a35a..4d2c6995818 100644 --- a/google/services/dns/resource_dns_managed_zone_test.go +++ b/google/services/dns/resource_dns_managed_zone_test.go @@ -30,17 +30,19 @@ func TestAccDNSManagedZone_update(t *testing.T) { Config: testAccDnsManagedZone_basic(zoneSuffix, "description1", map[string]string{"foo": "bar", "ping": "pong"}), }, { - ResourceName: "google_dns_managed_zone.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDnsManagedZone_basic(zoneSuffix, "description2", map[string]string{"foo": "bar"}), }, { - ResourceName: "google_dns_managed_zone.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -172,25 +174,28 @@ func TestAccDNSManagedZone_cloudLoggingConfigUpdate(t *testing.T) { Config: testAccDnsManagedZone_cloudLoggingConfig_basic(zoneSuffix), }, { - ResourceName: "google_dns_managed_zone.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDnsManagedZone_cloudLoggingConfig_update(zoneSuffix, true), }, { - ResourceName: "google_dns_managed_zone.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_dns_managed_zone.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccDnsManagedZone_cloudLoggingConfig_update(zoneSuffix, false), }, { - ResourceName: "google_dns_managed_zone.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: 
"google_dns_managed_zone.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -415,6 +420,7 @@ resource "google_container_cluster" "cluster-1" { name = "tf-test-cluster-1-%s" location = "us-central1-c" initial_node_count = 1 + deletion_protection = false networking_mode = "VPC_NATIVE" default_snat_status { diff --git a/google/services/dns/resource_dns_policy.go b/google/services/dns/resource_dns_policy.go index 9f0435ffc0c..8a96b320c4d 100644 --- a/google/services/dns/resource_dns_policy.go +++ b/google/services/dns/resource_dns_policy.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -49,6 +50,10 @@ func ResourceDNSPolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -471,9 +476,9 @@ func resourceDNSPolicyDelete(d *schema.ResourceData, meta interface{}) error { func resourceDNSPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/policies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/policies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dns/resource_dns_record_set.go b/google/services/dns/resource_dns_record_set.go index a46fc56bf14..f476c6512fe 100644 --- a/google/services/dns/resource_dns_record_set.go +++ b/google/services/dns/resource_dns_record_set.go @@ -10,6 +10,7 @@ import ( "net" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -83,6 +84,10 @@ func ResourceDnsRecordSet() *schema.Resource { State: resourceDnsRecordSetImportState, }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "managed_zone": { Type: schema.TypeString, diff --git a/google/services/dns/resource_dns_response_policy.go b/google/services/dns/resource_dns_response_policy.go index dd3047c7c66..df37601d504 100644 --- a/google/services/dns/resource_dns_response_policy.go +++ b/google/services/dns/resource_dns_response_policy.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceDNSResponsePolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "response_policy_name": { Type: schema.TypeString, @@ -398,9 +403,9 @@ func resourceDNSResponsePolicyDelete(d *schema.ResourceData, meta interface{}) e func resourceDNSResponsePolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - 
"projects/(?P[^/]+)/responsePolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/responsePolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/dns/resource_dns_response_policy_generated_test.go b/google/services/dns/resource_dns_response_policy_generated_test.go index 195f0748a96..1c97611ca93 100644 --- a/google/services/dns/resource_dns_response_policy_generated_test.go +++ b/google/services/dns/resource_dns_response_policy_generated_test.go @@ -34,7 +34,8 @@ func TestAccDNSResponsePolicy_dnsResponsePolicyBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -110,6 +111,7 @@ resource "google_container_cluster" "cluster-1" { cluster_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[1].range_name } + deletion_protection = "%{deletion_protection}" } resource "google_dns_response_policy" "example-response-policy" { diff --git a/google/services/dns/resource_dns_response_policy_rule.go b/google/services/dns/resource_dns_response_policy_rule.go index b9e1ff4dafb..6135764572c 100644 --- a/google/services/dns/resource_dns_response_policy_rule.go +++ b/google/services/dns/resource_dns_response_policy_rule.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceDNSResponsePolicyRule() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dns_name": { Type: schema.TypeString, @@ -354,9 +359,9 @@ func resourceDNSResponsePolicyRuleDelete(d *schema.ResourceData, meta interface{ func resourceDNSResponsePolicyRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/responsePolicies/(?P[^/]+)/rules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/responsePolicies/(?P[^/]+)/rules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/documentai/resource_document_ai_processor.go b/google/services/documentai/resource_document_ai_processor.go index 85a2dd48f90..1f2b7fe3143 100644 --- a/google/services/documentai/resource_document_ai_processor.go +++ b/google/services/documentai/resource_document_ai_processor.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceDocumentAIProcessor() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -263,9 +268,9 @@ func 
resourceDocumentAIProcessorDelete(d *schema.ResourceData, meta interface{}) func resourceDocumentAIProcessorImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/processors/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/processors/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/documentai/resource_document_ai_processor_default_version.go b/google/services/documentai/resource_document_ai_processor_default_version.go index 8047a1942bc..8471de1eb20 100644 --- a/google/services/documentai/resource_document_ai_processor_default_version.go +++ b/google/services/documentai/resource_document_ai_processor_default_version.go @@ -183,7 +183,7 @@ func resourceDocumentAIProcessorDefaultVersionDelete(d *schema.ResourceData, met func resourceDocumentAIProcessorDefaultVersionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)", + "^(?P.+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/edgenetwork/resource_edgenetwork_network.go b/google/services/edgenetwork/resource_edgenetwork_network.go index 1afa781f88b..edbf2d97149 100644 --- a/google/services/edgenetwork/resource_edgenetwork_network.go +++ b/google/services/edgenetwork/resource_edgenetwork_network.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceEdgenetworkNetwork() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -313,11 +318,11 @@ func resourceEdgenetworkNetworkDelete(d *schema.ResourceData, meta interface{}) func resourceEdgenetworkNetworkImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/zones/(?P[^/]+)/networks/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/zones/(?P[^/]+)/networks/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/edgenetwork/resource_edgenetwork_subnet.go b/google/services/edgenetwork/resource_edgenetwork_subnet.go index 793de68a57c..f5238ff5307 100644 --- a/google/services/edgenetwork/resource_edgenetwork_subnet.go +++ b/google/services/edgenetwork/resource_edgenetwork_subnet.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceEdgenetworkSubnet() *schema.Resource { Delete: 
schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -374,11 +379,11 @@ func resourceEdgenetworkSubnetDelete(d *schema.ResourceData, meta interface{}) e func resourceEdgenetworkSubnetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/zones/(?P[^/]+)/subnets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/zones/(?P[^/]+)/subnets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/essentialcontacts/resource_essential_contacts_contact.go b/google/services/essentialcontacts/resource_essential_contacts_contact.go index 1c124a0f2cd..e25ef51516b 100644 --- a/google/services/essentialcontacts/resource_essential_contacts_contact.go +++ b/google/services/essentialcontacts/resource_essential_contacts_contact.go @@ -308,7 +308,7 @@ func resourceEssentialContactsContactDelete(d *schema.ResourceData, meta interfa func resourceEssentialContactsContactImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)", + "^(?P.+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/eventarc/resource_eventarc_channel.go b/google/services/eventarc/resource_eventarc_channel.go index 25e0bd58bf5..732316f6594 100644 --- a/google/services/eventarc/resource_eventarc_channel.go +++ b/google/services/eventarc/resource_eventarc_channel.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceEventarcChannel() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "location": { diff --git a/google/services/eventarc/resource_eventarc_google_channel_config.go b/google/services/eventarc/resource_eventarc_google_channel_config.go index 95c98697198..0670308f037 100644 --- a/google/services/eventarc/resource_eventarc_google_channel_config.go +++ b/google/services/eventarc/resource_eventarc_google_channel_config.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,9 @@ func ResourceEventarcGoogleChannelConfig() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "location": { diff --git a/google/services/eventarc/resource_eventarc_trigger.go b/google/services/eventarc/resource_eventarc_trigger.go index 6b5f79692f9..9b17b08fa35 100644 
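The import-ID patterns gain explicit `^`/`$` anchors throughout this change. As a rough illustration of why (a minimal standalone sketch using only the standard `regexp` package, not the provider's `tpgresource.ParseImportId` helper, and with made-up IDs), an unanchored pattern will still match an import ID that carries extra path segments, while the anchored form rejects it:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Unanchored, as the patterns were before this change.
	loose := regexp.MustCompile(`projects/(?P<project>[^/]+)/policies/(?P<name>[^/]+)`)
	// Anchored, as they are after this change: the pattern must consume the whole ID.
	strict := regexp.MustCompile(`^projects/(?P<project>[^/]+)/policies/(?P<name>[^/]+)$`)

	id := "v1/projects/my-proj/policies/my-policy/extra-segment"

	fmt.Println(loose.MatchString(id))  // true  -- a partial match slips through
	fmt.Println(strict.MatchString(id)) // false -- malformed IDs are rejected
}
```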
--- a/google/services/eventarc/resource_eventarc_trigger.go +++ b/google/services/eventarc/resource_eventarc_trigger.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceEventarcTrigger() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "destination": { @@ -90,6 +95,12 @@ func ResourceEventarcTrigger() *schema.Resource { Description: "Optional. The name of the channel associated with the trigger in `projects/{project}/locations/{location}/channels/{channel}` format. You must provide a channel to receive events from Eventarc SaaS partners.", }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", + }, + "event_data_content_type": { Type: schema.TypeString, Computed: true, @@ -97,13 +108,6 @@ func ResourceEventarcTrigger() *schema.Resource { Description: "Optional. EventDataContentType specifies the type of payload in MIME format that is expected from the CloudEvent data field. This is set to `application/json` if the value is not defined.", }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. User labels attached to the triggers that can be used to group resources.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "project": { Type: schema.TypeString, Computed: true, @@ -149,6 +153,19 @@ func ResourceEventarcTrigger() *schema.Resource { Description: "Output only. This checksum is computed by the server based on the value of other fields, and may be sent only on create requests to ensure the client has an up-to-date value before proceeding.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. User labels attached to the triggers that can be used to group resources.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "uid": { Type: schema.TypeString, Computed: true, @@ -335,8 +352,8 @@ func resourceEventarcTriggerCreate(d *schema.ResourceData, meta interface{}) err MatchingCriteria: expandEventarcTriggerMatchingCriteriaArray(d.Get("matching_criteria")), Name: dcl.String(d.Get("name").(string)), Channel: dcl.String(d.Get("channel").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), EventDataContentType: dcl.StringOrNil(d.Get("event_data_content_type").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), ServiceAccount: dcl.String(d.Get("service_account").(string)), Transport: expandEventarcTriggerTransport(d.Get("transport")), @@ -392,8 +409,8 @@ func resourceEventarcTriggerRead(d *schema.ResourceData, meta interface{}) error MatchingCriteria: expandEventarcTriggerMatchingCriteriaArray(d.Get("matching_criteria")), Name: dcl.String(d.Get("name").(string)), Channel: dcl.String(d.Get("channel").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), EventDataContentType: dcl.StringOrNil(d.Get("event_data_content_type").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), ServiceAccount: dcl.String(d.Get("service_account").(string)), Transport: expandEventarcTriggerTransport(d.Get("transport")), @@ -436,12 +453,12 @@ func resourceEventarcTriggerRead(d *schema.ResourceData, meta interface{}) error if err = d.Set("channel", res.Channel); err != nil { return fmt.Errorf("error setting channel in state: %s", err) } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) + } if err = d.Set("event_data_content_type", res.EventDataContentType); err != nil { return fmt.Errorf("error setting event_data_content_type in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) - } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } @@ -460,6 +477,12 @@ func resourceEventarcTriggerRead(d *schema.ResourceData, meta interface{}) error if err = d.Set("etag", res.Etag); err != nil { return fmt.Errorf("error setting etag in state: %s", err) } + if err = d.Set("labels", flattenEventarcTriggerLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } + if err = d.Set("terraform_labels", flattenEventarcTriggerTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("uid", res.Uid); err != nil { return fmt.Errorf("error setting uid in state: %s", err) } @@ -482,8 +505,8 @@ func resourceEventarcTriggerUpdate(d *schema.ResourceData, meta interface{}) err MatchingCriteria: expandEventarcTriggerMatchingCriteriaArray(d.Get("matching_criteria")), Name: dcl.String(d.Get("name").(string)), Channel: dcl.String(d.Get("channel").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), EventDataContentType: dcl.StringOrNil(d.Get("event_data_content_type").(string)), - Labels: 
tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), ServiceAccount: dcl.String(d.Get("service_account").(string)), Transport: expandEventarcTriggerTransport(d.Get("transport")), @@ -534,8 +557,8 @@ func resourceEventarcTriggerDelete(d *schema.ResourceData, meta interface{}) err MatchingCriteria: expandEventarcTriggerMatchingCriteriaArray(d.Get("matching_criteria")), Name: dcl.String(d.Get("name").(string)), Channel: dcl.String(d.Get("channel").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), EventDataContentType: dcl.StringOrNil(d.Get("event_data_content_type").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), ServiceAccount: dcl.String(d.Get("service_account").(string)), Transport: expandEventarcTriggerTransport(d.Get("transport")), @@ -796,3 +819,33 @@ func flattenEventarcTriggerTransportPubsub(obj *eventarc.TriggerTransportPubsub) return []interface{}{transformed} } + +func flattenEventarcTriggerLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenEventarcTriggerTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/eventarc/resource_eventarc_trigger_generated_test.go b/google/services/eventarc/resource_eventarc_trigger_generated_test.go index 7c42708051e..39157e18cdd 100644 --- a/google/services/eventarc/resource_eventarc_trigger_generated_test.go +++ b/google/services/eventarc/resource_eventarc_trigger_generated_test.go @@ -51,25 +51,28 @@ func TestAccEventarcTrigger_BasicHandWritten(t *testing.T) { Config: testAccEventarcTrigger_BasicHandWritten(context), }, { - ResourceName: "google_eventarc_trigger.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_eventarc_trigger.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccEventarcTrigger_BasicHandWrittenUpdate0(context), }, { - ResourceName: "google_eventarc_trigger.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_eventarc_trigger.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccEventarcTrigger_BasicHandWrittenUpdate1(context), }, { - ResourceName: "google_eventarc_trigger.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_eventarc_trigger.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/filestore/resource_filestore_backup.go b/google/services/filestore/resource_filestore_backup.go index 85185d73314..ad84ca506ad 100644 --- a/google/services/filestore/resource_filestore_backup.go +++ b/google/services/filestore/resource_filestore_backup.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
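With the labels rework, `labels` becomes non-authoritative on `google_eventarc_trigger`: on read, the full label map returned by the API is filtered down to the keys present in configuration, while `effective_labels` keeps the complete server-side view. A simplified sketch of that filtering step, using plain maps instead of the provider's `schema.ResourceData`:

```go
package main

import "fmt"

// filterToConfigured keeps only the label keys the user configured, taking the
// values from the API response. This approximates the new flatten*Labels
// helpers in this change; the real ones read the configured keys from state.
func filterToConfigured(apiLabels, configured map[string]string) map[string]string {
	out := make(map[string]string, len(configured))
	for k := range configured {
		out[k] = apiLabels[k]
	}
	return out
}

func main() {
	api := map[string]string{
		"env":        "prod",       // configured in Terraform
		"managed-by": "other-tool", // attached out of band
	}
	cfg := map[string]string{"env": "prod"}

	fmt.Println(filterToConfigured(api, cfg)) // map[env:prod]        -> `labels`
	fmt.Println(api)                          // full server-side map -> `effective_labels`
}
```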
"github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceFilestoreBackup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -84,10 +90,14 @@ character, which cannot be a dash.`, Description: `A description of the backup with 2048 characters or less. Requests with longer descriptions will be rejected.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user-provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user-provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "capacity_gb": { Type: schema.TypeString, @@ -104,6 +114,12 @@ character, which cannot be a dash.`, Computed: true, Description: `Amount of bytes that will be downloaded if the backup is restored.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "kms_key_name": { Type: schema.TypeString, Computed: true, @@ -124,6 +140,13 @@ character, which cannot be a dash.`, Computed: true, Description: `The size of the storage used by the backup. As backups share storage, this number is expected to change with backup creation/deletion.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -149,12 +172,6 @@ func resourceFilestoreBackupCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandFilestoreBackupLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } sourceInstanceProp, err := expandFilestoreBackupSourceInstance(d.Get("source_instance"), d, config) if err != nil { return err @@ -167,6 +184,12 @@ func resourceFilestoreBackupCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("source_file_share"); !tpgresource.IsEmptyValue(reflect.ValueOf(sourceFileShareProp)) && (ok || !reflect.DeepEqual(v, sourceFileShareProp)) { obj["sourceFileShare"] = sourceFileShareProp } + labelsProp, err := expandFilestoreBackupEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } lockName, err := 
tpgresource.ReplaceVars(d, config, "filestore/{{project}}") if err != nil { @@ -314,6 +337,12 @@ func resourceFilestoreBackupRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("kms_key_name", flattenFilestoreBackupKmsKeyName(res["kmsKeyName"], d, config)); err != nil { return fmt.Errorf("Error reading Backup: %s", err) } + if err := d.Set("terraform_labels", flattenFilestoreBackupTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Backup: %s", err) + } + if err := d.Set("effective_labels", flattenFilestoreBackupEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Backup: %s", err) + } return nil } @@ -340,18 +369,18 @@ func resourceFilestoreBackupUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandFilestoreBackupLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } sourceInstanceProp, err := expandFilestoreBackupSourceInstance(d.Get("source_instance"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("source_instance"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, sourceInstanceProp)) { obj["sourceInstance"] = sourceInstanceProp } + labelsProp, err := expandFilestoreBackupEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } lockName, err := tpgresource.ReplaceVars(d, config, "filestore/{{project}}") if err != nil { @@ -372,13 +401,13 @@ func resourceFilestoreBackupUpdate(d *schema.ResourceData, meta interface{}) err updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("source_instance") { updateMask = append(updateMask, "sourceInstance") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -483,9 +512,9 @@ func resourceFilestoreBackupDelete(d *schema.ResourceData, meta interface{}) err func resourceFilestoreBackupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/backups/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/backups/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -513,7 +542,18 @@ func flattenFilestoreBackupCreateTime(v interface{}, d *schema.ResourceData, con } func flattenFilestoreBackupLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); 
ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenFilestoreBackupCapacityGb(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -544,11 +584,38 @@ func flattenFilestoreBackupKmsKeyName(v interface{}, d *schema.ResourceData, con return v } +func flattenFilestoreBackupTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenFilestoreBackupEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandFilestoreBackupDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandFilestoreBackupLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandFilestoreBackupSourceInstance(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandFilestoreBackupSourceFileShare(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandFilestoreBackupEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -558,11 +625,3 @@ func expandFilestoreBackupLabels(v interface{}, d tpgresource.TerraformResourceD } return m, nil } - -func expandFilestoreBackupSourceInstance(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandFilestoreBackupSourceFileShare(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/filestore/resource_filestore_backup_generated_test.go b/google/services/filestore/resource_filestore_backup_generated_test.go index d664f44491d..a9087789b3b 100644 --- a/google/services/filestore/resource_filestore_backup_generated_test.go +++ b/google/services/filestore/resource_filestore_backup_generated_test.go @@ -49,7 +49,7 @@ func TestAccFilestoreBackup_filestoreBackupBasicExample(t *testing.T) { ResourceName: "google_filestore_backup.backup", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/filestore/resource_filestore_backup_test.go b/google/services/filestore/resource_filestore_backup_test.go index 92bc2978087..6b2757641d7 100644 --- a/google/services/filestore/resource_filestore_backup_test.go +++ b/google/services/filestore/resource_filestore_backup_test.go @@ -37,7 +37,7 @@ func TestAccFilestoreBackup_update(t *testing.T) { ResourceName: "google_filestore_backup.backup", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"labels", "description", "location"}, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels", "description", "location"}, }, }, }) diff --git 
a/google/services/filestore/resource_filestore_instance.go b/google/services/filestore/resource_filestore_instance.go index 5d89b97beea..0e964d4c933 100644 --- a/google/services/filestore/resource_filestore_instance.go +++ b/google/services/filestore/resource_filestore_instance.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -59,6 +60,10 @@ func ResourceFilestoreInstance() *schema.Resource { Version: 0, }, }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "file_shares": { @@ -222,10 +227,14 @@ Possible values include: STANDARD, PREMIUM, BASIC_HDD, BASIC_SSD, HIGH_SCALE_SSD Description: `KMS key name used for data encryption.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user-provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user-provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { Type: schema.TypeString, @@ -249,12 +258,25 @@ Possible values include: STANDARD, PREMIUM, BASIC_HDD, BASIC_SSD, HIGH_SCALE_SSD Computed: true, Description: `Creation timestamp in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, Description: `Server-specified ETag for the instance resource to prevent simultaneous updates from overwriting each other.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -286,12 +308,6 @@ func resourceFilestoreInstanceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("tier"); !tpgresource.IsEmptyValue(reflect.ValueOf(tierProp)) && (ok || !reflect.DeepEqual(v, tierProp)) { obj["tier"] = tierProp } - labelsProp, err := expandFilestoreInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } fileSharesProp, err := expandFilestoreInstanceFileShares(d.Get("file_shares"), d, config) if err != nil { return err @@ -310,6 +326,12 @@ func resourceFilestoreInstanceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("kms_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(kmsKeyNameProp)) && (ok || !reflect.DeepEqual(v, kmsKeyNameProp)) { obj["kmsKeyName"] = kmsKeyNameProp } + labelsProp, err := expandFilestoreInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err 
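The Filestore resources adopt the same three-field model, where `terraform_labels` is described as the labels configured on the resource combined with the provider-level default labels. A hedged sketch of that merge precedence using plain maps; the real logic lives in `tpgresource.SetLabelsDiff` and is not reproduced here:

```go
package main

import "fmt"

// mergeLabels combines provider default labels with resource-level labels;
// resource-level values win on key conflicts. This is only an illustration of
// the documented precedence, not the provider's implementation.
func mergeLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range resourceLabels {
		merged[k] = v
	}
	return merged
}

func main() {
	providerDefaults := map[string]string{"team": "storage", "env": "dev"}
	resourceLabels := map[string]string{"env": "prod"}

	fmt.Println(mergeLabels(providerDefaults, resourceLabels)) // map[env:prod team:storage]
}
```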
+ } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{FilestoreBasePath}}projects/{{project}}/locations/{{location}}/instances?instanceId={{name}}") if err != nil { @@ -458,6 +480,12 @@ func resourceFilestoreInstanceRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("kms_key_name", flattenFilestoreInstanceKmsKeyName(res["kmsKeyName"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenFilestoreInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenFilestoreInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -484,18 +512,18 @@ func resourceFilestoreInstanceUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandFilestoreInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } fileSharesProp, err := expandFilestoreInstanceFileShares(d.Get("file_shares"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("file_shares"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, fileSharesProp)) { obj["fileShares"] = fileSharesProp } + labelsProp, err := expandFilestoreInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{FilestoreBasePath}}projects/{{project}}/locations/{{location}}/instances/{{name}}") if err != nil { @@ -509,13 +537,13 @@ func resourceFilestoreInstanceUpdate(d *schema.ResourceData, meta interface{}) e updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("file_shares") { updateMask = append(updateMask, "fileShares") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -613,9 +641,9 @@ func resourceFilestoreInstanceDelete(d *schema.ResourceData, meta interface{}) e func resourceFilestoreInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -643,7 +671,18 @@ func 
flattenFilestoreInstanceTier(v interface{}, d *schema.ResourceData, config } func flattenFilestoreInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenFilestoreInstanceFileShares(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -814,6 +853,25 @@ func flattenFilestoreInstanceKmsKeyName(v interface{}, d *schema.ResourceData, c return v } +func flattenFilestoreInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenFilestoreInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandFilestoreInstanceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -822,17 +880,6 @@ func expandFilestoreInstanceTier(v interface{}, d tpgresource.TerraformResourceD return v, nil } -func expandFilestoreInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandFilestoreInstanceFileShares(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) @@ -1032,6 +1079,17 @@ func expandFilestoreInstanceKmsKeyName(v interface{}, d tpgresource.TerraformRes return v, nil } +func expandFilestoreInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceFilestoreInstanceResourceV0() *schema.Resource { return &schema.Resource{ Schema: map[string]*schema.Schema{ diff --git a/google/services/filestore/resource_filestore_instance_generated_test.go b/google/services/filestore/resource_filestore_instance_generated_test.go index a59c2066325..37f06af97d9 100644 --- a/google/services/filestore/resource_filestore_instance_generated_test.go +++ b/google/services/filestore/resource_filestore_instance_generated_test.go @@ -49,7 +49,7 @@ func TestAccFilestoreInstance_filestoreInstanceBasicExample(t *testing.T) { ResourceName: "google_filestore_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "zone", "location"}, + ImportStateVerifyIgnore: []string{"name", "zone", "location", "labels", "terraform_labels"}, }, }, }) @@ -94,7 +94,7 @@ func TestAccFilestoreInstance_filestoreInstanceFullExample(t *testing.T) { ResourceName: "google_filestore_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"name", "zone", "location"}, + ImportStateVerifyIgnore: []string{"name", "zone", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/filestore/resource_filestore_instance_test.go b/google/services/filestore/resource_filestore_instance_test.go index fdb07f29292..f410cbe519b 100644 --- a/google/services/filestore/resource_filestore_instance_test.go +++ b/google/services/filestore/resource_filestore_instance_test.go @@ -58,7 +58,7 @@ func TestAccFilestoreInstance_update(t *testing.T) { ResourceName: "google_filestore_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"zone", "location"}, + ImportStateVerifyIgnore: []string{"zone", "location", "labels", "terraform_labels"}, }, { Config: testAccFilestoreInstance_update2(name), diff --git a/google/services/filestore/resource_filestore_snapshot.go b/google/services/filestore/resource_filestore_snapshot.go index 0f1d820ea52..357ca3ab6d6 100644 --- a/google/services/filestore/resource_filestore_snapshot.go +++ b/google/services/filestore/resource_filestore_snapshot.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceFilestoreSnapshot() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "instance": { Type: schema.TypeString, @@ -79,16 +85,26 @@ character, which cannot be a dash.`, Description: `A description of the snapshot with 2048 characters or less. Requests with longer descriptions will be rejected.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user-provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user-provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "create_time": { Type: schema.TypeString, Computed: true, Description: `The time when the snapshot was created in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "filesystem_used_bytes": { Type: schema.TypeString, Computed: true, @@ -99,6 +115,13 @@ character, which cannot be a dash.`, Computed: true, Description: `The snapshot state.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -124,10 +147,10 @@ func resourceFilestoreSnapshotCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandFilestoreSnapshotLabels(d.Get("labels"), d, config) + labelsProp, err := expandFilestoreSnapshotEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -259,6 +282,12 @@ func resourceFilestoreSnapshotRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("filesystem_used_bytes", flattenFilestoreSnapshotFilesystemUsedBytes(res["filesystemUsedBytes"], d, config)); err != nil { return fmt.Errorf("Error reading Snapshot: %s", err) } + if err := d.Set("terraform_labels", flattenFilestoreSnapshotTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Snapshot: %s", err) + } + if err := d.Set("effective_labels", flattenFilestoreSnapshotEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Snapshot: %s", err) + } return nil } @@ -285,10 +314,10 @@ func resourceFilestoreSnapshotUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandFilestoreSnapshotLabels(d.Get("labels"), d, config) + labelsProp, err := expandFilestoreSnapshotEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -311,7 +340,7 @@ func resourceFilestoreSnapshotUpdate(d *schema.ResourceData, meta interface{}) e updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if 
d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -418,9 +447,9 @@ func resourceFilestoreSnapshotDelete(d *schema.ResourceData, meta interface{}) e func resourceFilestoreSnapshotImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)/snapshots/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)/snapshots/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -448,18 +477,48 @@ func flattenFilestoreSnapshotCreateTime(v interface{}, d *schema.ResourceData, c } func flattenFilestoreSnapshotLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenFilestoreSnapshotFilesystemUsedBytes(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } +func flattenFilestoreSnapshotTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenFilestoreSnapshotEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandFilestoreSnapshotDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } -func expandFilestoreSnapshotLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandFilestoreSnapshotEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/filestore/resource_filestore_snapshot_generated_test.go b/google/services/filestore/resource_filestore_snapshot_generated_test.go index 5384e55825c..ebb1cb0fb4c 100644 --- a/google/services/filestore/resource_filestore_snapshot_generated_test.go +++ b/google/services/filestore/resource_filestore_snapshot_generated_test.go @@ -49,7 +49,7 @@ func TestAccFilestoreSnapshot_filestoreSnapshotBasicExample(t *testing.T) { ResourceName: "google_filestore_snapshot.snapshot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "instance"}, + ImportStateVerifyIgnore: []string{"name", "location", "instance", "labels", "terraform_labels"}, }, }, }) @@ -100,7 +100,7 @@ func TestAccFilestoreSnapshot_filestoreSnapshotFullExample(t *testing.T) { ResourceName: "google_filestore_snapshot.snapshot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "instance"}, + ImportStateVerifyIgnore: []string{"name", "location", 
"instance", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/firebaserules/resource_firebaserules_release.go b/google/services/firebaserules/resource_firebaserules_release.go index b31fa378484..9d974a7e40c 100644 --- a/google/services/firebaserules/resource_firebaserules_release.go +++ b/google/services/firebaserules/resource_firebaserules_release.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -38,7 +39,6 @@ func ResourceFirebaserulesRelease() *schema.Resource { return &schema.Resource{ Create: resourceFirebaserulesReleaseCreate, Read: resourceFirebaserulesReleaseRead, - Update: resourceFirebaserulesReleaseUpdate, Delete: resourceFirebaserulesReleaseDelete, Importer: &schema.ResourceImporter{ @@ -47,9 +47,11 @@ func ResourceFirebaserulesRelease() *schema.Resource { Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "name": { @@ -62,6 +64,7 @@ func ResourceFirebaserulesRelease() *schema.Resource { "ruleset_name": { Type: schema.TypeString, Required: true, + ForceNew: true, DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, Description: "Name of the `Ruleset` referred to by this `Release`. The `Ruleset` must exist for the `Release` to be created.", }, @@ -202,50 +205,6 @@ func resourceFirebaserulesReleaseRead(d *schema.ResourceData, meta interface{}) return nil } -func resourceFirebaserulesReleaseUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - project, err := tpgresource.GetProject(d, config) - if err != nil { - return err - } - - obj := &firebaserules.Release{ - Name: dcl.String(d.Get("name").(string)), - RulesetName: dcl.String(d.Get("ruleset_name").(string)), - Project: dcl.String(project), - } - directive := tpgdclresource.UpdateDirective - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - client := transport_tpg.NewDCLFirebaserulesClient(config, userAgent, billingProject, d.Timeout(schema.TimeoutUpdate)) - if bp, err := tpgresource.ReplaceVars(d, config, client.Config.BasePath); err != nil { - d.SetId("") - return fmt.Errorf("Could not format %q: %w", client.Config.BasePath, err) - } else { - client.Config.BasePath = bp - } - res, err := client.ApplyRelease(context.Background(), obj, directive...) 
- - if _, ok := err.(dcl.DiffAfterApplyError); ok { - log.Printf("[DEBUG] Diff after apply returned from the DCL: %s", err) - } else if err != nil { - // The resource didn't actually create - d.SetId("") - return fmt.Errorf("Error updating Release: %s", err) - } - - log.Printf("[DEBUG] Finished creating Release %q: %#v", d.Id(), res) - - return resourceFirebaserulesReleaseRead(d, meta) -} func resourceFirebaserulesReleaseDelete(d *schema.ResourceData, meta interface{}) error { config := meta.(*transport_tpg.Config) diff --git a/google/services/firebaserules/resource_firebaserules_release_generated_test.go b/google/services/firebaserules/resource_firebaserules_release_generated_test.go index ada832ddf0d..4d5c937b5c8 100644 --- a/google/services/firebaserules/resource_firebaserules_release_generated_test.go +++ b/google/services/firebaserules/resource_firebaserules_release_generated_test.go @@ -54,14 +54,6 @@ func TestAccFirebaserulesRelease_FirestoreReleaseHandWritten(t *testing.T) { ImportState: true, ImportStateVerify: true, }, - { - Config: testAccFirebaserulesRelease_FirestoreReleaseHandWrittenUpdate0(context), - }, - { - ResourceName: "google_firebaserules_release.primary", - ImportState: true, - ImportStateVerify: true, - }, }, }) } @@ -94,34 +86,6 @@ resource "google_firebaserules_ruleset" "firestore" { `, context) } -func testAccFirebaserulesRelease_FirestoreReleaseHandWrittenUpdate0(context map[string]interface{}) string { - return acctest.Nprintf(` -resource "google_firebaserules_release" "primary" { - name = "cloud.firestore" - ruleset_name = "projects/%{project_name}/rulesets/${google_firebaserules_ruleset.firestore.name}" - project = "%{project_name}" - - lifecycle { - replace_triggered_by = [ - google_firebaserules_ruleset.firestore - ] - } -} - -resource "google_firebaserules_ruleset" "firestore" { - source { - files { - content = "service cloud.firestore {match /databases/{database}/documents { match /{document=**} { allow read, write: if request.auth != null; } } }" - name = "firestore.rules" - } - } - - project = "%{project_name}" -} - -`, context) -} - func testAccCheckFirebaserulesReleaseDestroyProducer(t *testing.T) func(s *terraform.State) error { return func(s *terraform.State) error { for name, rs := range s.RootModule().Resources { diff --git a/google/services/firebaserules/resource_firebaserules_ruleset.go b/google/services/firebaserules/resource_firebaserules_ruleset.go index 8bd5b74807e..2737785e43e 100644 --- a/google/services/firebaserules/resource_firebaserules_ruleset.go +++ b/google/services/firebaserules/resource_firebaserules_ruleset.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -48,6 +49,9 @@ func ResourceFirebaserulesRuleset() *schema.Resource { Create: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "source": { diff --git a/google/services/firestore/resource_firestore_database.go b/google/services/firestore/resource_firestore_database.go index 605e721c782..e5a2db6203f 100644 --- a/google/services/firestore/resource_firestore_database.go +++ b/google/services/firestore/resource_firestore_database.go @@ -24,6 +24,7 @@ import ( "strings" "time" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceFirestoreDatabase() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location_id": { Type: schema.TypeString, @@ -509,9 +514,9 @@ func resourceFirestoreDatabaseDelete(d *schema.ResourceData, meta interface{}) e func resourceFirestoreDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/databases/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/databases/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/firestore/resource_firestore_document.go b/google/services/firestore/resource_firestore_document.go index 00e005621b7..4655dee8242 100644 --- a/google/services/firestore/resource_firestore_document.go +++ b/google/services/firestore/resource_firestore_document.go @@ -25,6 +25,7 @@ import ( "regexp" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -50,6 +51,10 @@ func ResourceFirestoreDocument() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "collection": { Type: schema.TypeString, diff --git a/google/services/firestore/resource_firestore_field.go b/google/services/firestore/resource_firestore_field.go index dc981aeb262..0128a05c1bd 100644 --- a/google/services/firestore/resource_firestore_field.go +++ b/google/services/firestore/resource_firestore_field.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceFirestoreField() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "collection": { Type: schema.TypeString, diff --git a/google/services/firestore/resource_firestore_index.go b/google/services/firestore/resource_firestore_index.go index e2d94a12d0a..67919ecfed3 100644 --- a/google/services/firestore/resource_firestore_index.go +++ b/google/services/firestore/resource_firestore_index.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -83,6 +84,10 @@ func ResourceFirestoreIndex() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "collection": { Type: schema.TypeString, diff --git 
a/google/services/gameservices/data_source_google_game_services_game_server_deployment_rollout.go b/google/services/gameservices/data_source_google_game_services_game_server_deployment_rollout.go deleted file mode 100644 index 6d5d7f59d5e..00000000000 --- a/google/services/gameservices/data_source_google_game_services_game_server_deployment_rollout.go +++ /dev/null @@ -1,35 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 -package gameservices - -import ( - "fmt" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func DataSourceGameServicesGameServerDeploymentRollout() *schema.Resource { - - dsSchema := tpgresource.DatasourceSchemaFromResourceSchema(ResourceGameServicesGameServerDeploymentRollout().Schema) - tpgresource.AddRequiredFieldsToSchema(dsSchema, "deployment_id") - - return &schema.Resource{ - Read: dataSourceGameServicesGameServerDeploymentRolloutRead, - Schema: dsSchema, - } -} - -func dataSourceGameServicesGameServerDeploymentRolloutRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - - d.SetId(id) - - return resourceGameServicesGameServerDeploymentRolloutRead(d, meta) -} diff --git a/google/services/gameservices/game_services_operation.go b/google/services/gameservices/game_services_operation.go deleted file mode 100644 index 460bcd74e2b..00000000000 --- a/google/services/gameservices/game_services_operation.go +++ /dev/null @@ -1,92 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "encoding/json" - "errors" - "fmt" - "time" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -type GameServicesOperationWaiter struct { - Config *transport_tpg.Config - UserAgent string - Project string - tpgresource.CommonOperationWaiter -} - -func (w *GameServicesOperationWaiter) QueryOp() (interface{}, error) { - if w == nil { - return nil, fmt.Errorf("Cannot query operation, it's unset or nil.") - } - // Returns the proper get. 
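One functional change buried in the firestore hunks above is that the database import patterns are now anchored with `^` and `$`. Assuming `tpgresource.ParseImportId` tries each pattern as an ordinary unanchored regexp search (the behaviour the added anchors guard against), here is a standalone sketch of why that matters, with named capture groups such as `(?P<project>…)` and `(?P<name>…)` written out:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Not a valid database import ID: it carries trailing segments.
	id := "projects/my-proj/databases/my-db/documents/doc-1"

	unanchored := regexp.MustCompile(`projects/(?P<project>[^/]+)/databases/(?P<name>[^/]+)`)
	anchored := regexp.MustCompile(`^projects/(?P<project>[^/]+)/databases/(?P<name>[^/]+)$`)

	// The unanchored pattern matches a prefix of the malformed ID and would
	// happily treat it as database "my-db".
	fmt.Println(unanchored.MatchString(id)) // true

	// The anchored pattern only accepts the exact documented forms.
	fmt.Println(anchored.MatchString(id)) // false
}
```

The same reasoning applies to the catch-all single-segment form, which without anchors would match one segment inside any longer string.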
- url := fmt.Sprintf("%s%s", w.Config.GameServicesBasePath, w.CommonOperationWaiter.Op.Name) - - return transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: w.Config, - Method: "GET", - Project: w.Project, - RawURL: url, - UserAgent: w.UserAgent, - }) -} - -func createGameServicesWaiter(config *transport_tpg.Config, op map[string]interface{}, project, activity, userAgent string) (*GameServicesOperationWaiter, error) { - w := &GameServicesOperationWaiter{ - Config: config, - UserAgent: userAgent, - Project: project, - } - if err := w.CommonOperationWaiter.SetOp(op); err != nil { - return nil, err - } - return w, nil -} - -// nolint: deadcode,unused -func GameServicesOperationWaitTimeWithResponse(config *transport_tpg.Config, op map[string]interface{}, response *map[string]interface{}, project, activity, userAgent string, timeout time.Duration) error { - w, err := createGameServicesWaiter(config, op, project, activity, userAgent) - if err != nil { - return err - } - if err := tpgresource.OperationWait(w, activity, timeout, config.PollInterval); err != nil { - return err - } - rawResponse := []byte(w.CommonOperationWaiter.Op.Response) - if len(rawResponse) == 0 { - return errors.New("`resource` not set in operation response") - } - return json.Unmarshal(rawResponse, response) -} - -func GameServicesOperationWaitTime(config *transport_tpg.Config, op map[string]interface{}, project, activity, userAgent string, timeout time.Duration) error { - if val, ok := op["name"]; !ok || val == "" { - // This was a synchronous call - there is no operation to wait for. - return nil - } - w, err := createGameServicesWaiter(config, op, project, activity, userAgent) - if err != nil { - // If w is nil, the op was synchronous. - return err - } - return tpgresource.OperationWait(w, activity, timeout, config.PollInterval) -} diff --git a/google/services/gameservices/resource_game_services_game_server_cluster.go b/google/services/gameservices/resource_game_services_game_server_cluster.go deleted file mode 100644 index 48d95365c58..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_cluster.go +++ /dev/null @@ -1,578 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
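The first hand-written helper in the cluster resource that follows, `suppressSuffixDiff`, is a `DiffSuppressFunc` that treats a configured short cluster name as equal to the full path the API stores in state. A standalone sketch of the comparison it performs (the sample paths are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// suppressSuffixDiff-style check: "old" is the full path held in state,
// "new" is whatever form the user configured. If old ends with new, the
// two refer to the same cluster and no diff should be shown.
func suppress(old, new string) bool {
	return strings.HasSuffix(old, new)
}

func main() {
	old := "projects/my-proj/locations/us-central1/clusters/gke-1"
	fmt.Println(suppress(old, "locations/us-central1/clusters/gke-1")) // true: same cluster, diff suppressed
	fmt.Println(suppress(old, "clusters/gke-2"))                       // false: genuinely different cluster
}
```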
-// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "fmt" - "log" - "reflect" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func suppressSuffixDiff(_, old, new string, _ *schema.ResourceData) bool { - if strings.HasSuffix(old, new) { - log.Printf("[INFO] suppressing diff as %s is the same as the full path of %s", new, old) - return true - } - - return false -} - -func ResourceGameServicesGameServerCluster() *schema.Resource { - return &schema.Resource{ - Create: resourceGameServicesGameServerClusterCreate, - Read: resourceGameServicesGameServerClusterRead, - Update: resourceGameServicesGameServerClusterUpdate, - Delete: resourceGameServicesGameServerClusterDelete, - - Importer: &schema.ResourceImporter{ - State: resourceGameServicesGameServerClusterImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - Schema: map[string]*schema.Schema{ - "cluster_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `Required. The resource name of the game server cluster`, - }, - "connection_info": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - Description: `Game server cluster connection information. This information is used to -manage game server clusters.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "gke_cluster_reference": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - Description: `Reference of the GKE cluster where the game servers are installed.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "cluster": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: suppressSuffixDiff, - Description: `The full or partial name of a GKE cluster, using one of the following -forms: - -* 'projects/{project_id}/locations/{location}/clusters/{cluster_id}' -* 'locations/{location}/clusters/{cluster_id}' -* '{cluster_id}' - -If project and location are not specified, the project and location of the -GameServerCluster resource are used to generate the full name of the -GKE cluster.`, - }, - }, - }, - }, - "namespace": { - Type: schema.TypeString, - Required: true, - Description: `Namespace designated on the game server cluster where the game server -instances will be created. The namespace existence will be validated -during creation.`, - }, - }, - }, - }, - "realm_id": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, - Description: `The realm id of the game server realm.`, - }, - "description": { - Type: schema.TypeString, - Optional: true, - Description: `Human readable description of the cluster.`, - }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels associated with this game server cluster. 
Each label is a -key-value pair.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "location": { - Type: schema.TypeString, - Optional: true, - Description: `Location of the Cluster.`, - Default: "global", - }, - "name": { - Type: schema.TypeString, - Computed: true, - Description: `The resource id of the game server cluster, eg: - -'projects/{project_id}/locations/{location}/realms/{realm_id}/gameServerClusters/{cluster_id}'. -For example, - -'projects/my-project/locations/{location}/realms/zanzibar/gameServerClusters/my-onprem-cluster'.`, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceGameServicesGameServerClusterCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - labelsProp, err := expandGameServicesGameServerClusterLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - connectionInfoProp, err := expandGameServicesGameServerClusterConnectionInfo(d.Get("connection_info"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("connection_info"); !tpgresource.IsEmptyValue(reflect.ValueOf(connectionInfoProp)) && (ok || !reflect.DeepEqual(v, connectionInfoProp)) { - obj["connectionInfo"] = connectionInfoProp - } - descriptionProp, err := expandGameServicesGameServerClusterDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters?gameServerClusterId={{cluster_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new GameServerCluster: %#v", obj) - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerCluster: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating GameServerCluster: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - // Use the resource in the operation response to populate - // identity fields and d.Id() before read - var opRes map[string]interface{} - err = GameServicesOperationWaitTimeWithResponse( - config, res, &opRes, project, "Creating GameServerCluster", userAgent, - 
d.Timeout(schema.TimeoutCreate)) - if err != nil { - // The resource didn't actually create - d.SetId("") - - return fmt.Errorf("Error waiting to create GameServerCluster: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerClusterName(opRes["name"], d, config)); err != nil { - return err - } - - // This may have caused the ID to update - update it if so. - id, err = tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating GameServerCluster %q: %#v", d.Id(), res) - - return resourceGameServicesGameServerClusterRead(d, meta) -} - -func resourceGameServicesGameServerClusterRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerCluster: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GameServicesGameServerCluster %q", d.Id())) - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading GameServerCluster: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerClusterName(res["name"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerCluster: %s", err) - } - if err := d.Set("labels", flattenGameServicesGameServerClusterLabels(res["labels"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerCluster: %s", err) - } - if err := d.Set("connection_info", flattenGameServicesGameServerClusterConnectionInfo(res["connectionInfo"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerCluster: %s", err) - } - if err := d.Set("description", flattenGameServicesGameServerClusterDescription(res["description"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerCluster: %s", err) - } - - return nil -} - -func resourceGameServicesGameServerClusterUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerCluster: %s", err) - } - billingProject = project - - obj := make(map[string]interface{}) - labelsProp, err := expandGameServicesGameServerClusterLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
labelsProp)) { - obj["labels"] = labelsProp - } - descriptionProp, err := expandGameServicesGameServerClusterDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Updating GameServerCluster %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - - if d.HasChange("description") { - updateMask = append(updateMask, "description") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutUpdate), - }) - - if err != nil { - return fmt.Errorf("Error updating GameServerCluster %q: %s", d.Id(), err) - } else { - log.Printf("[DEBUG] Finished updating GameServerCluster %q: %#v", d.Id(), res) - } - - err = GameServicesOperationWaitTime( - config, res, project, "Updating GameServerCluster", userAgent, - d.Timeout(schema.TimeoutUpdate)) - - if err != nil { - return err - } - - return resourceGameServicesGameServerClusterRead(d, meta) -} - -func resourceGameServicesGameServerClusterDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerCluster: %s", err) - } - billingProject = project - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting GameServerCluster %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GameServerCluster") - } - - err = GameServicesOperationWaitTime( - config, res, project, "Deleting GameServerCluster", userAgent, - d.Timeout(schema.TimeoutDelete)) - - if err != nil { - return err - } - - log.Printf("[DEBUG] Finished deleting GameServerCluster %q: %#v", d.Id(), res) - return nil -} - -func resourceGameServicesGameServerClusterImport(d 
*schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/realms/(?P[^/]+)/gameServerClusters/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}") - if err != nil { - return nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenGameServicesGameServerClusterName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerClusterLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerClusterConnectionInfo(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["gke_cluster_reference"] = - flattenGameServicesGameServerClusterConnectionInfoGkeClusterReference(original["gkeClusterReference"], d, config) - transformed["namespace"] = - flattenGameServicesGameServerClusterConnectionInfoNamespace(original["namespace"], d, config) - return []interface{}{transformed} -} -func flattenGameServicesGameServerClusterConnectionInfoGkeClusterReference(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["cluster"] = - flattenGameServicesGameServerClusterConnectionInfoGkeClusterReferenceCluster(original["cluster"], d, config) - return []interface{}{transformed} -} -func flattenGameServicesGameServerClusterConnectionInfoGkeClusterReferenceCluster(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerClusterConnectionInfoNamespace(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerClusterDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandGameServicesGameServerClusterLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandGameServicesGameServerClusterConnectionInfo(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedGkeClusterReference, err := expandGameServicesGameServerClusterConnectionInfoGkeClusterReference(original["gke_cluster_reference"], d, config) - if err != nil { - return nil, err - } else if val := 
reflect.ValueOf(transformedGkeClusterReference); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["gkeClusterReference"] = transformedGkeClusterReference - } - - transformedNamespace, err := expandGameServicesGameServerClusterConnectionInfoNamespace(original["namespace"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedNamespace); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["namespace"] = transformedNamespace - } - - return transformed, nil -} - -func expandGameServicesGameServerClusterConnectionInfoGkeClusterReference(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedCluster, err := expandGameServicesGameServerClusterConnectionInfoGkeClusterReferenceCluster(original["cluster"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedCluster); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["cluster"] = transformedCluster - } - - return transformed, nil -} - -func expandGameServicesGameServerClusterConnectionInfoGkeClusterReferenceCluster(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerClusterConnectionInfoNamespace(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerClusterDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_config.go b/google/services/gameservices/resource_game_services_game_server_config.go deleted file mode 100644 index 598e6729df2..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_config.go +++ /dev/null @@ -1,771 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
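Most of the deleted cluster resource above, and the config resource that follows, is the usual generated expand/flatten boilerplate: a single-item nested block such as `connection_info` arrives from Terraform as a list of at most one map with snake_case keys, is renamed into the API's camelCase shape on the way out, and is rebuilt in reverse on read. A stripped-down, self-contained sketch of that round trip (field names borrowed from the deleted code, helper logic simplified):

```go
package main

import "fmt"

// expandConnectionInfo mimics the generated expanders: unwrap the one-element
// nested block lists and rename snake_case keys to the API's camelCase.
func expandConnectionInfo(v interface{}) map[string]interface{} {
	l, _ := v.([]interface{})
	if len(l) == 0 || l[0] == nil {
		return nil
	}
	original := l[0].(map[string]interface{})
	out := map[string]interface{}{"namespace": original["namespace"]}
	if ref, _ := original["gke_cluster_reference"].([]interface{}); len(ref) > 0 && ref[0] != nil {
		out["gkeClusterReference"] = map[string]interface{}{
			"cluster": ref[0].(map[string]interface{})["cluster"],
		}
	}
	return out
}

// flattenConnectionInfo is the inverse: wrap API objects back into the
// one-element lists the schema expects, keyed in snake_case.
func flattenConnectionInfo(v map[string]interface{}) []interface{} {
	if len(v) == 0 {
		return nil
	}
	out := map[string]interface{}{"namespace": v["namespace"]}
	if ref, _ := v["gkeClusterReference"].(map[string]interface{}); len(ref) > 0 {
		out["gke_cluster_reference"] = []interface{}{map[string]interface{}{"cluster": ref["cluster"]}}
	}
	return []interface{}{out}
}

func main() {
	block := []interface{}{map[string]interface{}{
		"gke_cluster_reference": []interface{}{map[string]interface{}{"cluster": "locations/us-central1/clusters/gke-1"}},
		"namespace":             "agones-system",
	}}
	api := expandConnectionInfo(block)
	fmt.Println(api)
	fmt.Println(flattenConnectionInfo(api))
}
```

The generated code does the same thing field by field, with `IsEmptyValue` checks so empty values are never sent to the API.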
-// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "fmt" - "log" - "reflect" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func ResourceGameServicesGameServerConfig() *schema.Resource { - return &schema.Resource{ - Create: resourceGameServicesGameServerConfigCreate, - Read: resourceGameServicesGameServerConfigRead, - Delete: resourceGameServicesGameServerConfigDelete, - - Importer: &schema.ResourceImporter{ - State: resourceGameServicesGameServerConfigImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - Schema: map[string]*schema.Schema{ - "config_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `A unique id for the deployment config.`, - }, - "deployment_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, - Description: `A unique id for the deployment.`, - }, - "fleet_configs": { - Type: schema.TypeList, - Required: true, - ForceNew: true, - Description: `The fleet config contains list of fleet specs. In the Single Cloud, there -will be only one.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "fleet_spec": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `The fleet spec, which is sent to Agones to configure fleet. -The spec can be passed as inline json but it is recommended to use a file reference -instead. File references can contain the json or yaml format of the fleet spec. Eg: - -* fleet_spec = jsonencode(yamldecode(file("fleet_configs.yaml"))) -* fleet_spec = file("fleet_configs.json") - -The format of the spec can be found : -'https://agones.dev/site/docs/reference/fleet/'.`, - }, - "name": { - Type: schema.TypeString, - Computed: true, - Optional: true, - ForceNew: true, - Description: `The name of the FleetConfig.`, - }, - }, - }, - }, - "description": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The description of the game server config.`, - }, - "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `The labels associated with this game server config. Each label is a -key-value pair.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "location": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `Location of the Deployment.`, - Default: "global", - }, - "scaling_configs": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Description: `Optional. This contains the autoscaling settings.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "fleet_autoscaler_spec": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `Fleet autoscaler spec, which is sent to Agones. 
-Example spec can be found : -https://agones.dev/site/docs/reference/fleetautoscaler/`, - }, - "name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `The name of the ScalingConfig`, - }, - "schedules": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Description: `The schedules to which this scaling config applies.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "cron_job_duration": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The duration for the cron job event. The duration of the event is effective -after the cron job's start time. - -A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".`, - }, - "cron_spec": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The cron definition of the scheduled event. See -https://en.wikipedia.org/wiki/Cron. Cron spec specifies the local time as -defined by the realm.`, - }, - "end_time": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The end time of the event. - -A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, - }, - "start_time": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - Description: `The start time of the event. - -A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".`, - }, - }, - }, - }, - "selectors": { - Type: schema.TypeList, - Optional: true, - ForceNew: true, - Description: `Labels used to identify the clusters to which this scaling config -applies. A cluster is subject to this scaling config if its labels match -any of the selector entries.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `Set of labels to group by.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - }, - }, - }, - }, - }, - }, - "name": { - Type: schema.TypeString, - Computed: true, - Description: `The resource name of the game server config, in the form: - -'projects/{project_id}/locations/{location}/gameServerDeployments/{deployment_id}/configs/{config_id}'.`, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceGameServicesGameServerConfigCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - descriptionProp, err := expandGameServicesGameServerConfigDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - labelsProp, err := expandGameServicesGameServerConfigLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - fleetConfigsProp, err := expandGameServicesGameServerConfigFleetConfigs(d.Get("fleet_configs"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("fleet_configs"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(fleetConfigsProp)) && (ok || !reflect.DeepEqual(v, fleetConfigsProp)) { - obj["fleetConfigs"] = fleetConfigsProp - } - scalingConfigsProp, err := expandGameServicesGameServerConfigScalingConfigs(d.Get("scaling_configs"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("scaling_configs"); !tpgresource.IsEmptyValue(reflect.ValueOf(scalingConfigsProp)) && (ok || !reflect.DeepEqual(v, scalingConfigsProp)) { - obj["scalingConfigs"] = scalingConfigsProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs?configId={{config_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new GameServerConfig: %#v", obj) - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerConfig: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating GameServerConfig: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - // Use the resource in the operation response to populate - // identity fields and d.Id() before read - var opRes map[string]interface{} - err = GameServicesOperationWaitTimeWithResponse( - config, res, &opRes, project, "Creating GameServerConfig", userAgent, - d.Timeout(schema.TimeoutCreate)) - if err != nil { - // The resource didn't actually create - d.SetId("") - - return fmt.Errorf("Error waiting to create GameServerConfig: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerConfigName(opRes["name"], d, config)); err != nil { - return err - } - - // This may have caused the ID to update - update it if so. 
- id, err = tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating GameServerConfig %q: %#v", d.Id(), res) - - return resourceGameServicesGameServerConfigRead(d, meta) -} - -func resourceGameServicesGameServerConfigRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerConfig: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GameServicesGameServerConfig %q", d.Id())) - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerConfigName(res["name"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - if err := d.Set("description", flattenGameServicesGameServerConfigDescription(res["description"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - if err := d.Set("labels", flattenGameServicesGameServerConfigLabels(res["labels"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - if err := d.Set("fleet_configs", flattenGameServicesGameServerConfigFleetConfigs(res["fleetConfigs"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - if err := d.Set("scaling_configs", flattenGameServicesGameServerConfigScalingConfigs(res["scalingConfigs"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerConfig: %s", err) - } - - return nil -} - -func resourceGameServicesGameServerConfigDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerConfig: %s", err) - } - billingProject = project - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting GameServerConfig %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - 
billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GameServerConfig") - } - - err = GameServicesOperationWaitTime( - config, res, project, "Deleting GameServerConfig", userAgent, - d.Timeout(schema.TimeoutDelete)) - - if err != nil { - return err - } - - log.Printf("[DEBUG] Finished deleting GameServerConfig %q: %#v", d.Id(), res) - return nil -} - -func resourceGameServicesGameServerConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/gameServerDeployments/(?P[^/]+)/configs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}") - if err != nil { - return nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenGameServicesGameServerConfigName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigFleetConfigs(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "fleet_spec": flattenGameServicesGameServerConfigFleetConfigsFleetSpec(original["fleetSpec"], d, config), - "name": flattenGameServicesGameServerConfigFleetConfigsName(original["name"], d, config), - }) - } - return transformed -} -func flattenGameServicesGameServerConfigFleetConfigsFleetSpec(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigFleetConfigsName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigs(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "name": flattenGameServicesGameServerConfigScalingConfigsName(original["name"], d, config), - "fleet_autoscaler_spec": 
flattenGameServicesGameServerConfigScalingConfigsFleetAutoscalerSpec(original["fleetAutoscalerSpec"], d, config), - "selectors": flattenGameServicesGameServerConfigScalingConfigsSelectors(original["selectors"], d, config), - "schedules": flattenGameServicesGameServerConfigScalingConfigsSchedules(original["schedules"], d, config), - }) - } - return transformed -} -func flattenGameServicesGameServerConfigScalingConfigsName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsFleetAutoscalerSpec(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsSelectors(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "labels": flattenGameServicesGameServerConfigScalingConfigsSelectorsLabels(original["labels"], d, config), - }) - } - return transformed -} -func flattenGameServicesGameServerConfigScalingConfigsSelectorsLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsSchedules(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "start_time": flattenGameServicesGameServerConfigScalingConfigsSchedulesStartTime(original["startTime"], d, config), - "end_time": flattenGameServicesGameServerConfigScalingConfigsSchedulesEndTime(original["endTime"], d, config), - "cron_job_duration": flattenGameServicesGameServerConfigScalingConfigsSchedulesCronJobDuration(original["cronJobDuration"], d, config), - "cron_spec": flattenGameServicesGameServerConfigScalingConfigsSchedulesCronSpec(original["cronSpec"], d, config), - }) - } - return transformed -} -func flattenGameServicesGameServerConfigScalingConfigsSchedulesStartTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsSchedulesEndTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsSchedulesCronJobDuration(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerConfigScalingConfigsSchedulesCronSpec(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandGameServicesGameServerConfigDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return 
map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandGameServicesGameServerConfigFleetConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedFleetSpec, err := expandGameServicesGameServerConfigFleetConfigsFleetSpec(original["fleet_spec"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFleetSpec); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["fleetSpec"] = transformedFleetSpec - } - - transformedName, err := expandGameServicesGameServerConfigFleetConfigsName(original["name"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["name"] = transformedName - } - - req = append(req, transformed) - } - return req, nil -} - -func expandGameServicesGameServerConfigFleetConfigsFleetSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigFleetConfigsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigScalingConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedName, err := expandGameServicesGameServerConfigScalingConfigsName(original["name"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedName); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["name"] = transformedName - } - - transformedFleetAutoscalerSpec, err := expandGameServicesGameServerConfigScalingConfigsFleetAutoscalerSpec(original["fleet_autoscaler_spec"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedFleetAutoscalerSpec); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["fleetAutoscalerSpec"] = transformedFleetAutoscalerSpec - } - - transformedSelectors, err := expandGameServicesGameServerConfigScalingConfigsSelectors(original["selectors"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedSelectors); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["selectors"] = transformedSelectors - } - - transformedSchedules, err := expandGameServicesGameServerConfigScalingConfigsSchedules(original["schedules"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedSchedules); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["schedules"] = transformedSchedules - } - - req = append(req, transformed) - } - return req, nil -} - -func expandGameServicesGameServerConfigScalingConfigsName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func 
expandGameServicesGameServerConfigScalingConfigsFleetAutoscalerSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSelectors(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedLabels, err := expandGameServicesGameServerConfigScalingConfigsSelectorsLabels(original["labels"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedLabels); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["labels"] = transformedLabels - } - - req = append(req, transformed) - } - return req, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSelectorsLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSchedules(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedStartTime, err := expandGameServicesGameServerConfigScalingConfigsSchedulesStartTime(original["start_time"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedStartTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["startTime"] = transformedStartTime - } - - transformedEndTime, err := expandGameServicesGameServerConfigScalingConfigsSchedulesEndTime(original["end_time"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedEndTime); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["endTime"] = transformedEndTime - } - - transformedCronJobDuration, err := expandGameServicesGameServerConfigScalingConfigsSchedulesCronJobDuration(original["cron_job_duration"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedCronJobDuration); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["cronJobDuration"] = transformedCronJobDuration - } - - transformedCronSpec, err := expandGameServicesGameServerConfigScalingConfigsSchedulesCronSpec(original["cron_spec"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedCronSpec); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["cronSpec"] = transformedCronSpec - } - - req = append(req, transformed) - } - return req, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSchedulesStartTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSchedulesEndTime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func 
expandGameServicesGameServerConfigScalingConfigsSchedulesCronJobDuration(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerConfigScalingConfigsSchedulesCronSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_config_sweeper.go b/google/services/gameservices/resource_game_services_game_server_config_sweeper.go deleted file mode 100644 index e9964a4cb0f..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_config_sweeper.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("GameServicesGameServerConfig", testSweepGameServicesGameServerConfig) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepGameServicesGameServerConfig(region string) error { - resourceName := "GameServicesGameServerConfig" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["gameServerConfigs"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - 
return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. - nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - var name string - // Id detected in the delete URL, attempt to use id. - if obj["id"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) - } else if obj["name"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - } else { - log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) - return nil - } - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_deployment.go b/google/services/gameservices/resource_game_services_game_server_deployment.go deleted file mode 100644 index 5f328bc2aa7..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_deployment.go +++ /dev/null @@ -1,419 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
-// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "fmt" - "log" - "reflect" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func ResourceGameServicesGameServerDeployment() *schema.Resource { - return &schema.Resource{ - Create: resourceGameServicesGameServerDeploymentCreate, - Read: resourceGameServicesGameServerDeploymentRead, - Update: resourceGameServicesGameServerDeploymentUpdate, - Delete: resourceGameServicesGameServerDeploymentDelete, - - Importer: &schema.ResourceImporter{ - State: resourceGameServicesGameServerDeploymentImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - Schema: map[string]*schema.Schema{ - "deployment_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `A unique id for the deployment.`, - }, - "description": { - Type: schema.TypeString, - Optional: true, - Description: `Human readable description of the game server deployment.`, - }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels associated with this game server deployment. Each label is a -key-value pair.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "location": { - Type: schema.TypeString, - Optional: true, - Description: `Location of the Deployment.`, - Default: "global", - }, - "name": { - Type: schema.TypeString, - Computed: true, - Description: `The resource id of the game server deployment, eg: - -'projects/{project_id}/locations/{location}/gameServerDeployments/{deployment_id}'. 
-For example, - -'projects/my-project/locations/{location}/gameServerDeployments/my-deployment'.`, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceGameServicesGameServerDeploymentCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - descriptionProp, err := expandGameServicesGameServerDeploymentDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - labelsProp, err := expandGameServicesGameServerDeploymentLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments?deploymentId={{deployment_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new GameServerDeployment: %#v", obj) - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeployment: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating GameServerDeployment: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - // Use the resource in the operation response to populate - // identity fields and d.Id() before read - var opRes map[string]interface{} - err = GameServicesOperationWaitTimeWithResponse( - config, res, &opRes, project, "Creating GameServerDeployment", userAgent, - d.Timeout(schema.TimeoutCreate)) - if err != nil { - // The resource didn't actually create - d.SetId("") - - return fmt.Errorf("Error waiting to create GameServerDeployment: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerDeploymentName(opRes["name"], d, config)); err != nil { - return err - } - - // This may have caused the ID to update - update it if so. 
- id, err = tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating GameServerDeployment %q: %#v", d.Id(), res) - - return resourceGameServicesGameServerDeploymentRead(d, meta) -} - -func resourceGameServicesGameServerDeploymentRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeployment: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GameServicesGameServerDeployment %q", d.Id())) - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading GameServerDeployment: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerDeploymentName(res["name"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeployment: %s", err) - } - if err := d.Set("description", flattenGameServicesGameServerDeploymentDescription(res["description"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeployment: %s", err) - } - if err := d.Set("labels", flattenGameServicesGameServerDeploymentLabels(res["labels"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeployment: %s", err) - } - - return nil -} - -func resourceGameServicesGameServerDeploymentUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeployment: %s", err) - } - billingProject = project - - obj := make(map[string]interface{}) - descriptionProp, err := expandGameServicesGameServerDeploymentDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - labelsProp, err := expandGameServicesGameServerDeploymentLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - 
return err - } - - log.Printf("[DEBUG] Updating GameServerDeployment %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("description") { - updateMask = append(updateMask, "description") - } - - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutUpdate), - }) - - if err != nil { - return fmt.Errorf("Error updating GameServerDeployment %q: %s", d.Id(), err) - } else { - log.Printf("[DEBUG] Finished updating GameServerDeployment %q: %#v", d.Id(), res) - } - - err = GameServicesOperationWaitTime( - config, res, project, "Updating GameServerDeployment", userAgent, - d.Timeout(schema.TimeoutUpdate)) - - if err != nil { - return err - } - - return resourceGameServicesGameServerDeploymentRead(d, meta) -} - -func resourceGameServicesGameServerDeploymentDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeployment: %s", err) - } - billingProject = project - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting GameServerDeployment %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GameServerDeployment") - } - - err = GameServicesOperationWaitTime( - config, res, project, "Deleting GameServerDeployment", userAgent, - d.Timeout(schema.TimeoutDelete)) - - if err != nil { - return err - } - - log.Printf("[DEBUG] Finished deleting GameServerDeployment %q: %#v", d.Id(), res) - return nil -} - -func resourceGameServicesGameServerDeploymentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/gameServerDeployments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}") - if err != nil { - return 
nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenGameServicesGameServerDeploymentName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerDeploymentDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerDeploymentLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandGameServicesGameServerDeploymentDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerDeploymentLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_deployment_rollout.go b/google/services/gameservices/resource_game_services_game_server_deployment_rollout.go deleted file mode 100644 index 5e5ba271526..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_deployment_rollout.go +++ /dev/null @@ -1,453 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "fmt" - "log" - "reflect" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func ResourceGameServicesGameServerDeploymentRollout() *schema.Resource { - return &schema.Resource{ - Create: resourceGameServicesGameServerDeploymentRolloutCreate, - Read: resourceGameServicesGameServerDeploymentRolloutRead, - Update: resourceGameServicesGameServerDeploymentRolloutUpdate, - Delete: resourceGameServicesGameServerDeploymentRolloutDelete, - - Importer: &schema.ResourceImporter{ - State: resourceGameServicesGameServerDeploymentRolloutImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - Schema: map[string]*schema.Schema{ - "default_game_server_config": { - Type: schema.TypeString, - Required: true, - Description: `This field points to the game server config that is -applied by default to all realms and clusters. 
For example, - -'projects/my-project/locations/global/gameServerDeployments/my-game/configs/my-config'.`, - }, - "deployment_id": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, - Description: `The deployment to rollout the new config to. Only 1 rollout must be associated with each deployment.`, - }, - "game_server_config_overrides": { - Type: schema.TypeList, - Optional: true, - Description: `The game_server_config_overrides contains the per game server config -overrides. The overrides are processed in the order they are listed. As -soon as a match is found for a cluster, the rest of the list is not -processed.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "config_version": { - Type: schema.TypeString, - Optional: true, - Description: `Version of the configuration.`, - }, - "realms_selector": { - Type: schema.TypeList, - Optional: true, - Description: `Selection by realms.`, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "realms": { - Type: schema.TypeList, - Optional: true, - Description: `List of realms to match against.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, - }, - }, - }, - }, - }, - }, - "name": { - Type: schema.TypeString, - Computed: true, - Description: `The resource id of the game server deployment - -eg: 'projects/my-project/locations/global/gameServerDeployments/my-deployment/rollout'.`, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceGameServicesGameServerDeploymentRolloutCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Creating GameServerDeploymentRollout %q: ", d.Id()) - - err = resourceGameServicesGameServerDeploymentRolloutUpdate(d, meta) - if err != nil { - d.SetId("") - return fmt.Errorf("Error trying to create GameServerDeploymentRollout: %s", err) - } - - return nil -} - -func resourceGameServicesGameServerDeploymentRolloutRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeploymentRollout: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GameServicesGameServerDeploymentRollout %q", d.Id())) - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading 
GameServerDeploymentRollout: %s", err) - } - - if err := d.Set("name", flattenGameServicesGameServerDeploymentRolloutName(res["name"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeploymentRollout: %s", err) - } - if err := d.Set("default_game_server_config", flattenGameServicesGameServerDeploymentRolloutDefaultGameServerConfig(res["defaultGameServerConfig"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeploymentRollout: %s", err) - } - if err := d.Set("game_server_config_overrides", flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverrides(res["gameServerConfigOverrides"], d, config)); err != nil { - return fmt.Errorf("Error reading GameServerDeploymentRollout: %s", err) - } - - return nil -} - -func resourceGameServicesGameServerDeploymentRolloutUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeploymentRollout: %s", err) - } - billingProject = project - - obj := make(map[string]interface{}) - defaultGameServerConfigProp, err := expandGameServicesGameServerDeploymentRolloutDefaultGameServerConfig(d.Get("default_game_server_config"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("default_game_server_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, defaultGameServerConfigProp)) { - obj["defaultGameServerConfig"] = defaultGameServerConfigProp - } - gameServerConfigOverridesProp, err := expandGameServicesGameServerDeploymentRolloutGameServerConfigOverrides(d.Get("game_server_config_overrides"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("game_server_config_overrides"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, gameServerConfigOverridesProp)) { - obj["gameServerConfigOverrides"] = gameServerConfigOverridesProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") - if err != nil { - return err - } - - log.Printf("[DEBUG] Updating GameServerDeploymentRollout %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("default_game_server_config") { - updateMask = append(updateMask, "defaultGameServerConfig") - } - - if d.HasChange("game_server_config_overrides") { - updateMask = append(updateMask, "gameServerConfigOverrides") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutUpdate), - }) - - if err != nil { - return fmt.Errorf("Error updating GameServerDeploymentRollout %q: %s", d.Id(), err) - } else { - log.Printf("[DEBUG] Finished updating GameServerDeploymentRollout %q: 
%#v", d.Id(), res) - } - - err = GameServicesOperationWaitTime( - config, res, project, "Updating GameServerDeploymentRollout", userAgent, - d.Timeout(schema.TimeoutUpdate)) - - if err != nil { - return err - } - - return resourceGameServicesGameServerDeploymentRolloutRead(d, meta) -} - -func resourceGameServicesGameServerDeploymentRolloutDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for GameServerDeploymentRollout: %s", err) - } - billingProject = project - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout?updateMask=defaultGameServerConfig") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting GameServerDeploymentRollout %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GameServerDeploymentRollout") - } - - err = GameServicesOperationWaitTime( - config, res, project, "Deleting GameServerDeploymentRollout", userAgent, - d.Timeout(schema.TimeoutDelete)) - - if err != nil { - return err - } - - log.Printf("[DEBUG] Finished deleting GameServerDeploymentRollout %q: %#v", d.Id(), res) - return nil -} - -func resourceGameServicesGameServerDeploymentRolloutImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/gameServerDeployments/(?P[^/]+)/rollout", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout") - if err != nil { - return nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenGameServicesGameServerDeploymentRolloutName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerDeploymentRolloutDefaultGameServerConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverrides(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "realms_selector": 
flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelector(original["realmsSelector"], d, config), - "config_version": flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesConfigVersion(original["configVersion"], d, config), - }) - } - return transformed -} -func flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelector(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["realms"] = - flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelectorRealms(original["realms"], d, config) - return []interface{}{transformed} -} -func flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelectorRealms(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesGameServerDeploymentRolloutGameServerConfigOverridesConfigVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandGameServicesGameServerDeploymentRolloutDefaultGameServerConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerDeploymentRolloutGameServerConfigOverrides(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - req := make([]interface{}, 0, len(l)) - for _, raw := range l { - if raw == nil { - continue - } - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedRealmsSelector, err := expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelector(original["realms_selector"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedRealmsSelector); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["realmsSelector"] = transformedRealmsSelector - } - - transformedConfigVersion, err := expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesConfigVersion(original["config_version"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedConfigVersion); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["configVersion"] = transformedConfigVersion - } - - req = append(req, transformed) - } - return req, nil -} - -func expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelector(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - l := v.([]interface{}) - if len(l) == 0 || l[0] == nil { - return nil, nil - } - raw := l[0] - original := raw.(map[string]interface{}) - transformed := make(map[string]interface{}) - - transformedRealms, err := expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelectorRealms(original["realms"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedRealms); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["realms"] = transformedRealms - } - - return transformed, nil -} - -func expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesRealmsSelectorRealms(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) 
(interface{}, error) { - return v, nil -} - -func expandGameServicesGameServerDeploymentRolloutGameServerConfigOverridesConfigVersion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_deployment_rollout_sweeper.go b/google/services/gameservices/resource_game_services_game_server_deployment_rollout_sweeper.go deleted file mode 100644 index 623219561df..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_deployment_rollout_sweeper.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("GameServicesGameServerDeploymentRollout", testSweepGameServicesGameServerDeploymentRollout) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepGameServicesGameServerDeploymentRollout(region string) error { - resourceName := "GameServicesGameServerDeploymentRollout" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://gameservices.googleapis.com/v1/projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["gameServerDeploymentRollouts"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - 
- log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. - nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - var name string - // Id detected in the delete URL, attempt to use id. - if obj["id"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) - } else if obj["name"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - } else { - log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) - return nil - } - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://gameservices.googleapis.com/v1/projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout?updateMask=defaultGameServerConfig" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/gameservices/resource_game_services_game_server_deployment_sweeper.go b/google/services/gameservices/resource_game_services_game_server_deployment_sweeper.go deleted file mode 100644 index f311dfc4f9e..00000000000 --- a/google/services/gameservices/resource_game_services_game_server_deployment_sweeper.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
-// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("GameServicesGameServerDeployment", testSweepGameServicesGameServerDeployment) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepGameServicesGameServerDeployment(region string) error { - resourceName := "GameServicesGameServerDeployment" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/gameServerDeployments", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["gameServerDeployments"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. - nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - var name string - // Id detected in the delete URL, attempt to use id. 
- if obj["id"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) - } else if obj["name"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - } else { - log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) - return nil - } - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/gameservices/resource_game_services_realm.go b/google/services/gameservices/resource_game_services_realm.go deleted file mode 100644 index 6bdbde246c4..00000000000 --- a/google/services/gameservices/resource_game_services_realm.go +++ /dev/null @@ -1,461 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. -// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "fmt" - "log" - "reflect" - "strings" - "time" - - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func ResourceGameServicesRealm() *schema.Resource { - return &schema.Resource{ - Create: resourceGameServicesRealmCreate, - Read: resourceGameServicesRealmRead, - Update: resourceGameServicesRealmUpdate, - Delete: resourceGameServicesRealmDelete, - - Importer: &schema.ResourceImporter{ - State: resourceGameServicesRealmImport, - }, - - Timeouts: &schema.ResourceTimeout{ - Create: schema.DefaultTimeout(20 * time.Minute), - Update: schema.DefaultTimeout(20 * time.Minute), - Delete: schema.DefaultTimeout(20 * time.Minute), - }, - - Schema: map[string]*schema.Schema{ - "realm_id": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - Description: `GCP region of the Realm.`, - }, - "time_zone": { - Type: schema.TypeString, - Required: true, - Description: `Required. Time zone where all realm-specific policies are evaluated. 
The value of -this field must be from the IANA time zone database: -https://www.iana.org/time-zones.`, - }, - "description": { - Type: schema.TypeString, - Optional: true, - Description: `Human readable description of the realm.`, - }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels associated with this realm. Each label is a key-value pair.`, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "location": { - Type: schema.TypeString, - Optional: true, - Description: `Location of the Realm.`, - Default: "global", - }, - "etag": { - Type: schema.TypeString, - Computed: true, - Description: `ETag of the resource.`, - }, - "name": { - Type: schema.TypeString, - Computed: true, - Description: `The resource id of the realm, of the form: -'projects/{project_id}/locations/{location}/realms/{realm_id}'. For -example, 'projects/my-project/locations/{location}/realms/my-realm'.`, - }, - "project": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - }, - }, - UseJSONNumber: true, - } -} - -func resourceGameServicesRealmCreate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - obj := make(map[string]interface{}) - labelsProp, err := expandGameServicesRealmLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - timeZoneProp, err := expandGameServicesRealmTimeZone(d.Get("time_zone"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("time_zone"); !tpgresource.IsEmptyValue(reflect.ValueOf(timeZoneProp)) && (ok || !reflect.DeepEqual(v, timeZoneProp)) { - obj["timeZone"] = timeZoneProp - } - descriptionProp, err := expandGameServicesRealmDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms?realmId={{realm_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Creating new Realm: %#v", obj) - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for Realm: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutCreate), - }) - if err != nil { - return fmt.Errorf("Error creating Realm: %s", err) - } - - // Store the ID now - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - // Use the resource in the operation response to populate - // identity fields and d.Id() before read - var opRes 
map[string]interface{} - err = GameServicesOperationWaitTimeWithResponse( - config, res, &opRes, project, "Creating Realm", userAgent, - d.Timeout(schema.TimeoutCreate)) - if err != nil { - // The resource didn't actually create - d.SetId("") - - return fmt.Errorf("Error waiting to create Realm: %s", err) - } - - if err := d.Set("name", flattenGameServicesRealmName(opRes["name"], d, config)); err != nil { - return err - } - - // This may have caused the ID to update - update it if so. - id, err = tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - log.Printf("[DEBUG] Finished creating Realm %q: %#v", d.Id(), res) - - return resourceGameServicesRealmRead(d, meta) -} - -func resourceGameServicesRealmRead(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for Realm: %s", err) - } - billingProject = project - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("GameServicesRealm %q", d.Id())) - } - - if err := d.Set("project", project); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - - if err := d.Set("name", flattenGameServicesRealmName(res["name"], d, config)); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - if err := d.Set("labels", flattenGameServicesRealmLabels(res["labels"], d, config)); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - if err := d.Set("time_zone", flattenGameServicesRealmTimeZone(res["timeZone"], d, config)); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - if err := d.Set("etag", flattenGameServicesRealmEtag(res["etag"], d, config)); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - if err := d.Set("description", flattenGameServicesRealmDescription(res["description"], d, config)); err != nil { - return fmt.Errorf("Error reading Realm: %s", err) - } - - return nil -} - -func resourceGameServicesRealmUpdate(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for Realm: %s", err) - } - billingProject = project - - obj := make(map[string]interface{}) - labelsProp, err := expandGameServicesRealmLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - 
obj["labels"] = labelsProp - } - timeZoneProp, err := expandGameServicesRealmTimeZone(d.Get("time_zone"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("time_zone"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, timeZoneProp)) { - obj["timeZone"] = timeZoneProp - } - descriptionProp, err := expandGameServicesRealmDescription(d.Get("description"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { - obj["description"] = descriptionProp - } - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return err - } - - log.Printf("[DEBUG] Updating Realm %q: %#v", d.Id(), obj) - updateMask := []string{} - - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - - if d.HasChange("time_zone") { - updateMask = append(updateMask, "timeZone") - } - - if d.HasChange("description") { - updateMask = append(updateMask, "description") - } - // updateMask is a URL parameter but not present in the schema, so ReplaceVars - // won't set it - url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) - if err != nil { - return err - } - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "PATCH", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutUpdate), - }) - - if err != nil { - return fmt.Errorf("Error updating Realm %q: %s", d.Id(), err) - } else { - log.Printf("[DEBUG] Finished updating Realm %q: %#v", d.Id(), res) - } - - err = GameServicesOperationWaitTime( - config, res, project, "Updating Realm", userAgent, - d.Timeout(schema.TimeoutUpdate)) - - if err != nil { - return err - } - - return resourceGameServicesRealmRead(d, meta) -} - -func resourceGameServicesRealmDelete(d *schema.ResourceData, meta interface{}) error { - config := meta.(*transport_tpg.Config) - userAgent, err := tpgresource.GenerateUserAgentString(d, config.UserAgent) - if err != nil { - return err - } - - billingProject := "" - - project, err := tpgresource.GetProject(d, config) - if err != nil { - return fmt.Errorf("Error fetching project for Realm: %s", err) - } - billingProject = project - - url, err := tpgresource.ReplaceVars(d, config, "{{GameServicesBasePath}}projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return err - } - - var obj map[string]interface{} - log.Printf("[DEBUG] Deleting Realm %q", d.Id()) - - // err == nil indicates that the billing_project value was found - if bp, err := tpgresource.GetBillingProject(d, config); err == nil { - billingProject = bp - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: billingProject, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) - if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "Realm") - } - - err = GameServicesOperationWaitTime( - config, res, project, "Deleting Realm", userAgent, - d.Timeout(schema.TimeoutDelete)) - - if err != nil { - return err - 
} - - log.Printf("[DEBUG] Finished deleting Realm %q: %#v", d.Id(), res) - return nil -} - -func resourceGameServicesRealmImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - config := meta.(*transport_tpg.Config) - if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/realms/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - }, d, config); err != nil { - return nil, err - } - - // Replace import id for the resource id - id, err := tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/realms/{{realm_id}}") - if err != nil { - return nil, fmt.Errorf("Error constructing id: %s", err) - } - d.SetId(id) - - return []*schema.ResourceData{d}, nil -} - -func flattenGameServicesRealmName(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesRealmLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesRealmTimeZone(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesRealmEtag(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenGameServicesRealmDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func expandGameServicesRealmLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - -func expandGameServicesRealmTimeZone(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandGameServicesRealmDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/gameservices/resource_game_services_realm_sweeper.go b/google/services/gameservices/resource_game_services_realm_sweeper.go deleted file mode 100644 index 67470bc17e7..00000000000 --- a/google/services/gameservices/resource_game_services_realm_sweeper.go +++ /dev/null @@ -1,143 +0,0 @@ -// Copyright (c) HashiCorp, Inc. -// SPDX-License-Identifier: MPL-2.0 - -// ---------------------------------------------------------------------------- -// -// *** AUTO GENERATED CODE *** Type: MMv1 *** -// -// ---------------------------------------------------------------------------- -// -// This file is automatically generated by Magic Modules and manual -// changes will be clobbered when the file is regenerated. -// -// Please read more about how to change this file in -// .github/CONTRIBUTING.md. 
-// -// ---------------------------------------------------------------------------- - -package gameservices - -import ( - "context" - "log" - "strings" - "testing" - - "github.com/hashicorp/terraform-provider-google/google/envvar" - "github.com/hashicorp/terraform-provider-google/google/sweeper" - "github.com/hashicorp/terraform-provider-google/google/tpgresource" - transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" -) - -func init() { - sweeper.AddTestSweepers("GameServicesRealm", testSweepGameServicesRealm) -} - -// At the time of writing, the CI only passes us-central1 as the region -func testSweepGameServicesRealm(region string) error { - resourceName := "GameServicesRealm" - log.Printf("[INFO][SWEEPER_LOG] Starting sweeper for %s", resourceName) - - config, err := sweeper.SharedConfigForRegion(region) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error getting shared config for region: %s", err) - return err - } - - err = config.LoadAndValidate(context.Background()) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error loading: %s", err) - return err - } - - t := &testing.T{} - billingId := envvar.GetTestBillingAccountFromEnv(t) - - // Setup variables to replace in list template - d := &tpgresource.ResourceDataMock{ - FieldsInSchema: map[string]interface{}{ - "project": config.Project, - "region": region, - "location": region, - "zone": "-", - "billing_account": billingId, - }, - } - - listTemplate := strings.Split("https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/realms", "?")[0] - listUrl, err := tpgresource.ReplaceVars(d, config, listTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing sweeper list url: %s", err) - return nil - } - - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "GET", - Project: config.Project, - RawURL: listUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error in response from request %s: %s", listUrl, err) - return nil - } - - resourceList, ok := res["realms"] - if !ok { - log.Printf("[INFO][SWEEPER_LOG] Nothing found in response.") - return nil - } - - rl := resourceList.([]interface{}) - - log.Printf("[INFO][SWEEPER_LOG] Found %d items in %s list response.", len(rl), resourceName) - // Keep count of items that aren't sweepable for logging. - nonPrefixCount := 0 - for _, ri := range rl { - obj := ri.(map[string]interface{}) - var name string - // Id detected in the delete URL, attempt to use id. 
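The sweeper being deleted here follows the usual Magic Modules pattern: list every realm in the region, skip anything that does not look like a sweepable test resource, and send a best-effort DELETE for the rest (the loop continues just below). A minimal standalone sketch of that filter-then-delete step follows; the `isSweepableTestResource` helper and the `tf-test` prefix are illustrative assumptions, not the provider's exact rule.

```go
package main

import (
	"fmt"
	"strings"
)

// isSweepableTestResource stands in for the provider's sweeper helper; here
// anything with a test prefix is treated as sweepable (an assumption made for
// this sketch, not the provider's exact implementation).
func isSweepableTestResource(name string) bool {
	return strings.HasPrefix(name, "tf-test")
}

func main() {
	// Names as they might come back from the realms list call.
	realms := []string{"tf-test-realm-abc123", "prod-realm", "tf-test-realm-xyz789"}

	// Base delete URL after template expansion; the realm name is appended,
	// just as the sweeper does with deleteUrl = deleteUrl + name.
	deleteURL := "https://gameservices.googleapis.com/v1/projects/my-proj/locations/us-central1/realms/"

	nonSweepable := 0
	for _, name := range realms {
		if !isSweepableTestResource(name) {
			nonSweepable++
			continue
		}
		// The real sweeper issues a DELETE request here and only logs failures.
		fmt.Println("DELETE", deleteURL+name)
	}
	fmt.Printf("%d items were non-sweepable and skipped.\n", nonSweepable)
}
```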
- if obj["id"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["id"].(string)) - } else if obj["name"] != nil { - name = tpgresource.GetResourceNameFromSelfLink(obj["name"].(string)) - } else { - log.Printf("[INFO][SWEEPER_LOG] %s resource name and id were nil", resourceName) - return nil - } - // Skip resources that shouldn't be sweeped - if !sweeper.IsSweepableTestResource(name) { - nonPrefixCount++ - continue - } - - deleteTemplate := "https://gameservices.googleapis.com/v1/projects/{{project}}/locations/{{location}}/realms/{{realm_id}}" - deleteUrl, err := tpgresource.ReplaceVars(d, config, deleteTemplate) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] error preparing delete url: %s", err) - return nil - } - deleteUrl = deleteUrl + name - - // Don't wait on operations as we may have a lot to delete - _, err = transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "DELETE", - Project: config.Project, - RawURL: deleteUrl, - UserAgent: config.UserAgent, - }) - if err != nil { - log.Printf("[INFO][SWEEPER_LOG] Error deleting for url %s : %s", deleteUrl, err) - } else { - log.Printf("[INFO][SWEEPER_LOG] Sent delete request for %s resource: %s", resourceName, name) - } - } - - if nonPrefixCount > 0 { - log.Printf("[INFO][SWEEPER_LOG] %d items were non-sweepable and skipped.", nonPrefixCount) - } - - return nil -} diff --git a/google/services/gkebackup/iam_gke_backup_backup_plan_generated_test.go b/google/services/gkebackup/iam_gke_backup_backup_plan_generated_test.go index 17612e055bc..0cca7a57953 100644 --- a/google/services/gkebackup/iam_gke_backup_backup_plan_generated_test.go +++ b/google/services/gkebackup/iam_gke_backup_backup_plan_generated_test.go @@ -34,6 +34,8 @@ func TestAccGKEBackupBackupPlanIamBindingGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -70,6 +72,8 @@ func TestAccGKEBackupBackupPlanIamMemberGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -97,6 +101,8 @@ func TestAccGKEBackupBackupPlanIamPolicyGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -140,6 +146,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -177,6 +184,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -229,6 +237,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -268,6 +277,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -305,6 +315,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { diff --git 
a/google/services/gkebackup/iam_gke_backup_restore_plan_generated_test.go b/google/services/gkebackup/iam_gke_backup_restore_plan_generated_test.go index 34420b6e317..91e3e95a2fe 100644 --- a/google/services/gkebackup/iam_gke_backup_restore_plan_generated_test.go +++ b/google/services/gkebackup/iam_gke_backup_restore_plan_generated_test.go @@ -34,6 +34,8 @@ func TestAccGKEBackupRestorePlanIamBindingGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -70,6 +72,8 @@ func TestAccGKEBackupRestorePlanIamMemberGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -97,6 +101,8 @@ func TestAccGKEBackupRestorePlanIamPolicyGenerated(t *testing.T) { "random_suffix": acctest.RandString(t, 10), "role": "roles/viewer", "project": envvar.GetTestProjectFromEnv(), + + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -140,6 +146,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -193,6 +200,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -261,6 +269,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -316,6 +325,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -369,6 +379,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { diff --git a/google/services/gkebackup/resource_gke_backup_backup_plan.go b/google/services/gkebackup/resource_gke_backup_backup_plan.go index 62b4659220f..aed046c4573 100644 --- a/google/services/gkebackup/resource_gke_backup_backup_plan.go +++ b/google/services/gkebackup/resource_gke_backup_backup_plan.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceGKEBackupBackupPlan() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "cluster": { Type: schema.TypeString, @@ -203,7 +209,11 @@ from being created via this BackupPlan (including scheduled Backups).`, Optional: true, Description: `Description: A set of custom labels supplied by the user. A list of key->value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "retention_policy": { @@ -250,6 +260,12 @@ the locked field itself.`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -275,6 +291,13 @@ backupPlans.delete to ensure that their change will be applied to the same versi Computed: true, Description: `Detailed description of why BackupPlan is in its current state.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -323,12 +346,6 @@ func resourceGKEBackupBackupPlanCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("retention_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(retentionPolicyProp)) && (ok || !reflect.DeepEqual(v, retentionPolicyProp)) { obj["retentionPolicy"] = retentionPolicyProp } - labelsProp, err := expandGKEBackupBackupPlanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } backupScheduleProp, err := expandGKEBackupBackupPlanBackupSchedule(d.Get("backup_schedule"), d, config) if err != nil { return err @@ -347,6 +364,12 @@ func resourceGKEBackupBackupPlanCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("backup_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(backupConfigProp)) && (ok || !reflect.DeepEqual(v, backupConfigProp)) { obj["backupConfig"] = backupConfigProp } + labelsProp, err := expandGKEBackupBackupPlanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEBackupBasePath}}projects/{{project}}/locations/{{location}}/backupPlans?backupPlanId={{name}}") if err != nil { @@ -495,6 +518,12 @@ func resourceGKEBackupBackupPlanRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("state_reason", flattenGKEBackupBackupPlanStateReason(res["stateReason"], d, config)); err != nil { return fmt.Errorf("Error reading BackupPlan: %s", err) } + if err := d.Set("terraform_labels", flattenGKEBackupBackupPlanTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading BackupPlan: %s", err) + } + if err := d.Set("effective_labels", flattenGKEBackupBackupPlanEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading BackupPlan: %s", err) + } return nil } @@ -527,12 +556,6 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("retention_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, retentionPolicyProp)) { obj["retentionPolicy"] = retentionPolicyProp } 
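The `effective_labels` and `terraform_labels` fields introduced above make the plain `labels` field non-authoritative: the API may return extra labels (provider defaults, labels added by other clients), but only the keys present in the configuration are written back to `labels`, while `effective_labels` keeps the full server-side map. The sketch below is a self-contained illustration of that key-filtering idea with invented label values; it mirrors the intent of the generated flatten helpers rather than reproducing them.

```go
package main

import "fmt"

// filterToConfiguredKeys mirrors the idea behind the generated
// flatten...Labels helpers: from the full label map returned by the API,
// keep only the keys that appear in the user's configuration.
func filterToConfiguredKeys(apiLabels, configured map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	filtered := make(map[string]interface{})
	for k := range configured {
		filtered[k] = apiLabels[k]
	}
	return filtered
}

func main() {
	// Labels as the API might return them: the configured label plus a
	// provider default and a label added outside Terraform (values invented).
	apiLabels := map[string]interface{}{
		"some-key-1":  "some-value-1",
		"default-env": "ci",
		"added-by-ui": "true",
	}
	configured := map[string]interface{}{"some-key-1": "some-value-1"}

	fmt.Println("labels:          ", filterToConfiguredKeys(apiLabels, configured)) // only some-key-1
	fmt.Println("effective_labels:", apiLabels)                                     // everything on the resource
}
```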
- labelsProp, err := expandGKEBackupBackupPlanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } backupScheduleProp, err := expandGKEBackupBackupPlanBackupSchedule(d.Get("backup_schedule"), d, config) if err != nil { return err @@ -551,6 +574,12 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("backup_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, backupConfigProp)) { obj["backupConfig"] = backupConfigProp } + labelsProp, err := expandGKEBackupBackupPlanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEBackupBasePath}}projects/{{project}}/locations/{{location}}/backupPlans/{{name}}") if err != nil { @@ -568,10 +597,6 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) updateMask = append(updateMask, "retentionPolicy") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("backup_schedule") { updateMask = append(updateMask, "backupSchedule") } @@ -583,6 +608,10 @@ func resourceGKEBackupBackupPlanUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("backup_config") { updateMask = append(updateMask, "backupConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -678,9 +707,9 @@ func resourceGKEBackupBackupPlanDelete(d *schema.ResourceData, meta interface{}) func resourceGKEBackupBackupPlanImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/backupPlans/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/backupPlans/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -770,7 +799,18 @@ func flattenGKEBackupBackupPlanRetentionPolicyLocked(v interface{}, d *schema.Re } func flattenGKEBackupBackupPlanLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenGKEBackupBackupPlanBackupSchedule(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -938,6 +978,25 @@ func flattenGKEBackupBackupPlanStateReason(v interface{}, d *schema.ResourceData return v } +func flattenGKEBackupBackupPlanTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := 
d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEBackupBackupPlanEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandGKEBackupBackupPlanName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/backupPlans/{{name}}") } @@ -995,17 +1054,6 @@ func expandGKEBackupBackupPlanRetentionPolicyLocked(v interface{}, d tpgresource return v, nil } -func expandGKEBackupBackupPlanLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandGKEBackupBackupPlanBackupSchedule(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -1211,3 +1259,14 @@ func expandGKEBackupBackupPlanBackupConfigSelectedApplicationsNamespacedNamesNam func expandGKEBackupBackupPlanBackupConfigSelectedApplicationsNamespacedNamesName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandGKEBackupBackupPlanEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go b/google/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go index 206d333efb2..bb06c115688 100644 --- a/google/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go +++ b/google/services/gkebackup/resource_gke_backup_backup_plan_generated_test.go @@ -35,8 +35,9 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -51,7 +52,7 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanBasicExample(t *testing.T) { ResourceName: "google_gke_backup_backup_plan.basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -71,6 +72,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -90,7 +92,8 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanAutopilotExample(t *testing.T t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -105,7 +108,7 @@ func 
TestAccGKEBackupBackupPlan_gkebackupBackupplanAutopilotExample(t *testing.T ResourceName: "google_gke_backup_backup_plan.autopilot", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -127,6 +130,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "autopilot" { @@ -146,8 +150,9 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanCmekExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -162,7 +167,7 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanCmekExample(t *testing.T) { ResourceName: "google_gke_backup_backup_plan.cmek", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -182,6 +187,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "cmek" { @@ -216,8 +222,9 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanFullExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -232,7 +239,7 @@ func TestAccGKEBackupBackupPlan_gkebackupBackupplanFullExample(t *testing.T) { ResourceName: "google_gke_backup_backup_plan.full", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -252,6 +259,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "full" { diff --git a/google/services/gkebackup/resource_gke_backup_backup_plan_test.go b/google/services/gkebackup/resource_gke_backup_backup_plan_test.go index 50e46acf7c4..61f1c51cf83 100644 --- a/google/services/gkebackup/resource_gke_backup_backup_plan_test.go +++ b/google/services/gkebackup/resource_gke_backup_backup_plan_test.go @@ -28,17 +28,19 @@ func TestAccGKEBackupBackupPlan_update(t *testing.T) { Config: testAccGKEBackupBackupPlan_basic(context), }, { - ResourceName: "google_gke_backup_backup_plan.backupplan", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_gke_backup_backup_plan.backupplan", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccGKEBackupBackupPlan_full(context), }, { - ResourceName: "google_gke_backup_backup_plan.backupplan", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_gke_backup_backup_plan.backupplan", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -58,6 +60,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = 
false } resource "google_gke_backup_backup_plan" "backupplan" { @@ -69,6 +72,9 @@ resource "google_gke_backup_backup_plan" "backupplan" { include_secrets = false all_namespaces = true } + labels = { + "some-key-1": "some-value-1" + } } `, context) } @@ -87,6 +93,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = false } resource "google_gke_backup_backup_plan" "backupplan" { @@ -114,6 +121,9 @@ resource "google_gke_backup_backup_plan" "backupplan" { } } } + labels = { + "some-key-2": "some-value-2" + } } `, context) } diff --git a/google/services/gkebackup/resource_gke_backup_restore_plan.go b/google/services/gkebackup/resource_gke_backup_restore_plan.go index e2c33b5b12c..b0a29557803 100644 --- a/google/services/gkebackup/resource_gke_backup_restore_plan.go +++ b/google/services/gkebackup/resource_gke_backup_restore_plan.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceGKEBackupRestorePlan() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "backup_plan": { Type: schema.TypeString, @@ -404,9 +410,19 @@ for more information on each policy option. Possible values: ["RESTORE_VOLUME_DA Optional: true, Description: `Description: A set of custom labels supplied by the user. A list of key->value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "state": { Type: schema.TypeString, Computed: true, @@ -417,6 +433,13 @@ Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, Computed: true, Description: `Detailed description of why RestorePlan is in its current state.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -453,12 +476,6 @@ func resourceGKEBackupRestorePlanCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandGKEBackupRestorePlanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } backupPlanProp, err := expandGKEBackupRestorePlanBackupPlan(d.Get("backup_plan"), d, config) if err != nil { return err @@ -477,6 +494,12 @@ func resourceGKEBackupRestorePlanCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("restore_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(restoreConfigProp)) && (ok || !reflect.DeepEqual(v, restoreConfigProp)) { obj["restoreConfig"] = restoreConfigProp } + labelsProp, err := expandGKEBackupRestorePlanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEBackupBasePath}}projects/{{project}}/locations/{{location}}/restorePlans?restorePlanId={{name}}") if err != nil { @@ -613,6 +636,12 @@ func resourceGKEBackupRestorePlanRead(d *schema.ResourceData, meta interface{}) if err := d.Set("state_reason", flattenGKEBackupRestorePlanStateReason(res["stateReason"], d, config)); err != nil { return fmt.Errorf("Error reading RestorePlan: %s", err) } + if err := d.Set("terraform_labels", flattenGKEBackupRestorePlanTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading RestorePlan: %s", err) + } + if err := d.Set("effective_labels", flattenGKEBackupRestorePlanEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading RestorePlan: %s", err) + } return nil } @@ -639,18 +668,18 @@ func resourceGKEBackupRestorePlanUpdate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandGKEBackupRestorePlanLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, 
ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } restoreConfigProp, err := expandGKEBackupRestorePlanRestoreConfig(d.Get("restore_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("restore_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, restoreConfigProp)) { obj["restoreConfig"] = restoreConfigProp } + labelsProp, err := expandGKEBackupRestorePlanEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEBackupBasePath}}projects/{{project}}/locations/{{location}}/restorePlans/{{name}}") if err != nil { @@ -664,13 +693,13 @@ func resourceGKEBackupRestorePlanUpdate(d *schema.ResourceData, meta interface{} updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("restore_config") { updateMask = append(updateMask, "restoreConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -766,9 +795,9 @@ func resourceGKEBackupRestorePlanDelete(d *schema.ResourceData, meta interface{} func resourceGKEBackupRestorePlanImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/restorePlans/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/restorePlans/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -799,7 +828,18 @@ func flattenGKEBackupRestorePlanDescription(v interface{}, d *schema.ResourceDat } func flattenGKEBackupRestorePlanLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenGKEBackupRestorePlanBackupPlan(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1137,6 +1177,25 @@ func flattenGKEBackupRestorePlanStateReason(v interface{}, d *schema.ResourceDat return v } +func flattenGKEBackupRestorePlanTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEBackupRestorePlanEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandGKEBackupRestorePlanName(v interface{}, d tpgresource.TerraformResourceData, config 
*transport_tpg.Config) (interface{}, error) { return tpgresource.ReplaceVars(d, config, "projects/{{project}}/locations/{{location}}/restorePlans/{{name}}") } @@ -1145,17 +1204,6 @@ func expandGKEBackupRestorePlanDescription(v interface{}, d tpgresource.Terrafor return v, nil } -func expandGKEBackupRestorePlanLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandGKEBackupRestorePlanBackupPlan(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1666,3 +1714,14 @@ func expandGKEBackupRestorePlanRestoreConfigTransformationRulesFieldActionsPath( func expandGKEBackupRestorePlanRestoreConfigTransformationRulesFieldActionsValue(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandGKEBackupRestorePlanEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/gkebackup/resource_gke_backup_restore_plan_generated_test.go b/google/services/gkebackup/resource_gke_backup_restore_plan_generated_test.go index bbc989ffe14..646f790b841 100644 --- a/google/services/gkebackup/resource_gke_backup_restore_plan_generated_test.go +++ b/google/services/gkebackup/resource_gke_backup_restore_plan_generated_test.go @@ -35,8 +35,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanAllNamespacesExample(t *tes t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -51,7 +52,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanAllNamespacesExample(t *tes ResourceName: "google_gke_backup_restore_plan.all_ns", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -71,6 +72,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -106,8 +108,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanRollbackNamespaceExample(t t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -122,7 +125,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanRollbackNamespaceExample(t ResourceName: "google_gke_backup_restore_plan.rollback_ns", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -142,6 +145,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + 
deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -186,8 +190,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanProtectedApplicationExample t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -202,7 +207,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanProtectedApplicationExample ResourceName: "google_gke_backup_restore_plan.rollback_app", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -222,6 +227,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -261,8 +267,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanAllClusterResourcesExample( t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -277,7 +284,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanAllClusterResourcesExample( ResourceName: "google_gke_backup_restore_plan.all_cluster_resources", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -297,6 +304,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -331,8 +339,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanRenameNamespaceExample(t *t t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -347,7 +356,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanRenameNamespaceExample(t *t ResourceName: "google_gke_backup_restore_plan.rename_ns", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -367,6 +376,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { @@ -428,8 +438,9 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanSecondTransformationExample t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -444,7 +455,7 @@ func TestAccGKEBackupRestorePlan_gkebackupRestoreplanSecondTransformationExample ResourceName: "google_gke_backup_restore_plan.transform_rule", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"location"}, + ImportStateVerifyIgnore: []string{"location", "labels", "terraform_labels"}, }, }, }) @@ -464,6 +475,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "%{deletion_protection}" } resource "google_gke_backup_backup_plan" "basic" { diff --git a/google/services/gkehub/iam_gke_hub_membership_generated_test.go b/google/services/gkehub/iam_gke_hub_membership_generated_test.go index 3508404972f..67309cf4eb7 100644 --- a/google/services/gkehub/iam_gke_hub_membership_generated_test.go +++ b/google/services/gkehub/iam_gke_hub_membership_generated_test.go @@ -31,8 +31,9 @@ func TestAccGKEHubMembershipIamBindingGenerated(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", + "random_suffix": acctest.RandString(t, 10), + "role": "roles/viewer", + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -66,8 +67,9 @@ func TestAccGKEHubMembershipIamMemberGenerated(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", + "random_suffix": acctest.RandString(t, 10), + "role": "roles/viewer", + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -92,8 +94,9 @@ func TestAccGKEHubMembershipIamPolicyGenerated(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), - "role": "roles/viewer", + "random_suffix": acctest.RandString(t, 10), + "role": "roles/viewer", + "deletion_protection": false, } acctest.VcrTest(t, resource.TestCase{ @@ -129,6 +132,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -138,6 +142,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } resource "google_gke_hub_membership_iam_member" "foo" { @@ -155,6 +163,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -164,6 +173,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } data "google_iam_policy" "foo" { @@ -195,6 +208,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -204,6 +218,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } data "google_iam_policy" "foo" { @@ -223,6 +241,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -232,6 +251,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = 
"//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } resource "google_gke_hub_membership_iam_binding" "foo" { @@ -249,6 +272,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -258,6 +282,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } resource "google_gke_hub_membership_iam_binding" "foo" { diff --git a/google/services/gkehub/resource_gke_hub_feature_membership.go b/google/services/gkehub/resource_gke_hub_feature_membership.go index 51a7ac2b484..c0c43cd9b5d 100644 --- a/google/services/gkehub/resource_gke_hub_feature_membership.go +++ b/google/services/gkehub/resource_gke_hub_feature_membership.go @@ -932,6 +932,7 @@ func flattenGkeHubFeatureMembershipMesh(obj *gkehub.FeatureMembershipMesh) inter return []interface{}{transformed} } + func flattenGkeHubFeatureMembershipConfigmanagementPolicyControllerMonitoringBackendsArray(obj []gkehub.FeatureMembershipConfigmanagementPolicyControllerMonitoringBackendsEnum) interface{} { if obj == nil { return nil diff --git a/google/services/gkehub/resource_gke_hub_feature_membership_test.go b/google/services/gkehub/resource_gke_hub_feature_membership_test.go index a73bcb6d6dc..33eacbf09b1 100644 --- a/google/services/gkehub/resource_gke_hub_feature_membership_test.go +++ b/google/services/gkehub/resource_gke_hub_feature_membership_test.go @@ -388,6 +388,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -450,6 +451,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -513,6 +515,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub, google_project_service.acm] } @@ -809,6 +812,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.container, google_project_service.gkehub] } @@ -858,6 +862,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.container, google_project_service.gkehub] } @@ -906,6 +911,7 @@ resource "google_container_cluster" "primary" { name = "tf-test-cl%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false depends_on = [google_project_service.container, google_project_service.gkehub] } @@ -954,6 +960,7 @@ resource "google_container_cluster" "primary" { 
location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -962,6 +969,7 @@ resource "google_container_cluster" "secondary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -970,6 +978,7 @@ resource "google_container_cluster" "tertiary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -979,6 +988,7 @@ resource "google_container_cluster" "quarternary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -1040,6 +1050,7 @@ resource "google_container_cluster" "container_acmoci" { initial_node_count = 1 network = google_compute_network.testnetwork.self_link project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.container, google_project_service.container, google_project_service.gkehub] } diff --git a/google/services/gkehub/resource_gke_hub_membership.go b/google/services/gkehub/resource_gke_hub_membership.go index 21dc24805f4..9ebb4682dae 100644 --- a/google/services/gkehub/resource_gke_hub_membership.go +++ b/google/services/gkehub/resource_gke_hub_membership.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -57,6 +58,11 @@ func ResourceGKEHubMembership() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "membership_id": { Type: schema.TypeString, @@ -117,9 +123,19 @@ this can be '"//container.googleapis.com/${google_container_cluster.my-cluster.i }, }, "labels": { + Type: schema.TypeMap, + Optional: true, + Description: `Labels to apply to this membership. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: `Labels to apply to this membership.`, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { @@ -127,6 +143,13 @@ this can be '"//container.googleapis.com/${google_container_cluster.my-cluster.i Computed: true, Description: `The unique identifier of the membership.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -146,12 +169,6 @@ func resourceGKEHubMembershipCreate(d *schema.ResourceData, meta interface{}) er } obj := make(map[string]interface{}) - labelsProp, err := expandGKEHubMembershipLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } endpointProp, err := expandGKEHubMembershipEndpoint(d.Get("endpoint"), d, config) if err != nil { return err @@ -164,6 +181,12 @@ func resourceGKEHubMembershipCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("authority"); !tpgresource.IsEmptyValue(reflect.ValueOf(authorityProp)) && (ok || !reflect.DeepEqual(v, authorityProp)) { obj["authority"] = authorityProp } + labelsProp, err := expandGKEHubMembershipEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEHubBasePath}}projects/{{project}}/locations/global/memberships?membershipId={{membership_id}}") if err != nil { @@ -285,6 +308,12 @@ func resourceGKEHubMembershipRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("authority", flattenGKEHubMembershipAuthority(res["authority"], d, config)); err != nil { return fmt.Errorf("Error reading Membership: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHubMembershipTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Membership: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHubMembershipEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Membership: %s", err) + } return nil } @@ -305,18 +334,18 @@ func resourceGKEHubMembershipUpdate(d *schema.ResourceData, meta interface{}) er billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandGKEHubMembershipLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } authorityProp, err := expandGKEHubMembershipAuthority(d.Get("authority"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("authority"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, authorityProp)) { obj["authority"] = authorityProp } + labelsProp, err := expandGKEHubMembershipEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEHubBasePath}}projects/{{project}}/locations/global/memberships/{{membership_id}}") if err != nil { @@ -326,13 +355,13 @@ func resourceGKEHubMembershipUpdate(d *schema.ResourceData, meta interface{}) er log.Printf("[DEBUG] Updating Membership %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("authority") { updateMask = append(updateMask, "authority") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -428,9 +457,9 @@ func resourceGKEHubMembershipDelete(d *schema.ResourceData, meta interface{}) er func resourceGKEHubMembershipImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/memberships/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/memberships/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -450,7 +479,18 @@ func flattenGKEHubMembershipName(v interface{}, d *schema.ResourceData, config * } func flattenGKEHubMembershipLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenGKEHubMembershipEndpoint(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -500,15 +540,23 @@ func flattenGKEHubMembershipAuthorityIssuer(v interface{}, d *schema.ResourceDat return v } -func expandGKEHubMembershipLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenGKEHubMembershipTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenGKEHubMembershipEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandGKEHubMembershipEndpoint(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -580,3 +628,14 @@ func expandGKEHubMembershipAuthority(v interface{}, d 
tpgresource.TerraformResou func expandGKEHubMembershipAuthorityIssuer(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandGKEHubMembershipEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/gkehub/resource_gke_hub_membership_generated_test.go b/google/services/gkehub/resource_gke_hub_membership_generated_test.go index 1c53166e1fd..7174bb05a87 100644 --- a/google/services/gkehub/resource_gke_hub_membership_generated_test.go +++ b/google/services/gkehub/resource_gke_hub_membership_generated_test.go @@ -35,7 +35,8 @@ func TestAccGKEHubMembership_gkehubMembershipBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "random_suffix": acctest.RandString(t, 10), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -50,7 +51,7 @@ func TestAccGKEHubMembership_gkehubMembershipBasicExample(t *testing.T) { ResourceName: "google_gke_hub_membership.membership", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"membership_id"}, + ImportStateVerifyIgnore: []string{"membership_id", "labels", "terraform_labels"}, }, }, }) @@ -62,6 +63,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { @@ -71,6 +73,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } `, context) } @@ -79,8 +85,9 @@ func TestAccGKEHubMembership_gkehubMembershipIssuerExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -95,7 +102,7 @@ func TestAccGKEHubMembership_gkehubMembershipIssuerExample(t *testing.T) { ResourceName: "google_gke_hub_membership.membership", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"membership_id"}, + ImportStateVerifyIgnore: []string{"membership_id", "labels", "terraform_labels"}, }, }, }) @@ -110,6 +117,7 @@ resource "google_container_cluster" "primary" { workload_identity_config { workload_pool = "%{project}.svc.id.goog" } + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "membership" { diff --git a/google/services/gkehub2/resource_gke_hub_feature.go b/google/services/gkehub2/resource_gke_hub_feature.go index 92816999fcb..c74810545f1 100644 --- a/google/services/gkehub2/resource_gke_hub_feature.go +++ b/google/services/gkehub2/resource_gke_hub_feature.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceGKEHub2Feature() *schema.Resource { Delete: 
schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -56,10 +62,13 @@ func ResourceGKEHub2Feature() *schema.Resource { Description: `The location for the resource`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `GCP labels for this Feature.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `GCP labels for this Feature. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { Type: schema.TypeString, @@ -155,6 +164,12 @@ func ResourceGKEHub2Feature() *schema.Resource { Computed: true, Description: `Output only. When the Feature resource was deleted.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "resource_state": { Type: schema.TypeList, Computed: true, @@ -207,6 +222,13 @@ func ResourceGKEHub2Feature() *schema.Resource { }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -231,18 +253,18 @@ func resourceGKEHub2FeatureCreate(d *schema.ResourceData, meta interface{}) erro } obj := make(map[string]interface{}) - labelsProp, err := expandGKEHub2FeatureLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } specProp, err := expandGKEHub2FeatureSpec(d.Get("spec"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("spec"); !tpgresource.IsEmptyValue(reflect.ValueOf(specProp)) && (ok || !reflect.DeepEqual(v, specProp)) { obj["spec"] = specProp } + labelsProp, err := expandGKEHub2FeatureEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEHub2BasePath}}projects/{{project}}/locations/{{location}}/features?featureId={{name}}") if err != nil { @@ -373,6 +395,12 @@ func resourceGKEHub2FeatureRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("delete_time", flattenGKEHub2FeatureDeleteTime(res["deleteTime"], d, config)); err != nil { return fmt.Errorf("Error reading Feature: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHub2FeatureTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Feature: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHub2FeatureEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Feature: %s", err) + } return nil } @@ -393,18 
+421,18 @@ func resourceGKEHub2FeatureUpdate(d *schema.ResourceData, meta interface{}) erro billingProject = strings.TrimPrefix(project, "projects/") obj := make(map[string]interface{}) - labelsProp, err := expandGKEHub2FeatureLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } specProp, err := expandGKEHub2FeatureSpec(d.Get("spec"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("spec"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, specProp)) { obj["spec"] = specProp } + labelsProp, err := expandGKEHub2FeatureEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{GKEHub2BasePath}}projects/{{project}}/locations/{{location}}/features/{{name}}") if err != nil { @@ -415,13 +443,13 @@ func resourceGKEHub2FeatureUpdate(d *schema.ResourceData, meta interface{}) erro log.Printf("[DEBUG] Updating Feature %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("spec") { updateMask = append(updateMask, "spec") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -518,9 +546,9 @@ func resourceGKEHub2FeatureDelete(d *schema.ResourceData, meta interface{}) erro func resourceGKEHub2FeatureImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/features/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/features/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -537,7 +565,18 @@ func resourceGKEHub2FeatureImport(d *schema.ResourceData, meta interface{}) ([]* } func flattenGKEHub2FeatureLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenGKEHub2FeatureResourceState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -711,15 +750,23 @@ func flattenGKEHub2FeatureDeleteTime(v interface{}, d *schema.ResourceData, conf return v } -func expandGKEHub2FeatureLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenGKEHub2FeatureTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + 
transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenGKEHub2FeatureEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandGKEHub2FeatureSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -861,3 +908,14 @@ func expandGKEHub2FeatureSpecFleetobservabilityLoggingConfigFleetScopeLogsConfig func expandGKEHub2FeatureSpecFleetobservabilityLoggingConfigFleetScopeLogsConfigMode(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandGKEHub2FeatureEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/gkehub2/resource_gke_hub_feature_test.go b/google/services/gkehub2/resource_gke_hub_feature_test.go index 05920d6670a..341a529c213 100644 --- a/google/services/gkehub2/resource_gke_hub_feature_test.go +++ b/google/services/gkehub2/resource_gke_hub_feature_test.go @@ -173,7 +173,7 @@ func TestAccGKEHubFeature_gkehubFeatureMciUpdate(t *testing.T) { ResourceName: "google_gke_hub_feature.feature", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"update_time"}, + ImportStateVerifyIgnore: []string{"update_time", "labels", "terraform_labels"}, }, }, }) @@ -187,6 +187,7 @@ resource "google_container_cluster" "primary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -195,6 +196,7 @@ resource "google_container_cluster" "secondary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -238,6 +240,7 @@ resource "google_container_cluster" "primary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -246,6 +249,7 @@ resource "google_container_cluster" "secondary" { location = "us-central1-a" initial_node_count = 1 project = google_project.project.project_id + deletion_protection = false depends_on = [google_project_service.mci, google_project_service.container, google_project_service.container, google_project_service.gkehub] } @@ -308,15 +312,16 @@ func TestAccGKEHubFeature_gkehubFeatureMcsd(t *testing.T) { ResourceName: "google_gke_hub_feature.feature", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"project"}, + ImportStateVerifyIgnore: []string{"project", "labels", "terraform_labels"}, }, { Config: testAccGKEHubFeature_gkehubFeatureMcsdUpdate(context), }, { - ResourceName: "google_gke_hub_feature.feature", - 
ImportState: true, - ImportStateVerify: true, + ResourceName: "google_gke_hub_feature.feature", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/gkehub2/resource_gke_hub_membership_binding.go b/google/services/gkehub2/resource_gke_hub_membership_binding.go index 8c322199a88..a5a44189878 100644 --- a/google/services/gkehub2/resource_gke_hub_membership_binding.go +++ b/google/services/gkehub2/resource_gke_hub_membership_binding.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceGKEHub2MembershipBinding() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -74,10 +80,14 @@ func ResourceGKEHub2MembershipBinding() *schema.Resource { 'projects/*/locations/*/scopes/*'.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels for this Membership binding.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels for this Membership binding. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "create_time": { Type: schema.TypeString, @@ -89,6 +99,12 @@ func ResourceGKEHub2MembershipBinding() *schema.Resource { Computed: true, Description: `Time the MembershipBinding was deleted in UTC.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -108,6 +124,13 @@ func ResourceGKEHub2MembershipBinding() *schema.Resource { }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -143,10 +166,10 @@ func resourceGKEHub2MembershipBindingCreate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("scope"); !tpgresource.IsEmptyValue(reflect.ValueOf(scopeProp)) && (ok || !reflect.DeepEqual(v, scopeProp)) { obj["scope"] = scopeProp } - labelsProp, err := expandGKEHub2MembershipBindingLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2MembershipBindingEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -282,6 +305,12 @@ func resourceGKEHub2MembershipBindingRead(d *schema.ResourceData, 
meta interface if err := d.Set("labels", flattenGKEHub2MembershipBindingLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading MembershipBinding: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHub2MembershipBindingTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading MembershipBinding: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHub2MembershipBindingEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading MembershipBinding: %s", err) + } return nil } @@ -308,10 +337,10 @@ func resourceGKEHub2MembershipBindingUpdate(d *schema.ResourceData, meta interfa } else if v, ok := d.GetOkExists("scope"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, scopeProp)) { obj["scope"] = scopeProp } - labelsProp, err := expandGKEHub2MembershipBindingLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2MembershipBindingEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -327,7 +356,7 @@ func resourceGKEHub2MembershipBindingUpdate(d *schema.ResourceData, meta interfa updateMask = append(updateMask, "scope") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -425,9 +454,9 @@ func resourceGKEHub2MembershipBindingDelete(d *schema.ResourceData, meta interfa func resourceGKEHub2MembershipBindingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/memberships/(?P[^/]+)/bindings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/memberships/(?P[^/]+)/bindings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -487,6 +516,36 @@ func flattenGKEHub2MembershipBindingStateCode(v interface{}, d *schema.ResourceD } func flattenGKEHub2MembershipBindingLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2MembershipBindingTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2MembershipBindingEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -494,7 +553,7 @@ func expandGKEHub2MembershipBindingScope(v interface{}, d tpgresource.TerraformR return v, nil 
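The hunks above and below repeat one generated pattern across every resource touched by this PR: the API's label map is now written from `effective_labels` (expanded via the renamed `expand*EffectiveLabels` helpers), while the user-facing `labels` and `terraform_labels` fields are read back by filtering the API response down to the keys present in the corresponding state field. A minimal, self-contained sketch of that filtering step follows; the names (`filterLabels`, the sample maps) are hypothetical and not provider helpers, only an illustration of the logic the generated flatteners apply.

```go
// Sketch of the label-filtering pattern used by the generated flatten functions:
// the API returns every label on the resource, but only the keys the user
// configured are kept, so labels added by other clients no longer cause diffs.
package main

import "fmt"

// filterLabels keeps only the keys of `configured`, taking values from the API response.
func filterLabels(apiLabels, configured map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	transformed := make(map[string]interface{})
	for k := range configured {
		transformed[k] = apiLabels[k]
	}
	return transformed
}

func main() {
	api := map[string]interface{}{"env": "test", "goog-default": "x"} // full set from the API
	cfg := map[string]interface{}{"env": "test"}                      // what the user configured
	fmt.Println(filterLabels(api, cfg))                               // map[env:test]
}
```

The same filtering is applied twice per resource, once against `labels` and once against `terraform_labels`, while `effective_labels` stores the unfiltered API map.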
} -func expandGKEHub2MembershipBindingLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandGKEHub2MembershipBindingEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/gkehub2/resource_gke_hub_membership_binding_generated_test.go b/google/services/gkehub2/resource_gke_hub_membership_binding_generated_test.go index 502ca6d5ac4..17d28f27db9 100644 --- a/google/services/gkehub2/resource_gke_hub_membership_binding_generated_test.go +++ b/google/services/gkehub2/resource_gke_hub_membership_binding_generated_test.go @@ -35,9 +35,10 @@ func TestAccGKEHub2MembershipBinding_gkehubMembershipBindingBasicExample(t *test t.Parallel() context := map[string]interface{}{ - "project": envvar.GetTestProjectFromEnv(), - "location": envvar.GetTestRegionFromEnv(), - "random_suffix": acctest.RandString(t, 10), + "project": envvar.GetTestProjectFromEnv(), + "location": envvar.GetTestRegionFromEnv(), + "deletion_protection": false, + "random_suffix": acctest.RandString(t, 10), } acctest.VcrTest(t, resource.TestCase{ @@ -52,7 +53,7 @@ func TestAccGKEHub2MembershipBinding_gkehubMembershipBindingBasicExample(t *test ResourceName: "google_gke_hub_membership_binding.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location"}, + ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -64,6 +65,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "%{deletion_protection}" } resource "google_gke_hub_membership" "example" { diff --git a/google/services/gkehub2/resource_gke_hub_membership_binding_test.go b/google/services/gkehub2/resource_gke_hub_membership_binding_test.go index 26f53bccd48..0c793635f79 100644 --- a/google/services/gkehub2/resource_gke_hub_membership_binding_test.go +++ b/google/services/gkehub2/resource_gke_hub_membership_binding_test.go @@ -31,7 +31,7 @@ func TestAccGKEHub2MembershipBinding_gkehubMembershipBindingBasicExample_update( ResourceName: "google_gke_hub_membership_binding.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location"}, + ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location", "labels", "terraform_labels"}, }, { Config: testAccGKEHub2MembershipBinding_gkehubMembershipBindingBasicExample_update(context), @@ -40,7 +40,7 @@ func TestAccGKEHub2MembershipBinding_gkehubMembershipBindingBasicExample_update( ResourceName: "google_gke_hub_membership_binding.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location"}, + ImportStateVerifyIgnore: []string{"membership_binding_id", "scope", "membership_id", "location", "labels", "terraform_labels"}, }, }, }) @@ -52,6 +52,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_gke_hub_membership" "example" { @@ -93,6 +94,7 @@ resource "google_container_cluster" "primary" { name = 
"basiccluster%{random_suffix}" location = "us-central1-a" initial_node_count = 1 + deletion_protection = false } resource "google_gke_hub_membership" "example" { diff --git a/google/services/gkehub2/resource_gke_hub_namespace.go b/google/services/gkehub2/resource_gke_hub_namespace.go index c0b63776d63..052d87b22a3 100644 --- a/google/services/gkehub2/resource_gke_hub_namespace.go +++ b/google/services/gkehub2/resource_gke_hub_namespace.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceGKEHub2Namespace() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "scope": { Type: schema.TypeString, @@ -68,10 +74,14 @@ func ResourceGKEHub2Namespace() *schema.Resource { Description: `Id of the scope`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels for this Namespace.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels for this Namespace. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "namespace_labels": { Type: schema.TypeMap, @@ -93,6 +103,12 @@ a key. Keys and values must be Kubernetes-conformant.`, Computed: true, Description: `Time the Namespace was deleted in UTC.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -112,6 +128,13 @@ a key. 
Keys and values must be Kubernetes-conformant.`, }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -153,10 +176,10 @@ func resourceGKEHub2NamespaceCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("namespace_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(namespaceLabelsProp)) && (ok || !reflect.DeepEqual(v, namespaceLabelsProp)) { obj["namespaceLabels"] = namespaceLabelsProp } - labelsProp, err := expandGKEHub2NamespaceLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2NamespaceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -295,6 +318,12 @@ func resourceGKEHub2NamespaceRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("labels", flattenGKEHub2NamespaceLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Namespace: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHub2NamespaceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Namespace: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHub2NamespaceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Namespace: %s", err) + } return nil } @@ -321,10 +350,10 @@ func resourceGKEHub2NamespaceUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("namespace_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, namespaceLabelsProp)) { obj["namespaceLabels"] = namespaceLabelsProp } - labelsProp, err := expandGKEHub2NamespaceLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2NamespaceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -340,7 +369,7 @@ func resourceGKEHub2NamespaceUpdate(d *schema.ResourceData, meta interface{}) er updateMask = append(updateMask, "namespaceLabels") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -438,9 +467,9 @@ func resourceGKEHub2NamespaceDelete(d *schema.ResourceData, meta interface{}) er func resourceGKEHub2NamespaceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)/namespaces/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)/namespaces/(?P[^/]+)$", + 
"^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -504,6 +533,36 @@ func flattenGKEHub2NamespaceNamespaceLabels(v interface{}, d *schema.ResourceDat } func flattenGKEHub2NamespaceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2NamespaceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2NamespaceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -522,7 +581,7 @@ func expandGKEHub2NamespaceNamespaceLabels(v interface{}, d tpgresource.Terrafor return m, nil } -func expandGKEHub2NamespaceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandGKEHub2NamespaceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/gkehub2/resource_gke_hub_namespace_generated_test.go b/google/services/gkehub2/resource_gke_hub_namespace_generated_test.go index de281702c2a..b7c53d374f3 100644 --- a/google/services/gkehub2/resource_gke_hub_namespace_generated_test.go +++ b/google/services/gkehub2/resource_gke_hub_namespace_generated_test.go @@ -51,7 +51,7 @@ func TestAccGKEHub2Namespace_gkehubNamespaceBasicExample(t *testing.T) { ResourceName: "google_gke_hub_namespace.namespace", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope"}, + ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/gkehub2/resource_gke_hub_namespace_test.go b/google/services/gkehub2/resource_gke_hub_namespace_test.go index 89be6649202..bdc55957031 100644 --- a/google/services/gkehub2/resource_gke_hub_namespace_test.go +++ b/google/services/gkehub2/resource_gke_hub_namespace_test.go @@ -30,7 +30,7 @@ func TestAccGKEHub2Namespace_gkehubNamespaceBasicExample_update(t *testing.T) { ResourceName: "google_gke_hub_namespace.namespace", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope"}, + ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope", "labels", "terraform_labels"}, }, { Config: testAccGKEHub2Namespace_gkehubNamespaceBasicExample_update(context), @@ -39,7 +39,7 @@ func TestAccGKEHub2Namespace_gkehubNamespaceBasicExample_update(t *testing.T) { ResourceName: "google_gke_hub_namespace.namespace", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope"}, + ImportStateVerifyIgnore: []string{"scope_namespace_id", "scope", "scope_id", "scope", "labels", "terraform_labels"}, }, }, }) diff --git 
a/google/services/gkehub2/resource_gke_hub_scope.go b/google/services/gkehub2/resource_gke_hub_scope.go index 7f488210b4f..936d26c2e18 100644 --- a/google/services/gkehub2/resource_gke_hub_scope.go +++ b/google/services/gkehub2/resource_gke_hub_scope.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceGKEHub2Scope() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "scope_id": { Type: schema.TypeString, @@ -55,10 +61,14 @@ func ResourceGKEHub2Scope() *schema.Resource { Description: `The client-provided identifier of the scope.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels for this Scope.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels for this Scope. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "create_time": { Type: schema.TypeString, @@ -70,6 +80,12 @@ func ResourceGKEHub2Scope() *schema.Resource { Computed: true, Description: `Time the Scope was deleted in UTC.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -89,6 +105,13 @@ func ResourceGKEHub2Scope() *schema.Resource { }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -118,10 +141,10 @@ func resourceGKEHub2ScopeCreate(d *schema.ResourceData, meta interface{}) error } obj := make(map[string]interface{}) - labelsProp, err := expandGKEHub2ScopeLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2ScopeEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -254,6 +277,12 @@ func resourceGKEHub2ScopeRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("labels", flattenGKEHub2ScopeLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Scope: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHub2ScopeTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Scope: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHub2ScopeEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Scope: %s", err) + } return 
nil } @@ -274,10 +303,10 @@ func resourceGKEHub2ScopeUpdate(d *schema.ResourceData, meta interface{}) error billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandGKEHub2ScopeLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2ScopeEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -289,7 +318,7 @@ func resourceGKEHub2ScopeUpdate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Updating Scope %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -387,9 +416,9 @@ func resourceGKEHub2ScopeDelete(d *schema.ResourceData, meta interface{}) error func resourceGKEHub2ScopeImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -442,10 +471,40 @@ func flattenGKEHub2ScopeStateCode(v interface{}, d *schema.ResourceData, config } func flattenGKEHub2ScopeLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2ScopeTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2ScopeEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } -func expandGKEHub2ScopeLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandGKEHub2ScopeEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/gkehub2/resource_gke_hub_scope_generated_test.go b/google/services/gkehub2/resource_gke_hub_scope_generated_test.go index 3a14907cce8..d20ecb4d96d 100644 --- a/google/services/gkehub2/resource_gke_hub_scope_generated_test.go +++ b/google/services/gkehub2/resource_gke_hub_scope_generated_test.go @@ -51,7 +51,7 @@ func TestAccGKEHub2Scope_gkehubScopeBasicExample(t *testing.T) { ResourceName: "google_gke_hub_scope.scope", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_id"}, + ImportStateVerifyIgnore: []string{"scope_id", "labels", "terraform_labels"}, 
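The `ImportStateVerifyIgnore` additions in these tests follow from the split label fields: on import the provider only sees the API's full label map, so it can populate `effective_labels` exactly but cannot reconstruct which subset came from the configuration or from provider defaults, and the imported `labels`/`terraform_labels` values may legitimately differ from pre-import state. A sketch of such an import step, assuming the usual plugin-SDK `resource.TestStep` fields (the resource address and ignore list are just examples taken from the tests above):

```go
package gkehub2_test

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// importStep is an illustrative import-verification step: "labels" and
// "terraform_labels" are derived, client-side views of the API's label map and
// cannot be rebuilt byte-for-byte on import, so they are excluded from the
// comparison; "effective_labels" remains verifiable.
var importStep = resource.TestStep{
	ResourceName:            "google_gke_hub_scope.scope",
	ImportState:             true,
	ImportStateVerify:       true,
	ImportStateVerifyIgnore: []string{"scope_id", "labels", "terraform_labels"},
}
```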
}, }, }) diff --git a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go index 993de4b1415..31479c44875 100644 --- a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go +++ b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceGKEHub2ScopeRBACRoleBinding() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "role": { Type: schema.TypeList, @@ -86,10 +92,14 @@ group is the group, as seen by the kubernetes cluster.`, ExactlyOneOf: []string{"user", "group"}, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels for this ScopeRBACRoleBinding.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels for this ScopeRBACRoleBinding. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "user": { Type: schema.TypeString, @@ -110,6 +120,12 @@ user is the name of the user as seen by the kubernetes cluster, example Computed: true, Description: `Time the RBAC Role Binding was deleted in UTC.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -129,6 +145,13 @@ user is the name of the user as seen by the kubernetes cluster, example }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "uid": { Type: schema.TypeString, Computed: true, @@ -176,10 +199,10 @@ func resourceGKEHub2ScopeRBACRoleBindingCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("role"); !tpgresource.IsEmptyValue(reflect.ValueOf(roleProp)) && (ok || !reflect.DeepEqual(v, roleProp)) { obj["role"] = roleProp } - labelsProp, err := expandGKEHub2ScopeRBACRoleBindingLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2ScopeRBACRoleBindingEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -321,6 +344,12 @@ func resourceGKEHub2ScopeRBACRoleBindingRead(d *schema.ResourceData, meta interf if err := d.Set("labels", flattenGKEHub2ScopeRBACRoleBindingLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading 
ScopeRBACRoleBinding: %s", err) } + if err := d.Set("terraform_labels", flattenGKEHub2ScopeRBACRoleBindingTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ScopeRBACRoleBinding: %s", err) + } + if err := d.Set("effective_labels", flattenGKEHub2ScopeRBACRoleBindingEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ScopeRBACRoleBinding: %s", err) + } return nil } @@ -359,10 +388,10 @@ func resourceGKEHub2ScopeRBACRoleBindingUpdate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("role"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, roleProp)) { obj["role"] = roleProp } - labelsProp, err := expandGKEHub2ScopeRBACRoleBindingLabels(d.Get("labels"), d, config) + labelsProp, err := expandGKEHub2ScopeRBACRoleBindingEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -386,7 +415,7 @@ func resourceGKEHub2ScopeRBACRoleBindingUpdate(d *schema.ResourceData, meta inte updateMask = append(updateMask, "role") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -484,9 +513,9 @@ func resourceGKEHub2ScopeRBACRoleBindingDelete(d *schema.ResourceData, meta inte func resourceGKEHub2ScopeRBACRoleBindingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)/rbacrolebindings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/scopes/(?P[^/]+)/rbacrolebindings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -564,6 +593,36 @@ func flattenGKEHub2ScopeRBACRoleBindingRolePredefinedRole(v interface{}, d *sche } func flattenGKEHub2ScopeRBACRoleBindingLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2ScopeRBACRoleBindingTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenGKEHub2ScopeRBACRoleBindingEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -598,7 +657,7 @@ func expandGKEHub2ScopeRBACRoleBindingRolePredefinedRole(v interface{}, d tpgres return v, nil } -func expandGKEHub2ScopeRBACRoleBindingLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func 
expandGKEHub2ScopeRBACRoleBindingEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_generated_test.go b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_generated_test.go index b377f60e614..9fbd8142100 100644 --- a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_generated_test.go +++ b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_generated_test.go @@ -51,7 +51,7 @@ func TestAccGKEHub2ScopeRBACRoleBinding_gkehubScopeRbacRoleBindingBasicExample(t ResourceName: "google_gke_hub_scope_rbac_role_binding.scoperbacrolebinding", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id"}, + ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_test.go b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_test.go index fa4aa066601..5472fc5f638 100644 --- a/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_test.go +++ b/google/services/gkehub2/resource_gke_hub_scope_rbac_role_binding_test.go @@ -30,7 +30,7 @@ func TestAccGKEHub2ScopeRBACRoleBinding_gkehubScopeRbacRoleBindingBasicExample_u ResourceName: "google_gke_hub_scope_rbac_role_binding.scoperbacrolebinding", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id"}, + ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id", "labels", "terraform_labels"}, }, { Config: testAccGKEHub2ScopeRBACRoleBinding_gkehubScopeRbacRoleBindingBasicExample_update(context), @@ -39,7 +39,7 @@ func TestAccGKEHub2ScopeRBACRoleBinding_gkehubScopeRbacRoleBindingBasicExample_u ResourceName: "google_gke_hub_scope_rbac_role_binding.scoperbacrolebinding", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id"}, + ImportStateVerifyIgnore: []string{"scope_rbac_role_binding_id", "scope_id", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/gkehub2/resource_gke_hub_scope_test.go b/google/services/gkehub2/resource_gke_hub_scope_test.go index a2641b822d6..30dc3e4111e 100644 --- a/google/services/gkehub2/resource_gke_hub_scope_test.go +++ b/google/services/gkehub2/resource_gke_hub_scope_test.go @@ -30,7 +30,7 @@ func TestAccGKEHub2Scope_gkehubScopeBasicExample_update(t *testing.T) { ResourceName: "google_gke_hub_scope.scope", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_id"}, + ImportStateVerifyIgnore: []string{"scope_id", "labels", "terraform_labels"}, }, { Config: testAccGKEHub2Scope_gkehubScopeBasicExample_update(context), @@ -39,7 +39,7 @@ func TestAccGKEHub2Scope_gkehubScopeBasicExample_update(t *testing.T) { ResourceName: "google_gke_hub_scope.scope", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"scope_id"}, + ImportStateVerifyIgnore: []string{"scope_id", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_consent_store.go b/google/services/healthcare/resource_healthcare_consent_store.go index 8a976880023..14f4b5de70e 100644 --- a/google/services/healthcare/resource_healthcare_consent_store.go +++ 
b/google/services/healthcare/resource_healthcare_consent_store.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceHealthcareConsentStore() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "dataset": { Type: schema.TypeString, @@ -89,7 +94,24 @@ bytes, and must conform to the following PCRE regular expression: '[\p{Ll}\p{Lo} No more than 64 labels can be associated with a given store. An object containing a list of "key": value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, Elem: &schema.Schema{Type: schema.TypeString}, }, }, @@ -117,10 +139,10 @@ func resourceHealthcareConsentStoreCreate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("enable_consent_create_on_update"); !tpgresource.IsEmptyValue(reflect.ValueOf(enableConsentCreateOnUpdateProp)) && (ok || !reflect.DeepEqual(v, enableConsentCreateOnUpdateProp)) { obj["enableConsentCreateOnUpdate"] = enableConsentCreateOnUpdateProp } - labelsProp, err := expandHealthcareConsentStoreLabels(d.Get("labels"), d, config) + labelsProp, err := expandHealthcareConsentStoreEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -201,6 +223,12 @@ func resourceHealthcareConsentStoreRead(d *schema.ResourceData, meta interface{} if err := d.Set("labels", flattenHealthcareConsentStoreLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading ConsentStore: %s", err) } + if err := d.Set("terraform_labels", flattenHealthcareConsentStoreTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConsentStore: %s", err) + } + if err := d.Set("effective_labels", flattenHealthcareConsentStoreEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConsentStore: %s", err) + } return nil } @@ -227,10 +255,10 @@ func resourceHealthcareConsentStoreUpdate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("enable_consent_create_on_update"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
enableConsentCreateOnUpdateProp)) { obj["enableConsentCreateOnUpdate"] = enableConsentCreateOnUpdateProp } - labelsProp, err := expandHealthcareConsentStoreLabels(d.Get("labels"), d, config) + labelsProp, err := expandHealthcareConsentStoreEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -250,7 +278,7 @@ func resourceHealthcareConsentStoreUpdate(d *schema.ResourceData, meta interface updateMask = append(updateMask, "enableConsentCreateOnUpdate") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -326,7 +354,7 @@ func resourceHealthcareConsentStoreDelete(d *schema.ResourceData, meta interface func resourceHealthcareConsentStoreImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/consentStores/(?P[^/]+)", + "^(?P.+)/consentStores/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -350,6 +378,36 @@ func flattenHealthcareConsentStoreEnableConsentCreateOnUpdate(v interface{}, d * } func flattenHealthcareConsentStoreLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenHealthcareConsentStoreTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenHealthcareConsentStoreEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -361,7 +419,7 @@ func expandHealthcareConsentStoreEnableConsentCreateOnUpdate(v interface{}, d tp return v, nil } -func expandHealthcareConsentStoreLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandHealthcareConsentStoreEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/healthcare/resource_healthcare_consent_store_generated_test.go b/google/services/healthcare/resource_healthcare_consent_store_generated_test.go index 9214e5c302d..1c6f055bff9 100644 --- a/google/services/healthcare/resource_healthcare_consent_store_generated_test.go +++ b/google/services/healthcare/resource_healthcare_consent_store_generated_test.go @@ -49,7 +49,7 @@ func TestAccHealthcareConsentStore_healthcareConsentStoreBasicExample(t *testing ResourceName: "google_healthcare_consent_store.my-consent", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", 
"dataset"}, + ImportStateVerifyIgnore: []string{"name", "dataset", "labels", "terraform_labels"}, }, }, }) @@ -88,7 +88,7 @@ func TestAccHealthcareConsentStore_healthcareConsentStoreFullExample(t *testing. ResourceName: "google_healthcare_consent_store.my-consent", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "dataset"}, + ImportStateVerifyIgnore: []string{"name", "dataset", "labels", "terraform_labels"}, }, }, }) @@ -135,7 +135,7 @@ func TestAccHealthcareConsentStore_healthcareConsentStoreIamExample(t *testing.T ResourceName: "google_healthcare_consent_store.my-consent", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "dataset"}, + ImportStateVerifyIgnore: []string{"name", "dataset", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_dataset.go b/google/services/healthcare/resource_healthcare_dataset.go index 59e29853c30..0e49107261c 100644 --- a/google/services/healthcare/resource_healthcare_dataset.go +++ b/google/services/healthcare/resource_healthcare_dataset.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceHealthcareDataset() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -328,9 +333,9 @@ func resourceHealthcareDatasetDelete(d *schema.ResourceData, meta interface{}) e func resourceHealthcareDatasetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/datasets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/datasets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/healthcare/resource_healthcare_dicom_store.go b/google/services/healthcare/resource_healthcare_dicom_store.go index 74b4349a5db..56b117ab63a 100644 --- a/google/services/healthcare/resource_healthcare_dicom_store.go +++ b/google/services/healthcare/resource_healthcare_dicom_store.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceHealthcareDicomStore() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "dataset": { Type: schema.TypeString, @@ -78,7 +83,11 @@ bytes, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\ No more than 64 labels can be associated with a given store. An object containing a list of "key": value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "notification_config": { @@ -101,11 +110,24 @@ Cloud Pub/Sub topic. Not having adequate permissions will cause the calls that s }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "self_link": { Type: schema.TypeString, Computed: true, Description: `The fully qualified name of this dataset`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, UseJSONNumber: true, } @@ -125,18 +147,18 @@ func resourceHealthcareDicomStoreCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("name"); !tpgresource.IsEmptyValue(reflect.ValueOf(nameProp)) && (ok || !reflect.DeepEqual(v, nameProp)) { obj["name"] = nameProp } - labelsProp, err := expandHealthcareDicomStoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigProp, err := expandHealthcareDicomStoreNotificationConfig(d.Get("notification_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("notification_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(notificationConfigProp)) && (ok || !reflect.DeepEqual(v, notificationConfigProp)) { obj["notificationConfig"] = notificationConfigProp } + labelsProp, err := expandHealthcareDicomStoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/dicomStores?dicomStoreId={{name}}") if err != nil { @@ -227,6 +249,12 @@ func resourceHealthcareDicomStoreRead(d *schema.ResourceData, meta interface{}) if err := d.Set("notification_config", flattenHealthcareDicomStoreNotificationConfig(res["notificationConfig"], d, config)); err != nil { return fmt.Errorf("Error reading DicomStore: %s", err) } + if err := d.Set("terraform_labels", flattenHealthcareDicomStoreTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading DicomStore: %s", err) + } + if err := d.Set("effective_labels", flattenHealthcareDicomStoreEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading DicomStore: %s", err) + } return nil } @@ -241,18 +269,18 @@ func resourceHealthcareDicomStoreUpdate(d *schema.ResourceData, meta interface{} billingProject := "" obj := make(map[string]interface{}) - labelsProp, err := expandHealthcareDicomStoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigProp, err := 
expandHealthcareDicomStoreNotificationConfig(d.Get("notification_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("notification_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, notificationConfigProp)) { obj["notificationConfig"] = notificationConfigProp } + labelsProp, err := expandHealthcareDicomStoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/dicomStores/{{name}}") if err != nil { @@ -262,13 +290,13 @@ func resourceHealthcareDicomStoreUpdate(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Updating DicomStore %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("notification_config") { updateMask = append(updateMask, "notificationConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -363,7 +391,18 @@ func flattenHealthcareDicomStoreName(v interface{}, d *schema.ResourceData, conf } func flattenHealthcareDicomStoreLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenHealthcareDicomStoreNotificationConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -383,19 +422,27 @@ func flattenHealthcareDicomStoreNotificationConfigPubsubTopic(v interface{}, d * return v } -func expandHealthcareDicomStoreName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandHealthcareDicomStoreLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenHealthcareDicomStoreTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenHealthcareDicomStoreEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandHealthcareDicomStoreName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandHealthcareDicomStoreNotificationConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -421,6 +468,17 @@ func 
expandHealthcareDicomStoreNotificationConfigPubsubTopic(v interface{}, d tp return v, nil } +func expandHealthcareDicomStoreEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceHealthcareDicomStoreDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { // Take the returned long form of the name and use it as `self_link`. // Then modify the name to be the user specified form. diff --git a/google/services/healthcare/resource_healthcare_dicom_store_generated_test.go b/google/services/healthcare/resource_healthcare_dicom_store_generated_test.go index edea58b8af2..3adf6c21e75 100644 --- a/google/services/healthcare/resource_healthcare_dicom_store_generated_test.go +++ b/google/services/healthcare/resource_healthcare_dicom_store_generated_test.go @@ -49,7 +49,7 @@ func TestAccHealthcareDicomStore_healthcareDicomStoreBasicExample(t *testing.T) ResourceName: "google_healthcare_dicom_store.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_link", "dataset"}, + ImportStateVerifyIgnore: []string{"self_link", "dataset", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_dicom_store_test.go b/google/services/healthcare/resource_healthcare_dicom_store_test.go index e9f50be5c81..48bf372d7e9 100644 --- a/google/services/healthcare/resource_healthcare_dicom_store_test.go +++ b/google/services/healthcare/resource_healthcare_dicom_store_test.go @@ -93,9 +93,10 @@ func TestAccHealthcareDicomStore_basic(t *testing.T) { Config: testGoogleHealthcareDicomStore_basic(dicomStoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareDicomStore_update(dicomStoreName, datasetName, pubsubTopic), @@ -104,17 +105,19 @@ func TestAccHealthcareDicomStore_basic(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareDicomStore_basic(dicomStoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_fhir_store.go b/google/services/healthcare/resource_healthcare_fhir_store.go index 93c43b87eac..d6f2637e9c2 100644 --- a/google/services/healthcare/resource_healthcare_fhir_store.go +++ b/google/services/healthcare/resource_healthcare_fhir_store.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceHealthcareFhirStore() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: 
customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "dataset": { Type: schema.TypeString, @@ -147,7 +152,11 @@ bytes, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\ No more than 64 labels can be associated with a given store. An object containing a list of "key": value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "notification_config": { @@ -263,11 +272,24 @@ an empty list as an intent to stream all the supported resource types in this FH }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "self_link": { Type: schema.TypeString, Computed: true, Description: `The fully qualified name of this dataset`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, UseJSONNumber: true, } @@ -323,12 +345,6 @@ func resourceHealthcareFhirStoreCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_history_import"); !tpgresource.IsEmptyValue(reflect.ValueOf(enableHistoryImportProp)) && (ok || !reflect.DeepEqual(v, enableHistoryImportProp)) { obj["enableHistoryImport"] = enableHistoryImportProp } - labelsProp, err := expandHealthcareFhirStoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigProp, err := expandHealthcareFhirStoreNotificationConfig(d.Get("notification_config"), d, config) if err != nil { return err @@ -347,6 +363,12 @@ func resourceHealthcareFhirStoreCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("default_search_handling_strict"); !tpgresource.IsEmptyValue(reflect.ValueOf(defaultSearchHandlingStrictProp)) && (ok || !reflect.DeepEqual(v, defaultSearchHandlingStrictProp)) { obj["defaultSearchHandlingStrict"] = defaultSearchHandlingStrictProp } + labelsProp, err := expandHealthcareFhirStoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/fhirStores?fhirStoreId={{name}}") if err != nil { @@ -461,6 +483,12 @@ func resourceHealthcareFhirStoreRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("default_search_handling_strict", flattenHealthcareFhirStoreDefaultSearchHandlingStrict(res["defaultSearchHandlingStrict"], d, config)); err != nil { return fmt.Errorf("Error reading FhirStore: %s", err) } + if err := d.Set("terraform_labels", 
flattenHealthcareFhirStoreTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FhirStore: %s", err) + } + if err := d.Set("effective_labels", flattenHealthcareFhirStoreEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FhirStore: %s", err) + } return nil } @@ -487,12 +515,6 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_update_create"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, enableUpdateCreateProp)) { obj["enableUpdateCreate"] = enableUpdateCreateProp } - labelsProp, err := expandHealthcareFhirStoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigProp, err := expandHealthcareFhirStoreNotificationConfig(d.Get("notification_config"), d, config) if err != nil { return err @@ -511,6 +533,12 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("default_search_handling_strict"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, defaultSearchHandlingStrictProp)) { obj["defaultSearchHandlingStrict"] = defaultSearchHandlingStrictProp } + labelsProp, err := expandHealthcareFhirStoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/fhirStores/{{name}}") if err != nil { @@ -528,10 +556,6 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) updateMask = append(updateMask, "enableUpdateCreate") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("notification_config") { updateMask = append(updateMask, "notificationConfig") } @@ -543,6 +567,10 @@ func resourceHealthcareFhirStoreUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("default_search_handling_strict") { updateMask = append(updateMask, "defaultSearchHandlingStrict") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -661,7 +689,18 @@ func flattenHealthcareFhirStoreEnableHistoryImport(v interface{}, d *schema.Reso } func flattenHealthcareFhirStoreLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenHealthcareFhirStoreNotificationConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -788,6 +827,25 @@ func flattenHealthcareFhirStoreDefaultSearchHandlingStrict(v interface{}, d *sch return v } +func flattenHealthcareFhirStoreTerraformLabels(v interface{}, d *schema.ResourceData, config 
*transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenHealthcareFhirStoreEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandHealthcareFhirStoreName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -816,17 +874,6 @@ func expandHealthcareFhirStoreEnableHistoryImport(v interface{}, d tpgresource.T return v, nil } -func expandHealthcareFhirStoreLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandHealthcareFhirStoreNotificationConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -992,6 +1039,17 @@ func expandHealthcareFhirStoreDefaultSearchHandlingStrict(v interface{}, d tpgre return v, nil } +func expandHealthcareFhirStoreEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceHealthcareFhirStoreDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { // Take the returned long form of the name and use it as `self_link`. // Then modify the name to be the user specified form. 
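The flatten functions added throughout these files for `labels` and `terraform_labels` all follow the same non-authoritative pattern: the API response carries every label on the resource, and the flattener keeps only the keys already tracked in the corresponding state field, while `effective_labels` passes the full API map through unchanged. Below is a minimal, self-contained sketch of that filtering step; the function name and the label values are hypothetical and only illustrate the pattern shown in the diff, they are not provider code.

```go
package main

import "fmt"

// filterToTrackedKeys mirrors the flatten*Labels / flatten*TerraformLabels
// pattern in this diff: given the full label map returned by the API and the
// keys currently tracked in state, keep only the tracked keys.
func filterToTrackedKeys(apiLabels, tracked map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	transformed := make(map[string]interface{})
	for k := range tracked {
		transformed[k] = apiLabels[k]
	}
	return transformed
}

func main() {
	// Hypothetical values for illustration only.
	apiLabels := map[string]interface{}{
		"env":        "prod",    // configured in Terraform
		"managed-by": "console", // added out-of-band by another client
	}
	configured := map[string]interface{}{"env": "prod"}

	fmt.Println(filterToTrackedKeys(apiLabels, configured)) // map[env:prod]
	// effective_labels, by contrast, would surface both keys unfiltered.
}
```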
diff --git a/google/services/healthcare/resource_healthcare_fhir_store_generated_test.go b/google/services/healthcare/resource_healthcare_fhir_store_generated_test.go index 11a9270d3bc..28a70767581 100644 --- a/google/services/healthcare/resource_healthcare_fhir_store_generated_test.go +++ b/google/services/healthcare/resource_healthcare_fhir_store_generated_test.go @@ -49,7 +49,7 @@ func TestAccHealthcareFhirStore_healthcareFhirStoreBasicExample(t *testing.T) { ResourceName: "google_healthcare_fhir_store.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_link", "dataset"}, + ImportStateVerifyIgnore: []string{"self_link", "dataset", "labels", "terraform_labels"}, }, }, }) @@ -109,7 +109,7 @@ func TestAccHealthcareFhirStore_healthcareFhirStoreStreamingConfigExample(t *tes ResourceName: "google_healthcare_fhir_store.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_link", "dataset"}, + ImportStateVerifyIgnore: []string{"self_link", "dataset", "labels", "terraform_labels"}, }, }, }) @@ -184,7 +184,7 @@ func TestAccHealthcareFhirStore_healthcareFhirStoreNotificationConfigExample(t * ResourceName: "google_healthcare_fhir_store.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_link", "dataset"}, + ImportStateVerifyIgnore: []string{"self_link", "dataset", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_fhir_store_test.go b/google/services/healthcare/resource_healthcare_fhir_store_test.go index 89db47adbd0..cd712f89ccf 100644 --- a/google/services/healthcare/resource_healthcare_fhir_store_test.go +++ b/google/services/healthcare/resource_healthcare_fhir_store_test.go @@ -93,9 +93,10 @@ func TestAccHealthcareFhirStore_basic(t *testing.T) { Config: testGoogleHealthcareFhirStore_basic(fhirStoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareFhirStore_update(fhirStoreName, datasetName, pubsubTopic), @@ -104,17 +105,19 @@ func TestAccHealthcareFhirStore_basic(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareFhirStore_basic(fhirStoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_hl7_v2_store.go b/google/services/healthcare/resource_healthcare_hl7_v2_store.go index a4162366349..00a1fb9b7eb 100644 --- a/google/services/healthcare/resource_healthcare_hl7_v2_store.go +++ b/google/services/healthcare/resource_healthcare_hl7_v2_store.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -51,6 +52,10 @@ func ResourceHealthcareHl7V2Store() *schema.Resource { 
Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "dataset": { Type: schema.TypeString, @@ -82,7 +87,11 @@ bytes, and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\ No more than 64 labels can be associated with a given store. An object containing a list of "key": value pairs. -Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "notification_config": { @@ -186,11 +195,24 @@ A base64-encoded string.`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "self_link": { Type: schema.TypeString, Computed: true, Description: `The fully qualified name of this dataset`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, UseJSONNumber: true, } @@ -216,12 +238,6 @@ func resourceHealthcareHl7V2StoreCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("parser_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(parserConfigProp)) && (ok || !reflect.DeepEqual(v, parserConfigProp)) { obj["parserConfig"] = parserConfigProp } - labelsProp, err := expandHealthcareHl7V2StoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigsProp, err := expandHealthcareHl7V2StoreNotificationConfigs(d.Get("notification_configs"), d, config) if err != nil { return err @@ -234,6 +250,12 @@ func resourceHealthcareHl7V2StoreCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("notification_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(notificationConfigProp)) && (ok || !reflect.DeepEqual(v, notificationConfigProp)) { obj["notificationConfig"] = notificationConfigProp } + labelsProp, err := expandHealthcareHl7V2StoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/hl7V2Stores?hl7V2StoreId={{name}}") if err != nil { @@ -330,6 +352,12 @@ func resourceHealthcareHl7V2StoreRead(d *schema.ResourceData, meta interface{}) if err := d.Set("notification_config", flattenHealthcareHl7V2StoreNotificationConfig(res["notificationConfig"], d, config)); err != nil { return fmt.Errorf("Error reading Hl7V2Store: %s", err) } + if err := d.Set("terraform_labels", flattenHealthcareHl7V2StoreTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Hl7V2Store: 
%s", err) + } + if err := d.Set("effective_labels", flattenHealthcareHl7V2StoreEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Hl7V2Store: %s", err) + } return nil } @@ -350,12 +378,6 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("parser_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, parserConfigProp)) { obj["parserConfig"] = parserConfigProp } - labelsProp, err := expandHealthcareHl7V2StoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } notificationConfigsProp, err := expandHealthcareHl7V2StoreNotificationConfigs(d.Get("notification_configs"), d, config) if err != nil { return err @@ -368,6 +390,12 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("notification_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, notificationConfigProp)) { obj["notificationConfig"] = notificationConfigProp } + labelsProp, err := expandHealthcareHl7V2StoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{HealthcareBasePath}}{{dataset}}/hl7V2Stores/{{name}}") if err != nil { @@ -383,10 +411,6 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} "parser_config.schema") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("notification_configs") { updateMask = append(updateMask, "notificationConfigs") } @@ -394,6 +418,10 @@ func resourceHealthcareHl7V2StoreUpdate(d *schema.ResourceData, meta interface{} if d.HasChange("notification_config") { updateMask = append(updateMask, "notificationConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -531,7 +559,18 @@ func flattenHealthcareHl7V2StoreParserConfigVersion(v interface{}, d *schema.Res } func flattenHealthcareHl7V2StoreLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenHealthcareHl7V2StoreNotificationConfigs(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -578,6 +617,25 @@ func flattenHealthcareHl7V2StoreNotificationConfigPubsubTopic(v interface{}, d * return v } +func flattenHealthcareHl7V2StoreTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = 
v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenHealthcareHl7V2StoreEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandHealthcareHl7V2StoreName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -646,17 +704,6 @@ func expandHealthcareHl7V2StoreParserConfigVersion(v interface{}, d tpgresource. return v, nil } -func expandHealthcareHl7V2StoreLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandHealthcareHl7V2StoreNotificationConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) req := make([]interface{}, 0, len(l)) @@ -717,6 +764,17 @@ func expandHealthcareHl7V2StoreNotificationConfigPubsubTopic(v interface{}, d tp return v, nil } +func expandHealthcareHl7V2StoreEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceHealthcareHl7V2StoreDecoder(d *schema.ResourceData, meta interface{}, res map[string]interface{}) (map[string]interface{}, error) { // Take the returned long form of the name and use it as `self_link`. // Then modify the name to be the user specified form. diff --git a/google/services/healthcare/resource_healthcare_hl7_v2_store_generated_test.go b/google/services/healthcare/resource_healthcare_hl7_v2_store_generated_test.go index a88bebb96c0..7e89baadeb1 100644 --- a/google/services/healthcare/resource_healthcare_hl7_v2_store_generated_test.go +++ b/google/services/healthcare/resource_healthcare_hl7_v2_store_generated_test.go @@ -49,7 +49,7 @@ func TestAccHealthcareHl7V2Store_healthcareHl7V2StoreBasicExample(t *testing.T) ResourceName: "google_healthcare_hl7_v2_store.store", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"self_link", "dataset"}, + ImportStateVerifyIgnore: []string{"self_link", "dataset", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/healthcare/resource_healthcare_hl7_v2_store_test.go b/google/services/healthcare/resource_healthcare_hl7_v2_store_test.go index eea06ce571a..512693df271 100644 --- a/google/services/healthcare/resource_healthcare_hl7_v2_store_test.go +++ b/google/services/healthcare/resource_healthcare_hl7_v2_store_test.go @@ -93,9 +93,10 @@ func TestAccHealthcareHl7V2Store_basic(t *testing.T) { Config: testGoogleHealthcareHl7V2Store_basic(hl7_v2StoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareHl7V2Store_update(hl7_v2StoreName, datasetName, pubsubTopic), @@ -104,17 +105,19 @@ func TestAccHealthcareHl7V2Store_basic(t *testing.T) { ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, 
+ ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleHealthcareHl7V2Store_basic(hl7_v2StoreName, datasetName), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/iam2/resource_iam_access_boundary_policy.go b/google/services/iam2/resource_iam_access_boundary_policy.go index 8d449a69401..cb5052341a1 100644 --- a/google/services/iam2/resource_iam_access_boundary_policy.go +++ b/google/services/iam2/resource_iam_access_boundary_policy.go @@ -380,7 +380,7 @@ func resourceIAM2AccessBoundaryPolicyDelete(d *schema.ResourceData, meta interfa func resourceIAM2AccessBoundaryPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P[^/]+)", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iam2/resource_iam_deny_policy.go b/google/services/iam2/resource_iam_deny_policy.go index 34968070e24..b745bdaa85f 100644 --- a/google/services/iam2/resource_iam_deny_policy.go +++ b/google/services/iam2/resource_iam_deny_policy.go @@ -403,7 +403,7 @@ func resourceIAM2DenyPolicyDelete(d *schema.ResourceData, meta interface{}) erro func resourceIAM2DenyPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P[^/]+)", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iambeta/resource_iam_workload_identity_pool.go b/google/services/iambeta/resource_iam_workload_identity_pool.go index 49cc0389220..ceed525c50c 100644 --- a/google/services/iambeta/resource_iam_workload_identity_pool.go +++ b/google/services/iambeta/resource_iam_workload_identity_pool.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -76,6 +77,10 @@ func ResourceIAMBetaWorkloadIdentityPool() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "workload_identity_pool_id": { Type: schema.TypeString, @@ -434,9 +439,9 @@ func resourceIAMBetaWorkloadIdentityPoolDelete(d *schema.ResourceData, meta inte func resourceIAMBetaWorkloadIdentityPoolImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/workloadIdentityPools/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/workloadIdentityPools/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iambeta/resource_iam_workload_identity_pool_provider.go b/google/services/iambeta/resource_iam_workload_identity_pool_provider.go index 5a2487aef1e..50979f88abc 100644 --- a/google/services/iambeta/resource_iam_workload_identity_pool_provider.go +++ b/google/services/iambeta/resource_iam_workload_identity_pool_provider.go @@ 
-25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -76,6 +77,10 @@ func ResourceIAMBetaWorkloadIdentityPoolProvider() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "workload_identity_pool_id": { Type: schema.TypeString, @@ -683,9 +688,9 @@ func resourceIAMBetaWorkloadIdentityPoolProviderDelete(d *schema.ResourceData, m func resourceIAMBetaWorkloadIdentityPoolProviderImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/workloadIdentityPools/(?P[^/]+)/providers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/workloadIdentityPools/(?P[^/]+)/providers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iamworkforcepool/resource_iam_workforce_pool.go b/google/services/iamworkforcepool/resource_iam_workforce_pool.go index 9bb3edc80a6..848b0cc8781 100644 --- a/google/services/iamworkforcepool/resource_iam_workforce_pool.go +++ b/google/services/iamworkforcepool/resource_iam_workforce_pool.go @@ -443,8 +443,8 @@ func resourceIAMWorkforcePoolWorkforcePoolDelete(d *schema.ResourceData, meta in func resourceIAMWorkforcePoolWorkforcePoolImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "locations/(?P[^/]+)/workforcePools/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^locations/(?P[^/]+)/workforcePools/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iamworkforcepool/resource_iam_workforce_pool_provider.go b/google/services/iamworkforcepool/resource_iam_workforce_pool_provider.go index 33ca036a830..c24c2006bfc 100644 --- a/google/services/iamworkforcepool/resource_iam_workforce_pool_provider.go +++ b/google/services/iamworkforcepool/resource_iam_workforce_pool_provider.go @@ -718,8 +718,8 @@ func resourceIAMWorkforcePoolWorkforcePoolProviderDelete(d *schema.ResourceData, func resourceIAMWorkforcePoolWorkforcePoolProviderImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "locations/(?P[^/]+)/workforcePools/(?P[^/]+)/providers/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^locations/(?P[^/]+)/workforcePools/(?P[^/]+)/providers/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/iap/data_source_iap_client.go b/google/services/iap/data_source_iap_client.go index 32ee9a0ced9..d81c692431a 100644 --- a/google/services/iap/data_source_iap_client.go +++ b/google/services/iap/data_source_iap_client.go @@ -29,5 +29,13 @@ func dataSourceGoogleIapClientRead(d *schema.ResourceData, meta interface{}) err return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceIapClientRead(d, meta) + err = resourceIapClientRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return 
fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/iap/resource_iap_brand.go b/google/services/iap/resource_iap_brand.go index 8d9bb3ccdff..e595ea6416a 100644 --- a/google/services/iap/resource_iap_brand.go +++ b/google/services/iap/resource_iap_brand.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -45,6 +46,10 @@ func ResourceIapBrand() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "application_title": { Type: schema.TypeString, diff --git a/google/services/identityplatform/resource_identity_platform_config.go b/google/services/identityplatform/resource_identity_platform_config.go index ccc97f9cd8f..4703923f911 100644 --- a/google/services/identityplatform/resource_identity_platform_config.go +++ b/google/services/identityplatform/resource_identity_platform_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "authorized_domains": { Type: schema.TypeList, @@ -515,9 +520,9 @@ func resourceIdentityPlatformConfigDelete(d *schema.ResourceData, meta interface func resourceIdentityPlatformConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/config", - "projects/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/config$", + "^projects/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_default_supported_idp_config.go b/google/services/identityplatform/resource_identity_platform_default_supported_idp_config.go index 8a69e1c8bc2..bd1a2f38dae 100644 --- a/google/services/identityplatform/resource_identity_platform_default_supported_idp_config.go +++ b/google/services/identityplatform/resource_identity_platform_default_supported_idp_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformDefaultSupportedIdpConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "client_id": { Type: schema.TypeString, @@ -368,9 +373,9 @@ func resourceIdentityPlatformDefaultSupportedIdpConfigDelete(d *schema.ResourceD func resourceIdentityPlatformDefaultSupportedIdpConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/defaultSupportedIdpConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - 
"(?P[^/]+)", + "^projects/(?P[^/]+)/defaultSupportedIdpConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_inbound_saml_config.go b/google/services/identityplatform/resource_identity_platform_inbound_saml_config.go index 77bdc38c96b..c71a2b3e08f 100644 --- a/google/services/identityplatform/resource_identity_platform_inbound_saml_config.go +++ b/google/services/identityplatform/resource_identity_platform_inbound_saml_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformInboundSamlConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -436,9 +441,9 @@ func resourceIdentityPlatformInboundSamlConfigDelete(d *schema.ResourceData, met func resourceIdentityPlatformInboundSamlConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/inboundSamlConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/inboundSamlConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_oauth_idp_config.go b/google/services/identityplatform/resource_identity_platform_oauth_idp_config.go index 4000aacd2d6..037676983d1 100644 --- a/google/services/identityplatform/resource_identity_platform_oauth_idp_config.go +++ b/google/services/identityplatform/resource_identity_platform_oauth_idp_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformOauthIdpConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "client_id": { Type: schema.TypeString, @@ -394,9 +399,9 @@ func resourceIdentityPlatformOauthIdpConfigDelete(d *schema.ResourceData, meta i func resourceIdentityPlatformOauthIdpConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/oauthIdpConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/oauthIdpConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_project_default_config.go b/google/services/identityplatform/resource_identity_platform_project_default_config.go index 71a926de19d..613d3668f2c 100644 --- a/google/services/identityplatform/resource_identity_platform_project_default_config.go +++ b/google/services/identityplatform/resource_identity_platform_project_default_config.go @@ -24,6 +24,7 @@ import ( "strings" 
"time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformProjectDefaultConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + DeprecationMessage: "Deprecated. Use the `google_identity_platform_config` resource instead. " + "It contains a more comprehensive list of fields, and was created before " + "`google_identity_platform_project_default_config` was added.", @@ -400,9 +405,9 @@ func resourceIdentityPlatformProjectDefaultConfigDelete(d *schema.ResourceData, func resourceIdentityPlatformProjectDefaultConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/config/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/config/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_tenant.go b/google/services/identityplatform/resource_identity_platform_tenant.go index 9dfe2abd1ab..9bb72d7aa50 100644 --- a/google/services/identityplatform/resource_identity_platform_tenant.go +++ b/google/services/identityplatform/resource_identity_platform_tenant.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformTenant() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -383,9 +388,9 @@ func resourceIdentityPlatformTenantDelete(d *schema.ResourceData, meta interface func resourceIdentityPlatformTenantImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/tenants/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/tenants/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go b/google/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go index d385b673c47..e4693cb3194 100644 --- a/google/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go +++ b/google/services/identityplatform/resource_identity_platform_tenant_default_supported_idp_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformTenantDefaultSupportedIdpConfig() *schema.Resource Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ 
"client_id": { Type: schema.TypeString, @@ -374,9 +379,9 @@ func resourceIdentityPlatformTenantDefaultSupportedIdpConfigDelete(d *schema.Res func resourceIdentityPlatformTenantDefaultSupportedIdpConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/tenants/(?P[^/]+)/defaultSupportedIdpConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/tenants/(?P[^/]+)/defaultSupportedIdpConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go b/google/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go index f71e1a0bbab..00c17cab901 100644 --- a/google/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go +++ b/google/services/identityplatform/resource_identity_platform_tenant_inbound_saml_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformTenantInboundSamlConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -442,9 +447,9 @@ func resourceIdentityPlatformTenantInboundSamlConfigDelete(d *schema.ResourceDat func resourceIdentityPlatformTenantInboundSamlConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/tenants/(?P[^/]+)/inboundSamlConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/tenants/(?P[^/]+)/inboundSamlConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go b/google/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go index f22421143c1..20ac51c1b41 100644 --- a/google/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go +++ b/google/services/identityplatform/resource_identity_platform_tenant_oauth_idp_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceIdentityPlatformTenantOauthIdpConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "client_id": { Type: schema.TypeString, @@ -400,9 +405,9 @@ func resourceIdentityPlatformTenantOauthIdpConfigDelete(d *schema.ResourceData, func resourceIdentityPlatformTenantOauthIdpConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := 
tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/tenants/(?P[^/]+)/oauthIdpConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/tenants/(?P[^/]+)/oauthIdpConfigs/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/kms/data_source_google_kms_crypto_key.go b/google/services/kms/data_source_google_kms_crypto_key.go index 45b0df810e8..8628b70ab5c 100644 --- a/google/services/kms/data_source_google_kms_crypto_key.go +++ b/google/services/kms/data_source_google_kms_crypto_key.go @@ -3,6 +3,8 @@ package kms import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -33,7 +35,20 @@ func dataSourceGoogleKmsCryptoKeyRead(d *schema.ResourceData, meta interface{}) Name: d.Get("name").(string), } - d.SetId(cryptoKeyId.CryptoKeyId()) + id := cryptoKeyId.CryptoKeyId() + d.SetId(id) + + err = resourceKMSCryptoKeyRead(d, meta) + if err != nil { + return err + } - return resourceKMSCryptoKeyRead(d, meta) + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/kms/data_source_google_kms_crypto_key_version.go b/google/services/kms/data_source_google_kms_crypto_key_version.go index e72a7067d9e..04cce669b42 100644 --- a/google/services/kms/data_source_google_kms_crypto_key_version.go +++ b/google/services/kms/data_source_google_kms_crypto_key_version.go @@ -89,7 +89,7 @@ func dataSourceGoogleKmsCryptoKeyVersionRead(d *schema.ResourceData, meta interf UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("KmsCryptoKeyVersion %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("KmsCryptoKeyVersion %q", d.Id()), url) } if err := d.Set("version", flattenKmsCryptoKeyVersionVersion(res["name"], d)); err != nil { @@ -122,7 +122,7 @@ func dataSourceGoogleKmsCryptoKeyVersionRead(d *schema.ResourceData, meta interf UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("KmsCryptoKey %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("KmsCryptoKey %q", d.Id()), url) } if res["purpose"] == "ASYMMETRIC_SIGN" || res["purpose"] == "ASYMMETRIC_DECRYPT" { diff --git a/google/services/kms/data_source_google_kms_key_ring.go b/google/services/kms/data_source_google_kms_key_ring.go index 3b1e2337a41..57654b3a79a 100644 --- a/google/services/kms/data_source_google_kms_key_ring.go +++ b/google/services/kms/data_source_google_kms_key_ring.go @@ -3,6 +3,8 @@ package kms import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -33,7 +35,16 @@ func dataSourceGoogleKmsKeyRingRead(d *schema.ResourceData, meta interface{}) er Location: d.Get("location").(string), Project: project, } - d.SetId(keyRingId.KeyRingId()) + id := keyRingId.KeyRingId() + d.SetId(id) + + err = resourceKMSKeyRingRead(d, meta) + if err != nil { + return err + } - return resourceKMSKeyRingRead(d, meta) + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil 
} diff --git a/google/services/kms/resource_kms_crypto_key.go b/google/services/kms/resource_kms_crypto_key.go index bd4c537d4ba..49bff832bb4 100644 --- a/google/services/kms/resource_kms_crypto_key.go +++ b/google/services/kms/resource_kms_crypto_key.go @@ -26,6 +26,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -59,6 +60,9 @@ func ResourceKMSCryptoKey() *schema.Resource { Version: 0, }, }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "key_ring": { @@ -91,10 +95,14 @@ If not specified at creation time, the default duration is 24 hours.`, Description: `Whether this key may contain imported versions only.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels with user-defined metadata to apply to this resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels with user-defined metadata to apply to this resource. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "purpose": { Type: schema.TypeString, @@ -146,6 +154,19 @@ See the [algorithm reference](https://cloud.google.com/kms/docs/reference/rest/v }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, UseJSONNumber: true, } @@ -159,12 +180,6 @@ func resourceKMSCryptoKeyCreate(d *schema.ResourceData, meta interface{}) error } obj := make(map[string]interface{}) - labelsProp, err := expandKMSCryptoKeyLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } purposeProp, err := expandKMSCryptoKeyPurpose(d.Get("purpose"), d, config) if err != nil { return err @@ -195,6 +210,12 @@ func resourceKMSCryptoKeyCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("import_only"); !tpgresource.IsEmptyValue(reflect.ValueOf(importOnlyProp)) && (ok || !reflect.DeepEqual(v, importOnlyProp)) { obj["importOnly"] = importOnlyProp } + labelsProp, err := expandKMSCryptoKeyEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceKMSCryptoKeyEncoder(d, meta, obj) if err != nil { @@ -307,6 +328,12 @@ func resourceKMSCryptoKeyRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("import_only", flattenKMSCryptoKeyImportOnly(res["importOnly"], d, config)); err != nil { return 
fmt.Errorf("Error reading CryptoKey: %s", err) } + if err := d.Set("terraform_labels", flattenKMSCryptoKeyTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CryptoKey: %s", err) + } + if err := d.Set("effective_labels", flattenKMSCryptoKeyEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CryptoKey: %s", err) + } return nil } @@ -321,12 +348,6 @@ func resourceKMSCryptoKeyUpdate(d *schema.ResourceData, meta interface{}) error billingProject := "" obj := make(map[string]interface{}) - labelsProp, err := expandKMSCryptoKeyLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } rotationPeriodProp, err := expandKMSCryptoKeyRotationPeriod(d.Get("rotation_period"), d, config) if err != nil { return err @@ -339,6 +360,12 @@ func resourceKMSCryptoKeyUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("version_template"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, versionTemplateProp)) { obj["versionTemplate"] = versionTemplateProp } + labelsProp, err := expandKMSCryptoKeyEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceKMSCryptoKeyUpdateEncoder(d, meta, obj) if err != nil { @@ -353,10 +380,6 @@ func resourceKMSCryptoKeyUpdate(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Updating CryptoKey %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("rotation_period") { updateMask = append(updateMask, "rotationPeriod", "nextRotationTime") @@ -365,6 +388,10 @@ func resourceKMSCryptoKeyUpdate(d *schema.ResourceData, meta interface{}) error if d.HasChange("version_template") { updateMask = append(updateMask, "versionTemplate.algorithm") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -463,7 +490,18 @@ func resourceKMSCryptoKeyImport(d *schema.ResourceData, meta interface{}) ([]*sc } func flattenKMSCryptoKeyLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenKMSCryptoKeyPurpose(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -505,15 +543,23 @@ func flattenKMSCryptoKeyImportOnly(v interface{}, d *schema.ResourceData, config return v } -func expandKMSCryptoKeyLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenKMSCryptoKeyTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v 
} - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenKMSCryptoKeyEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandKMSCryptoKeyPurpose(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -566,6 +612,17 @@ func expandKMSCryptoKeyImportOnly(v interface{}, d tpgresource.TerraformResource return v, nil } +func expandKMSCryptoKeyEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceKMSCryptoKeyEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { // if rotationPeriod is set, nextRotationTime must also be set. if d.Get("rotation_period") != "" { diff --git a/google/services/kms/resource_kms_crypto_key_test.go b/google/services/kms/resource_kms_crypto_key_test.go index 7f353f692c9..b810cc9f5a5 100644 --- a/google/services/kms/resource_kms_crypto_key_test.go +++ b/google/services/kms/resource_kms_crypto_key_test.go @@ -154,16 +154,18 @@ func TestAccKmsCryptoKey_basic(t *testing.T) { Config: testGoogleKmsCryptoKey_basic(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), }, { - ResourceName: "google_kms_crypto_key.crypto_key", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key.crypto_key", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, // Test importing with a short id { - ResourceName: "google_kms_crypto_key.crypto_key", - ImportState: true, - ImportStateId: fmt.Sprintf("%s/%s/%s/%s", projectId, location, keyRingName, cryptoKeyName), - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key.crypto_key", + ImportState: true, + ImportStateId: fmt.Sprintf("%s/%s/%s/%s", projectId, location, keyRingName, cryptoKeyName), + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, // Use a separate TestStep rather than a CheckDestroy because we need the project to still exist. { @@ -296,9 +298,10 @@ func TestAccKmsCryptoKey_destroyDuration(t *testing.T) { Config: testGoogleKmsCryptoKey_destroyDuration(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), }, { - ResourceName: "google_kms_crypto_key.crypto_key", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key.crypto_key", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, // Use a separate TestStep rather than a CheckDestroy because we need the project to still exist. 
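
Editor's note: the flatteners introduced above all share one filtering step — `labels` (and `terraform_labels`) in state keep only the keys the user configured, while `effective_labels` mirrors the full API response. That asymmetry is presumably also why the surrounding import test steps add `labels` and `terraform_labels` to `ImportStateVerifyIgnore`. A generic, hedged sketch of the filter; `filterLabelsByConfiguredKeys` is an illustrative name, since the generated per-resource flatteners inline this logic instead of sharing a helper.

```go
// Illustrative helper only; not part of this change set.
func filterLabelsByConfiguredKeys(apiLabels, configured map[string]interface{}) map[string]interface{} {
	if apiLabels == nil {
		return nil
	}
	filtered := make(map[string]interface{})
	for k := range configured {
		// Keep only keys declared in configuration; the values still come from
		// the API, so server-side edits to those keys continue to show a diff.
		filtered[k] = apiLabels[k]
	}
	return filtered
}
```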
{ @@ -334,7 +337,7 @@ func TestAccKmsCryptoKey_importOnly(t *testing.T) { ResourceName: "google_kms_crypto_key.crypto_key", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"skip_initial_version_creation"}, + ImportStateVerifyIgnore: []string{"skip_initial_version_creation", "labels", "terraform_labels"}, }, // Use a separate TestStep rather than a CheckDestroy because we need the project to still exist. { @@ -428,9 +431,10 @@ func TestAccKmsCryptoKeyVersion_basic(t *testing.T) { Config: testGoogleKmsCryptoKeyVersion_basic(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), }, { - ResourceName: "google_kms_crypto_key_version.crypto_key_version", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key_version.crypto_key_version", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleKmsCryptoKeyVersion_removed(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), @@ -456,9 +460,10 @@ func TestAccKmsCryptoKeyVersion_skipInitialVersion(t *testing.T) { Config: testGoogleKmsCryptoKeyVersion_skipInitialVersion(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), }, { - ResourceName: "google_kms_crypto_key_version.crypto_key_version", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key_version.crypto_key_version", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -482,17 +487,19 @@ func TestAccKmsCryptoKeyVersion_patch(t *testing.T) { Config: testGoogleKmsCryptoKeyVersion_patchInitialize(projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName), }, { - ResourceName: "google_kms_crypto_key_version.crypto_key_version", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key_version.crypto_key_version", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleKmsCryptoKeyVersion_patch("true", projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName, state), }, { - ResourceName: "google_kms_crypto_key_version.crypto_key_version", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_kms_crypto_key_version.crypto_key_version", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testGoogleKmsCryptoKeyVersion_patch("false", projectId, projectOrg, projectBillingAccount, keyRingName, cryptoKeyName, state), diff --git a/google/services/kms/resource_kms_key_ring.go b/google/services/kms/resource_kms_key_ring.go index d2f1c44b9ab..72fed68b8d1 100644 --- a/google/services/kms/resource_kms_key_ring.go +++ b/google/services/kms/resource_kms_key_ring.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,10 @@ func ResourceKMSKeyRing() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -210,9 +215,9 @@ func resourceKMSKeyRingDelete(d *schema.ResourceData, meta interface{}) error { func 
resourceKMSKeyRingImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/keyRings/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/keyRings/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/logging/data_source_google_logging_project_cmek_settings.go b/google/services/logging/data_source_google_logging_project_cmek_settings.go index e79d4dd1c83..e2d617f967a 100644 --- a/google/services/logging/data_source_google_logging_project_cmek_settings.go +++ b/google/services/logging/data_source_google_logging_project_cmek_settings.go @@ -87,7 +87,7 @@ func dataSourceGoogleLoggingProjectCmekSettingsRead(d *schema.ResourceData, meta UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("LoggingProjectCmekSettings %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("LoggingProjectCmekSettings %q", d.Id()), url) } d.SetId(fmt.Sprintf("projects/%s/cmekSettings", project)) diff --git a/google/services/logging/data_source_google_logging_sink.go b/google/services/logging/data_source_google_logging_sink.go index 0315bdd6e7e..4979813e0b6 100644 --- a/google/services/logging/data_source_google_logging_sink.go +++ b/google/services/logging/data_source_google_logging_sink.go @@ -35,7 +35,7 @@ func dataSourceGoogleLoggingSinkRead(d *schema.ResourceData, meta interface{}) e sink, err := config.NewLoggingClient(userAgent).Sinks.Get(sinkId).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Logging Sink %s", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Logging Sink %s", d.Id()), sinkId) } if err := flattenResourceLoggingSink(d, sink); err != nil { diff --git a/google/services/logging/resource_logging_bucket_config.go b/google/services/logging/resource_logging_bucket_config.go index 8cdcfd0220b..978c720c5cc 100644 --- a/google/services/logging/resource_logging_bucket_config.go +++ b/google/services/logging/resource_logging_bucket_config.go @@ -8,6 +8,7 @@ import ( "regexp" "strings" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -107,6 +108,9 @@ func ResourceLoggingBucketConfig(parentType string, parentSpecificSchema map[str }, Schema: tpgresource.MergeSchemas(loggingBucketConfigSchema, parentSpecificSchema), UseJSONNumber: true, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), } } diff --git a/google/services/logging/resource_logging_linked_dataset.go b/google/services/logging/resource_logging_linked_dataset.go index ad850aa979e..f73383c95bd 100644 --- a/google/services/logging/resource_logging_linked_dataset.go +++ b/google/services/logging/resource_logging_linked_dataset.go @@ -303,7 +303,7 @@ func resourceLoggingLinkedDatasetDelete(d *schema.ResourceData, meta interface{} func resourceLoggingLinkedDatasetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - 
"(?P.+)/locations/(?P[^/]+)/buckets/(?P[^/]+)/links/(?P[^/]+)", + "^(?P.+)/locations/(?P[^/]+)/buckets/(?P[^/]+)/links/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/logging/resource_logging_log_view.go b/google/services/logging/resource_logging_log_view.go index 56cda502946..e13dfe5ea3c 100644 --- a/google/services/logging/resource_logging_log_view.go +++ b/google/services/logging/resource_logging_log_view.go @@ -334,7 +334,7 @@ func resourceLoggingLogViewDelete(d *schema.ResourceData, meta interface{}) erro func resourceLoggingLogViewImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/locations/(?P[^/]+)/buckets/(?P[^/]+)/views/(?P[^/]+)", + "^(?P.+)/locations/(?P[^/]+)/buckets/(?P[^/]+)/views/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/logging/resource_logging_metric.go b/google/services/logging/resource_logging_metric.go index 379a4990608..b7b2e88b073 100644 --- a/google/services/logging/resource_logging_metric.go +++ b/google/services/logging/resource_logging_metric.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceLoggingMetric() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "filter": { Type: schema.TypeString, @@ -105,22 +110,19 @@ the lower bound. Each bucket represents a constant relative uncertainty on a spe Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "growth_factor": { - Type: schema.TypeFloat, - Optional: true, - Description: `Must be greater than 1.`, - AtLeastOneOf: []string{"bucket_options.0.exponential_buckets.0.num_finite_buckets", "bucket_options.0.exponential_buckets.0.growth_factor", "bucket_options.0.exponential_buckets.0.scale"}, + Type: schema.TypeFloat, + Required: true, + Description: `Must be greater than 1.`, }, "num_finite_buckets": { - Type: schema.TypeInt, - Optional: true, - Description: `Must be greater than 0.`, - AtLeastOneOf: []string{"bucket_options.0.exponential_buckets.0.num_finite_buckets", "bucket_options.0.exponential_buckets.0.growth_factor", "bucket_options.0.exponential_buckets.0.scale"}, + Type: schema.TypeInt, + Required: true, + Description: `Must be greater than 0.`, }, "scale": { - Type: schema.TypeFloat, - Optional: true, - Description: `Must be greater than 0.`, - AtLeastOneOf: []string{"bucket_options.0.exponential_buckets.0.num_finite_buckets", "bucket_options.0.exponential_buckets.0.growth_factor", "bucket_options.0.exponential_buckets.0.scale"}, + Type: schema.TypeFloat, + Required: true, + Description: `Must be greater than 0.`, }, }, }, @@ -135,22 +137,19 @@ Each bucket represents a constant absolute uncertainty on the specific value in Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "num_finite_buckets": { - Type: schema.TypeInt, - Optional: true, - Description: `Must be greater than 0.`, - AtLeastOneOf: []string{"bucket_options.0.linear_buckets.0.num_finite_buckets", "bucket_options.0.linear_buckets.0.width", "bucket_options.0.linear_buckets.0.offset"}, + Type: schema.TypeInt, + Required: true, + Description: `Must be greater than 0.`, }, 
"offset": { - Type: schema.TypeFloat, - Optional: true, - Description: `Lower bound of the first bucket.`, - AtLeastOneOf: []string{"bucket_options.0.linear_buckets.0.num_finite_buckets", "bucket_options.0.linear_buckets.0.width", "bucket_options.0.linear_buckets.0.offset"}, + Type: schema.TypeFloat, + Required: true, + Description: `Lower bound of the first bucket.`, }, "width": { - Type: schema.TypeFloat, - Optional: true, - Description: `Must be greater than 0.`, - AtLeastOneOf: []string{"bucket_options.0.linear_buckets.0.num_finite_buckets", "bucket_options.0.linear_buckets.0.width", "bucket_options.0.linear_buckets.0.offset"}, + Type: schema.TypeFloat, + Required: true, + Description: `Must be greater than 0.`, }, }, }, diff --git a/google/services/logging/resource_logging_project_sink.go b/google/services/logging/resource_logging_project_sink.go index d3d6780b1e1..623351d765e 100644 --- a/google/services/logging/resource_logging_project_sink.go +++ b/google/services/logging/resource_logging_project_sink.go @@ -1,5 +1,7 @@ // Copyright (c) HashiCorp, Inc. // SPDX-License-Identifier: MPL-2.0 +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 package logging import ( @@ -38,8 +40,8 @@ func ResourceLoggingProjectSink() *schema.Resource { schm.Schema["unique_writer_identity"] = &schema.Schema{ Type: schema.TypeBool, Optional: true, - Default: false, - Description: `Whether or not to create a unique identity associated with this sink. If false (the default), then the writer_identity used is serviceAccount:cloud-logs@system.gserviceaccount.com. If true, then a unique service account is created and used for this sink. If you wish to publish logs across projects, you must set unique_writer_identity to true.`, + Default: true, + Description: `Whether or not to create a unique identity associated with this sink. If false (the legacy behavior), then the writer_identity used is serviceAccount:cloud-logs@system.gserviceaccount.com. If true, then a unique service account is created and used for this sink. 
If you wish to publish logs across projects, you must set unique_writer_identity to true.`, } return schm } diff --git a/google/services/logging/resource_logging_project_sink_test.go b/google/services/logging/resource_logging_project_sink_test.go index aba5290745e..32e1abba6be 100644 --- a/google/services/logging/resource_logging_project_sink_test.go +++ b/google/services/logging/resource_logging_project_sink_test.go @@ -294,13 +294,11 @@ func testAccLoggingProjectSink_basic(name, project, bucketName string) string { resource "google_logging_project_sink" "basic" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" - - unique_writer_identity = false } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -312,14 +310,14 @@ func testAccLoggingProjectSink_described(name, project, bucketName string) strin resource "google_logging_project_sink" "described" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" description = "this is a description for a project level logging sink" - + unique_writer_identity = false } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -331,14 +329,14 @@ func testAccLoggingProjectSink_described_update(name, project, bucketName string resource "google_logging_project_sink" "described" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" description = "description updated" - + unique_writer_identity = true } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -350,14 +348,14 @@ func testAccLoggingProjectSink_disabled(name, project, bucketName string) string resource "google_logging_project_sink" "disabled" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" disabled = true unique_writer_identity = false } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -369,14 +367,14 @@ func testAccLoggingProjectSink_disabled_update(name, project, bucketName, disabl resource "google_logging_project_sink" "disabled" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" disabled = "%s" unique_writer_identity = true } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" 
"gcs-bucket" { name = "%s" location = "US" } @@ -387,13 +385,13 @@ func testAccLoggingProjectSink_uniqueWriter(name, bucketName string) string { return fmt.Sprintf(` resource "google_logging_project_sink" "unique_writer" { name = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=ERROR" unique_writer_identity = true } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -404,13 +402,13 @@ func testAccLoggingProjectSink_uniqueWriterUpdated(name, bucketName string) stri return fmt.Sprintf(` resource "google_logging_project_sink" "unique_writer" { name = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=WARNING" unique_writer_identity = true } -resource "google_storage_bucket" "log-bucket" { +resource "google_storage_bucket" "gcs-bucket" { name = "%s" location = "US" } @@ -422,7 +420,7 @@ func testAccLoggingProjectSink_heredoc(name, project, bucketName string) string resource "google_logging_project_sink" "heredoc" { name = "%s" project = "%s" - destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}" + destination = "storage.googleapis.com/${google_storage_bucket.gcs-bucket.name}" filter = <=ERROR" unique_writer_identity = true @@ -456,7 +454,7 @@ resource "google_logging_project_sink" "bigquery" { } } -resource "google_bigquery_dataset" "logging_sink" { +resource "google_bigquery_dataset" "bq_dataset" { dataset_id = "%s" description = "Log sink (generated during acc test of terraform-provider-google(-beta))." } @@ -467,13 +465,13 @@ func testAccLoggingProjectSink_bigquery_after(sinkName, bqDatasetID string) stri return fmt.Sprintf(` resource "google_logging_project_sink" "bigquery" { name = "%s" - destination = "bigquery.googleapis.com/projects/%s/datasets/${google_bigquery_dataset.logging_sink.dataset_id}" + destination = "bigquery.googleapis.com/projects/%s/datasets/${google_bigquery_dataset.bq_dataset.dataset_id}" filter = "logName=\"projects/%s/logs/compute.googleapis.com%%2Factivity_log\" AND severity>=WARNING" unique_writer_identity = true } -resource "google_bigquery_dataset" "logging_sink" { +resource "google_bigquery_dataset" "bq_dataset" { dataset_id = "%s" description = "Log sink (generated during acc test of terraform-provider-google(-beta))." 
} @@ -497,8 +495,6 @@ resource "google_logging_project_sink" "loggingbucket" { description = "test-2" filter = "resource.type = k8s_container" } - - unique_writer_identity = true } `, name, project, project) diff --git a/google/services/looker/resource_looker_instance.go b/google/services/looker/resource_looker_instance.go index 9e52f9fe24c..6585ccb0286 100644 --- a/google/services/looker/resource_looker_instance.go +++ b/google/services/looker/resource_looker_instance.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -49,6 +50,10 @@ func ResourceLookerInstance() *schema.Resource { Delete: schema.DefaultTimeout(90 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -308,14 +313,13 @@ disrupt service.`, Type: schema.TypeString, Optional: true, ForceNew: true, - ValidateFunc: verify.ValidateEnum([]string{"LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL", "LOOKER_MODELER", ""}), + ValidateFunc: verify.ValidateEnum([]string{"LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL", ""}), Description: `Platform editions for a Looker instance. Each edition maps to a set of instance features, like its size. Must be one of these values: - LOOKER_CORE_TRIAL: trial instance - LOOKER_CORE_STANDARD: pay as you go standard instance - LOOKER_CORE_STANDARD_ANNUAL: subscription standard instance - LOOKER_CORE_ENTERPRISE_ANNUAL: subscription enterprise instance -- LOOKER_CORE_EMBED_ANNUAL: subscription embed instance -- LOOKER_MODELER: standalone modeling service Default value: "LOOKER_CORE_TRIAL" Possible values: ["LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL", "LOOKER_MODELER"]`, +- LOOKER_CORE_EMBED_ANNUAL: subscription embed instance Default value: "LOOKER_CORE_TRIAL" Possible values: ["LOOKER_CORE_TRIAL", "LOOKER_CORE_STANDARD", "LOOKER_CORE_STANDARD_ANNUAL", "LOOKER_CORE_ENTERPRISE_ANNUAL", "LOOKER_CORE_EMBED_ANNUAL"]`, Default: "LOOKER_CORE_TRIAL", }, "private_ip_enabled": { @@ -882,10 +886,10 @@ func resourceLookerInstanceDelete(d *schema.ResourceData, meta interface{}) erro func resourceLookerInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/looker/resource_looker_instance_generated_test.go b/google/services/looker/resource_looker_instance_generated_test.go index 8f8f0216c22..7c9ea843bae 100644 --- a/google/services/looker/resource_looker_instance_generated_test.go +++ b/google/services/looker/resource_looker_instance_generated_test.go @@ -149,7 +149,6 @@ func TestAccLookerInstance_lookerInstanceEnterpriseFullExample(t *testing.T) { t.Parallel() context := 
map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "looker-instance-enterprise"), "kms_key_name": acctest.BootstrapKMSKeyInLocation(t, "us-central1").CryptoKey.Name, "random_suffix": acctest.RandString(t, 10), } @@ -181,7 +180,7 @@ resource "google_looker_instance" "looker-instance" { private_ip_enabled = true public_ip_enabled = false reserved_range = "${google_compute_global_address.looker_range.name}" - consumer_network = data.google_compute_network.looker_network.id + consumer_network = google_compute_network.looker_network.id admin_settings { allowed_email_domains = ["google.com"] } @@ -225,7 +224,7 @@ resource "google_looker_instance" "looker-instance" { } resource "google_service_networking_connection" "looker_vpc_connection" { - network = data.google_compute_network.looker_network.id + network = google_compute_network.looker_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.looker_range.name] } @@ -235,13 +234,13 @@ resource "google_compute_global_address" "looker_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 20 - network = data.google_compute_network.looker_network.id + network = google_compute_network.looker_network.id } data "google_project" "project" {} -data "google_compute_network" "looker_network" { - name = "%{network_name}" +resource "google_compute_network" "looker_network" { + name = "tf-test-looker-network%{random_suffix}" } resource "google_kms_crypto_key_iam_member" "crypto_key" { diff --git a/google/services/memcache/resource_memcache_instance.go b/google/services/memcache/resource_memcache_instance.go index 19918b9ff81..ad8017826cb 100644 --- a/google/services/memcache/resource_memcache_instance.go +++ b/google/services/memcache/resource_memcache_instance.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -49,6 +50,11 @@ func ResourceMemcacheInstance() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -97,10 +103,14 @@ func ResourceMemcacheInstance() *schema.Resource { Description: `A user-visible name for the instance.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user-provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user-provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "maintenance_policy": { Type: schema.TypeList, @@ -262,6 +272,12 @@ provided, all zones will be used.`, Computed: true, Description: `Endpoint for Discovery API`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "maintenance_schedule": { Type: schema.TypeList, Computed: true, @@ -332,6 +348,13 @@ resolution and up to nine fractional digits.`, }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -357,12 +380,6 @@ func resourceMemcacheInstanceCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandMemcacheInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } zonesProp, err := expandMemcacheInstanceZones(d.Get("zones"), d, config) if err != nil { return err @@ -405,6 +422,12 @@ func resourceMemcacheInstanceCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("maintenance_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(maintenancePolicyProp)) && (ok || !reflect.DeepEqual(v, maintenancePolicyProp)) { obj["maintenancePolicy"] = maintenancePolicyProp } + labelsProp, err := expandMemcacheInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{MemcacheBasePath}}projects/{{project}}/locations/{{region}}/instances?instanceId={{name}}") if err != nil { @@ -552,6 +575,12 @@ func resourceMemcacheInstanceRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("maintenance_schedule", flattenMemcacheInstanceMaintenanceSchedule(res["maintenanceSchedule"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenMemcacheInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenMemcacheInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -578,12 +607,6 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := 
expandMemcacheInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } nodeCountProp, err := expandMemcacheInstanceNodeCount(d.Get("node_count"), d, config) if err != nil { return err @@ -602,6 +625,12 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("maintenance_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, maintenancePolicyProp)) { obj["maintenancePolicy"] = maintenancePolicyProp } + labelsProp, err := expandMemcacheInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{MemcacheBasePath}}projects/{{project}}/locations/{{region}}/instances/{{name}}") if err != nil { @@ -615,10 +644,6 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("node_count") { updateMask = append(updateMask, "nodeCount") } @@ -630,6 +655,10 @@ func resourceMemcacheInstanceUpdate(d *schema.ResourceData, meta interface{}) er if d.HasChange("maintenance_policy") { updateMask = append(updateMask, "maintenancePolicy") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -725,10 +754,10 @@ func resourceMemcacheInstanceDelete(d *schema.ResourceData, meta interface{}) er func resourceMemcacheInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -811,7 +840,18 @@ func flattenMemcacheInstanceDiscoveryEndpoint(v interface{}, d *schema.ResourceD } func flattenMemcacheInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenMemcacheInstanceMemcacheFullVersion(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1094,19 +1134,27 @@ func flattenMemcacheInstanceMaintenanceScheduleScheduleDeadlineTime(v interface{ return v } -func expandMemcacheInstanceDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandMemcacheInstanceLabels(v interface{}, d 
tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenMemcacheInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenMemcacheInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandMemcacheInstanceDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandMemcacheInstanceZones(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -1357,3 +1405,14 @@ func expandMemcacheInstanceMaintenancePolicyWeeklyMaintenanceWindowStartTimeSeco func expandMemcacheInstanceMaintenancePolicyWeeklyMaintenanceWindowStartTimeNanos(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandMemcacheInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/memcache/resource_memcache_instance_generated_test.go b/google/services/memcache/resource_memcache_instance_generated_test.go index ddc749cc9d9..4fe348d082e 100644 --- a/google/services/memcache/resource_memcache_instance_generated_test.go +++ b/google/services/memcache/resource_memcache_instance_generated_test.go @@ -34,7 +34,6 @@ func TestAccMemcacheInstance_memcacheInstanceBasicExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "memcache-instance-basic"), "random_suffix": acctest.RandString(t, 10), } @@ -50,7 +49,7 @@ func TestAccMemcacheInstance_memcacheInstanceBasicExample(t *testing.T) { ResourceName: "google_memcache_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "region"}, + ImportStateVerifyIgnore: []string{"name", "region", "labels", "terraform_labels"}, }, }, }) @@ -66,8 +65,8 @@ func testAccMemcacheInstance_memcacheInstanceBasicExample(context map[string]int // If this network hasn't been created and you are using this example in your // config, add an additional network resource or change // this from "data"to "resource" -data "google_compute_network" "memcache_network" { - name = "%{network_name}" +resource "google_compute_network" "memcache_network" { + name = "tf-test-test-network%{random_suffix}" } resource "google_compute_global_address" "service_range" { @@ -75,11 +74,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.memcache_network.id + network = google_compute_network.memcache_network.id } resource "google_service_networking_connection" "private_service_connection" { - network = 
data.google_compute_network.memcache_network.id + network = google_compute_network.memcache_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -88,6 +87,10 @@ resource "google_memcache_instance" "instance" { name = "tf-test-test-instance%{random_suffix}" authorized_network = google_service_networking_connection.private_service_connection.network + labels = { + env = "test" + } + node_config { cpu_count = 1 memory_size_mb = 1024 diff --git a/google/services/memcache/resource_memcache_instance_test.go b/google/services/memcache/resource_memcache_instance_test.go index 3eb9c244104..7c0cf5b4d72 100644 --- a/google/services/memcache/resource_memcache_instance_test.go +++ b/google/services/memcache/resource_memcache_instance_test.go @@ -15,7 +15,7 @@ func TestAccMemcacheInstance_update(t *testing.T) { prefix := fmt.Sprintf("%d", acctest.RandInt(t)) name := fmt.Sprintf("tf-test-%s", prefix) - network := acctest.BootstrapSharedTestNetwork(t, "memcache-instance-update") + network := acctest.BootstrapSharedServiceNetworkingConnection(t, "memcache-instance-update-1") acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -44,24 +44,10 @@ func TestAccMemcacheInstance_update(t *testing.T) { func testAccMemcacheInstance_update(prefix, name, network string) string { return fmt.Sprintf(` -resource "google_compute_global_address" "service_range" { - name = "tf-test%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.memcache_network.id -} - -resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.memcache_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.service_range.name] -} - resource "google_memcache_instance" "test" { name = "%s" region = "us-central1" - authorized_network = google_service_networking_connection.private_service_connection.network + authorized_network = data.google_compute_network.memcache_network.id node_config { cpu_count = 1 @@ -80,29 +66,15 @@ resource "google_memcache_instance" "test" { data "google_compute_network" "memcache_network" { name = "%s" } -`, prefix, name, network) +`, name, network) } func testAccMemcacheInstance_update2(prefix, name, network string) string { return fmt.Sprintf(` -resource "google_compute_global_address" "service_range" { - name = "tf-test%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.memcache_network.id -} - -resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.memcache_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.service_range.name] -} - resource "google_memcache_instance" "test" { name = "%s" region = "us-central1" - authorized_network = google_service_networking_connection.private_service_connection.network + authorized_network = data.google_compute_network.memcache_network.id node_config { cpu_count = 1 @@ -121,5 +93,5 @@ resource "google_memcache_instance" "test" { data "google_compute_network" "memcache_network" { name = "%s" } -`, prefix, name, network) +`, name, network) } diff --git a/google/services/mlengine/resource_ml_engine_model.go b/google/services/mlengine/resource_ml_engine_model.go index 43eafd13184..e2596fd98ba 
100644 --- a/google/services/mlengine/resource_ml_engine_model.go +++ b/google/services/mlengine/resource_ml_engine_model.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -44,6 +45,11 @@ func ResourceMLEngineModel() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -76,11 +82,14 @@ prediction requests that do not specify a version.`, Description: `The description specified for the model when it was created.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `One or more labels that you can add, to organize your models.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `One or more labels that you can add, to organize your models. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "online_prediction_console_logging": { Type: schema.TypeBool, @@ -105,6 +114,20 @@ Currently only one region per model is supported`, Type: schema.TypeString, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -160,10 +183,10 @@ func resourceMLEngineModelCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("online_prediction_console_logging"); !tpgresource.IsEmptyValue(reflect.ValueOf(onlinePredictionConsoleLoggingProp)) && (ok || !reflect.DeepEqual(v, onlinePredictionConsoleLoggingProp)) { obj["onlinePredictionConsoleLogging"] = onlinePredictionConsoleLoggingProp } - labelsProp, err := expandMLEngineModelLabels(d.Get("labels"), d, config) + labelsProp, err := expandMLEngineModelEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -272,6 +295,12 @@ func resourceMLEngineModelRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("labels", flattenMLEngineModelLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Model: %s", err) } + if err := d.Set("terraform_labels", flattenMLEngineModelTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Model: %s", err) + } + if err := 
d.Set("effective_labels", flattenMLEngineModelEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Model: %s", err) + } return nil } @@ -332,9 +361,9 @@ func resourceMLEngineModelDelete(d *schema.ResourceData, meta interface{}) error func resourceMLEngineModelImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/models/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/models/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -390,6 +419,36 @@ func flattenMLEngineModelOnlinePredictionConsoleLogging(v interface{}, d *schema } func flattenMLEngineModelLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenMLEngineModelTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenMLEngineModelEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -436,7 +495,7 @@ func expandMLEngineModelOnlinePredictionConsoleLogging(v interface{}, d tpgresou return v, nil } -func expandMLEngineModelLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandMLEngineModelEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/mlengine/resource_ml_engine_model_generated_test.go b/google/services/mlengine/resource_ml_engine_model_generated_test.go index 105832873e7..b3203766660 100644 --- a/google/services/mlengine/resource_ml_engine_model_generated_test.go +++ b/google/services/mlengine/resource_ml_engine_model_generated_test.go @@ -46,9 +46,10 @@ func TestAccMLEngineModel_mlModelBasicExample(t *testing.T) { Config: testAccMLEngineModel_mlModelBasicExample(context), }, { - ResourceName: "google_ml_engine_model.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_ml_engine_model.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -80,9 +81,10 @@ func TestAccMLEngineModel_mlModelFullExample(t *testing.T) { Config: testAccMLEngineModel_mlModelFullExample(context), }, { - ResourceName: "google_ml_engine_model.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_ml_engine_model.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/monitoring/resource_monitoring_alert_policy.go b/google/services/monitoring/resource_monitoring_alert_policy.go index 3a89c9194e3..9d7709f400b 100644 --- 
a/google/services/monitoring/resource_monitoring_alert_policy.go +++ b/google/services/monitoring/resource_monitoring_alert_policy.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -53,6 +54,10 @@ func ResourceMonitoringAlertPolicy() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "combiner": { Type: schema.TypeString, diff --git a/google/services/monitoring/resource_monitoring_custom_service.go b/google/services/monitoring/resource_monitoring_custom_service.go index b462c586ed7..a6433d285e2 100644 --- a/google/services/monitoring/resource_monitoring_custom_service.go +++ b/google/services/monitoring/resource_monitoring_custom_service.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceMonitoringService() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/monitoring/resource_monitoring_dashboard.go b/google/services/monitoring/resource_monitoring_dashboard.go index 1f0eca31c92..298479b96a5 100644 --- a/google/services/monitoring/resource_monitoring_dashboard.go +++ b/google/services/monitoring/resource_monitoring_dashboard.go @@ -10,29 +10,51 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" ) -func monitoringDashboardDiffSuppress(k, old, new string, d *schema.ResourceData) bool { - computedFields := []string{"etag", "name"} +// This recursive function takes an old map and a new map and is intended to remove the computed keys +// from the old json string (stored in state) so that it doesn't show a diff if it's not defined in the +// new map's json string (defined in config) +func removeComputedKeys(old map[string]interface{}, new map[string]interface{}) map[string]interface{} { + for k, v := range old { + if _, ok := old[k]; ok && new[k] == nil { + delete(old, k) + continue + } + + if reflect.ValueOf(v).Kind() == reflect.Map { + old[k] = removeComputedKeys(v.(map[string]interface{}), new[k].(map[string]interface{})) + continue + } + + if reflect.ValueOf(v).Kind() == reflect.Slice { + for i, j := range v.([]interface{}) { + if reflect.ValueOf(j).Kind() == reflect.Map { + old[k].([]interface{})[i] = removeComputedKeys(j.(map[string]interface{}), new[k].([]interface{})[i].(map[string]interface{})) + } + } + continue + } + } + return old +} + +func monitoringDashboardDiffSuppress(k, old, new string, d *schema.ResourceData) bool { oldMap, err := structure.ExpandJsonFromString(old) if err != nil { return false } - newMap, err := structure.ExpandJsonFromString(new) if err != nil { return 
false } - for _, f := range computedFields { - delete(oldMap, f) - delete(newMap, f) - } - + oldMap = removeComputedKeys(oldMap, newMap) return reflect.DeepEqual(oldMap, newMap) } @@ -53,6 +75,10 @@ func ResourceMonitoringDashboard() *schema.Resource { Delete: schema.DefaultTimeout(4 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "dashboard_json": { Type: schema.TypeString, diff --git a/google/services/monitoring/resource_monitoring_dashboard_test.go b/google/services/monitoring/resource_monitoring_dashboard_test.go index 56e303fa27c..c0991024bbc 100644 --- a/google/services/monitoring/resource_monitoring_dashboard_test.go +++ b/google/services/monitoring/resource_monitoring_dashboard_test.go @@ -85,8 +85,6 @@ func TestAccMonitoringDashboard_rowLayout(t *testing.T) { } func TestAccMonitoringDashboard_update(t *testing.T) { - // TODO: Fix requires a breaking change https://github.com/hashicorp/terraform-provider-google/issues/9976 - t.Skip() t.Parallel() acctest.VcrTest(t, resource.TestCase{ diff --git a/google/services/monitoring/resource_monitoring_group.go b/google/services/monitoring/resource_monitoring_group.go index 36bbca39e76..749b6f1e506 100644 --- a/google/services/monitoring/resource_monitoring_group.go +++ b/google/services/monitoring/resource_monitoring_group.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceMonitoringGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/monitoring/resource_monitoring_metric_descriptor.go b/google/services/monitoring/resource_monitoring_metric_descriptor.go index 17003f9f1cf..af6f2d4361a 100644 --- a/google/services/monitoring/resource_monitoring_metric_descriptor.go +++ b/google/services/monitoring/resource_monitoring_metric_descriptor.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceMonitoringMetricDescriptor() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "description": { Type: schema.TypeString, @@ -83,6 +88,7 @@ func ResourceMonitoringMetricDescriptor() *schema.Resource { "labels": { Type: schema.TypeSet, Optional: true, + ForceNew: true, Description: `The set of labels that can be used to describe a specific instance of this metric type. In order to delete a label, the entire resource must be deleted, then created with the desired labels.`, Elem: monitoringMetricDescriptorLabelsSchema(), // Default schema.HashSchema is used. 
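
The `monitoringDashboardDiffSuppress` change above drops the fixed list of computed top-level fields (`etag`, `name`) in favour of the recursive `removeComputedKeys` walk, so that any server-populated key absent from the configured `dashboard_json` is ignored at any nesting depth before the two JSON documents are compared. A minimal standalone sketch of the same idea, using illustrative names and sample documents rather than the provider's actual wiring, behaves like this:

```go
// Standalone sketch of the recursive diff-suppression idea used for dashboard_json:
// keys present in the old (state) JSON but absent from the new (config) JSON are
// treated as server-computed and removed before comparison. Names and sample
// documents are illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// stripComputed removes keys from old that do not appear in new, recursing into
// nested maps and into maps held inside slices.
func stripComputed(old, new map[string]interface{}) map[string]interface{} {
	for k, v := range old {
		if _, ok := new[k]; !ok {
			delete(old, k)
			continue
		}
		switch ov := v.(type) {
		case map[string]interface{}:
			if nv, ok := new[k].(map[string]interface{}); ok {
				old[k] = stripComputed(ov, nv)
			}
		case []interface{}:
			nv, ok := new[k].([]interface{})
			if !ok {
				continue
			}
			for i, item := range ov {
				om, isMap := item.(map[string]interface{})
				if !isMap || i >= len(nv) {
					continue
				}
				if nm, isMapNew := nv[i].(map[string]interface{}); isMapNew {
					ov[i] = stripComputed(om, nm)
				}
			}
		}
	}
	return old
}

func main() {
	// "etag" and "name" exist only in the API response (state), not in the config JSON.
	oldJSON := `{"displayName":"d","etag":"abc","name":"projects/p/dashboards/1","gridLayout":{"columns":"2"}}`
	newJSON := `{"displayName":"d","gridLayout":{"columns":"2"}}`

	var oldMap, newMap map[string]interface{}
	_ = json.Unmarshal([]byte(oldJSON), &oldMap)
	_ = json.Unmarshal([]byte(newJSON), &newMap)

	fmt.Println(reflect.DeepEqual(stripComputed(oldMap, newMap), newMap)) // true -> diff suppressed
}
```

The sketch is slightly more defensive than the generated helper (it skips type and length mismatches instead of asserting on them), but the comparison outcome is the same: once server-only keys are stripped, `reflect.DeepEqual` against the configured JSON decides whether the diff is suppressed.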
@@ -97,7 +103,6 @@ func ResourceMonitoringMetricDescriptor() *schema.Resource { "metadata": { Type: schema.TypeList, Optional: true, - ForceNew: true, Description: `Metadata which can be used to guide usage of the metric.`, MaxItems: 1, Elem: &schema.Resource{ diff --git a/google/services/monitoring/resource_monitoring_metric_descriptor_test.go b/google/services/monitoring/resource_monitoring_metric_descriptor_test.go index 49dcf2e07ac..557aadd79a8 100644 --- a/google/services/monitoring/resource_monitoring_metric_descriptor_test.go +++ b/google/services/monitoring/resource_monitoring_metric_descriptor_test.go @@ -11,8 +11,6 @@ import ( ) func TestAccMonitoringMetricDescriptor_update(t *testing.T) { - // TODO: Fix requires a breaking change https://github.com/hashicorp/terraform-provider-google/issues/12139 - t.Skip() t.Parallel() acctest.VcrTest(t, resource.TestCase{ @@ -21,8 +19,7 @@ func TestAccMonitoringMetricDescriptor_update(t *testing.T) { CheckDestroy: testAccCheckMonitoringMetricDescriptorDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccMonitoringMetricDescriptor_update("key1", "STRING", - "description1", "30s", "30s"), + Config: testAccMonitoringMetricDescriptor_update("30s", "30s"), }, { ResourceName: "google_monitoring_metric_descriptor.basic", @@ -31,8 +28,7 @@ func TestAccMonitoringMetricDescriptor_update(t *testing.T) { ImportStateVerifyIgnore: []string{"metadata", "launch_stage"}, }, { - Config: testAccMonitoringMetricDescriptor_update("key2", "INT64", - "description2", "60s", "60s"), + Config: testAccMonitoringMetricDescriptor_update("60s", "60s"), }, { ResourceName: "google_monitoring_metric_descriptor.basic", @@ -44,8 +40,7 @@ func TestAccMonitoringMetricDescriptor_update(t *testing.T) { }) } -func testAccMonitoringMetricDescriptor_update(key, valueType, description, - samplePeriod, ingestDelay string) string { +func testAccMonitoringMetricDescriptor_update(samplePeriod, ingestDelay string) string { return fmt.Sprintf(` resource "google_monitoring_metric_descriptor" "basic" { description = "Daily sales records from all branch stores." 
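
Several resources in this diff (the ml_engine model above, and the networkconnectivity, networkmanagement, and networksecurity resources below) follow the same labels split: the API's full label map is stored in `effective_labels`, `terraform_labels` holds the combination of config labels and provider default labels, and the user-facing `labels` field is rebuilt by intersecting the API map with the keys actually present in configuration, which is what the various `flatten*Labels` helpers do. A minimal standalone sketch of that intersection, with hypothetical names and sample data, looks like this:

```go
// Standalone sketch of the non-authoritative labels projection: only API labels
// whose keys the user configured are written back to the labels field, leaving
// provider defaults and externally added labels to effective_labels /
// terraform_labels. Names and sample data are illustrative only.
package main

import "fmt"

// projectToConfiguredKeys returns only those API labels whose keys appear in the
// user's configuration.
func projectToConfiguredKeys(apiLabels, configured map[string]string) map[string]string {
	out := make(map[string]string, len(configured))
	for k := range configured {
		if v, ok := apiLabels[k]; ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	apiLabels := map[string]string{
		"env":         "prod",      // set in Terraform config
		"managed-by":  "terraform", // provider-level default label
		"cost-center": "123",       // added out of band by another client
	}
	configured := map[string]string{"env": "prod"}

	fmt.Println(projectToConfiguredKeys(apiLabels, configured)) // map[env:prod]
}
```

Because `labels` only tracks configured keys, provider default labels and labels added by other clients surface only through `terraform_labels` and `effective_labels`, which is also why the generated import tests in this change add `ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}`.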
@@ -55,9 +50,9 @@ resource "google_monitoring_metric_descriptor" "basic" { value_type = "DOUBLE" unit = "{USD}" labels { - key = "%s" - value_type = "%s" - description = "%s" + key = "key" + value_type = "STRING" + description = "description" } launch_stage = "BETA" metadata { @@ -65,6 +60,6 @@ resource "google_monitoring_metric_descriptor" "basic" { ingest_delay = "%s" } } -`, key, valueType, description, samplePeriod, ingestDelay, +`, samplePeriod, ingestDelay, ) } diff --git a/google/services/monitoring/resource_monitoring_notification_channel.go b/google/services/monitoring/resource_monitoring_notification_channel.go index d5c2ab01919..9a72b272f8e 100644 --- a/google/services/monitoring/resource_monitoring_notification_channel.go +++ b/google/services/monitoring/resource_monitoring_notification_channel.go @@ -64,6 +64,7 @@ func ResourceMonitoringNotificationChannel() *schema.Resource { CustomizeDiff: customdiff.All( sensitiveLabelCustomizeDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ diff --git a/google/services/monitoring/resource_monitoring_service.go b/google/services/monitoring/resource_monitoring_service.go index 0d3b1ee6d5e..993c3f620ba 100644 --- a/google/services/monitoring/resource_monitoring_service.go +++ b/google/services/monitoring/resource_monitoring_service.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceMonitoringGenericService() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "service_id": { Type: schema.TypeString, @@ -391,9 +396,9 @@ func resourceMonitoringGenericServiceDelete(d *schema.ResourceData, meta interfa func resourceMonitoringGenericServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/services/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/services/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/monitoring/resource_monitoring_slo.go b/google/services/monitoring/resource_monitoring_slo.go index 3ccf1f76f4c..bd1e94a505e 100644 --- a/google/services/monitoring/resource_monitoring_slo.go +++ b/google/services/monitoring/resource_monitoring_slo.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -66,6 +67,10 @@ func ResourceMonitoringSlo() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "goal": { Type: schema.TypeFloat, diff --git a/google/services/monitoring/resource_monitoring_uptime_check_config.go b/google/services/monitoring/resource_monitoring_uptime_check_config.go index 5f3dbf0a121..5571b6b0f8b 100644 --- a/google/services/monitoring/resource_monitoring_uptime_check_config.go +++ b/google/services/monitoring/resource_monitoring_uptime_check_config.go @@ -24,6 +24,7 @@ import ( "strings" 
"time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -52,6 +53,10 @@ func ResourceMonitoringUptimeCheckConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, diff --git a/google/services/networkconnectivity/resource_network_connectivity_hub.go b/google/services/networkconnectivity/resource_network_connectivity_hub.go index a422a7cf70f..41f63d2ad99 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_hub.go +++ b/google/services/networkconnectivity/resource_network_connectivity_hub.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceNetworkConnectivityHub() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "name": { @@ -65,11 +70,10 @@ func ResourceNetworkConnectivityHub() *schema.Resource { Description: "An optional description of the hub.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: "Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements).", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "project": { @@ -87,6 +91,13 @@ func ResourceNetworkConnectivityHub() *schema.Resource { Description: "Output only. The time the hub was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements).\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "routing_vpcs": { Type: schema.TypeList, Computed: true, @@ -100,6 +111,12 @@ func ResourceNetworkConnectivityHub() *schema.Resource { Description: "Output only. The current lifecycle state of this hub. 
Possible values: STATE_UNSPECIFIED, CREATING, ACTIVE, DELETING", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "unique_id": { Type: schema.TypeString, Computed: true, @@ -137,7 +154,7 @@ func resourceNetworkConnectivityHubCreate(d *schema.ResourceData, meta interface obj := &networkconnectivity.Hub{ Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -188,7 +205,7 @@ func resourceNetworkConnectivityHubRead(d *schema.ResourceData, meta interface{} obj := &networkconnectivity.Hub{ Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -220,8 +237,8 @@ func resourceNetworkConnectivityHubRead(d *schema.ResourceData, meta interface{} if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) @@ -229,12 +246,18 @@ func resourceNetworkConnectivityHubRead(d *schema.ResourceData, meta interface{} if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenNetworkConnectivityHubLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("routing_vpcs", flattenNetworkConnectivityHubRoutingVpcsArray(res.RoutingVpcs)); err != nil { return fmt.Errorf("error setting routing_vpcs in state: %s", err) } if err = d.Set("state", res.State); err != nil { return fmt.Errorf("error setting state in state: %s", err) } + if err = d.Set("terraform_labels", flattenNetworkConnectivityHubTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("unique_id", res.UniqueId); err != nil { return fmt.Errorf("error setting unique_id in state: %s", err) } @@ -254,7 +277,7 @@ func resourceNetworkConnectivityHubUpdate(d *schema.ResourceData, meta interface obj := &networkconnectivity.Hub{ Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } directive := tpgdclresource.UpdateDirective @@ -300,7 +323,7 @@ func resourceNetworkConnectivityHubDelete(d *schema.ResourceData, meta interface obj := &networkconnectivity.Hub{ Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), Project: dcl.String(project), } @@ -375,3 +398,33 @@ func flattenNetworkConnectivityHubRoutingVpcs(obj 
*networkconnectivity.HubRoutin return transformed } + +func flattenNetworkConnectivityHubLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenNetworkConnectivityHubTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/networkconnectivity/resource_network_connectivity_hub_generated_test.go b/google/services/networkconnectivity/resource_network_connectivity_hub_generated_test.go index a493dc76f5d..556ed25ea4b 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_hub_generated_test.go +++ b/google/services/networkconnectivity/resource_network_connectivity_hub_generated_test.go @@ -50,17 +50,19 @@ func TestAccNetworkConnectivityHub_BasicHub(t *testing.T) { Config: testAccNetworkConnectivityHub_BasicHub(context), }, { - ResourceName: "google_network_connectivity_hub.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_hub.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccNetworkConnectivityHub_BasicHubUpdate0(context), }, { - ResourceName: "google_network_connectivity_hub.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_hub.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -71,12 +73,11 @@ func testAccNetworkConnectivityHub_BasicHub(context map[string]interface{}) stri resource "google_network_connectivity_hub" "primary" { name = "tf-test-hub%{random_suffix}" description = "A sample hub" + project = "%{project_name}" labels = { label-one = "value-one" } - - project = "%{project_name}" } @@ -88,12 +89,11 @@ func testAccNetworkConnectivityHub_BasicHubUpdate0(context map[string]interface{ resource "google_network_connectivity_hub" "primary" { name = "tf-test-hub%{random_suffix}" description = "An updated sample hub" + project = "%{project_name}" labels = { label-two = "value-one" } - - project = "%{project_name}" } diff --git a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policies_test.go b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policies_test.go index 2f643e52f9f..2398e1755b3 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policies_test.go +++ b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policies_test.go @@ -29,25 +29,28 @@ func TestAccNetworkConnectivityServiceConnectionPolicy_update(t *testing.T) { Config: testAccNetworkConnectivityServiceConnectionPolicy_basic(context), }, { - ResourceName: "google_network_connectivity_service_connection_policy.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_service_connection_policy.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", 
"terraform_labels"}, }, { Config: testAccNetworkConnectivityServiceConnectionPolicy_update(context), }, { - ResourceName: "google_network_connectivity_service_connection_policy.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_service_connection_policy.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccNetworkConnectivityServiceConnectionPolicy_basic(context), }, { - ResourceName: "google_network_connectivity_service_connection_policy.default", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_service_connection_policy.default", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go index cabcec827a1..88bc81589fa 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go +++ b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceNetworkConnectivityServiceConnectionPolicy() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -80,10 +86,14 @@ It is provided by the Service Producer. Google services have a prefix of gcp. Fo Description: `Free-text description of the resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `User-defined labels.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `User-defined labels. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "psc_config": { Type: schema.TypeList, @@ -113,6 +123,12 @@ It is provided by the Service Producer. Google services have a prefix of gcp. Fo Computed: true, Description: `The timestamp when the resource was created.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -224,6 +240,13 @@ facing or system internal. 
Possible values: ["CONNECTION_ERROR_TYPE_UNSPECIFIED" }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -278,10 +301,10 @@ func resourceNetworkConnectivityServiceConnectionPolicyCreate(d *schema.Resource } else if v, ok := d.GetOkExists("etag"); !tpgresource.IsEmptyValue(reflect.ValueOf(etagProp)) && (ok || !reflect.DeepEqual(v, etagProp)) { obj["etag"] = etagProp } - labelsProp, err := expandNetworkConnectivityServiceConnectionPolicyLabels(d.Get("labels"), d, config) + labelsProp, err := expandNetworkConnectivityServiceConnectionPolicyEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -409,6 +432,12 @@ func resourceNetworkConnectivityServiceConnectionPolicyRead(d *schema.ResourceDa if err := d.Set("labels", flattenNetworkConnectivityServiceConnectionPolicyLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading ServiceConnectionPolicy: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkConnectivityServiceConnectionPolicyTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ServiceConnectionPolicy: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkConnectivityServiceConnectionPolicyEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ServiceConnectionPolicy: %s", err) + } return nil } @@ -447,10 +476,10 @@ func resourceNetworkConnectivityServiceConnectionPolicyUpdate(d *schema.Resource } else if v, ok := d.GetOkExists("etag"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, etagProp)) { obj["etag"] = etagProp } - labelsProp, err := expandNetworkConnectivityServiceConnectionPolicyLabels(d.Get("labels"), d, config) + labelsProp, err := expandNetworkConnectivityServiceConnectionPolicyEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -479,7 +508,7 @@ func resourceNetworkConnectivityServiceConnectionPolicyUpdate(d *schema.Resource updateMask = append(updateMask, "etag") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -577,9 +606,9 @@ func resourceNetworkConnectivityServiceConnectionPolicyDelete(d *schema.Resource func resourceNetworkConnectivityServiceConnectionPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/serviceConnectionPolicies/(?P[^/]+)", - 
"(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/serviceConnectionPolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -771,6 +800,36 @@ func flattenNetworkConnectivityServiceConnectionPolicyInfrastructure(v interface } func flattenNetworkConnectivityServiceConnectionPolicyLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkConnectivityServiceConnectionPolicyTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkConnectivityServiceConnectionPolicyEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -824,7 +883,7 @@ func expandNetworkConnectivityServiceConnectionPolicyEtag(v interface{}, d tpgre return v, nil } -func expandNetworkConnectivityServiceConnectionPolicyLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandNetworkConnectivityServiceConnectionPolicyEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy_generated_test.go b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy_generated_test.go index 5c0a82f5670..0e973eb360d 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy_generated_test.go +++ b/google/services/networkconnectivity/resource_network_connectivity_service_connection_policy_generated_test.go @@ -50,7 +50,7 @@ func TestAccNetworkConnectivityServiceConnectionPolicy_networkConnectivityPolicy ResourceName: "google_network_connectivity_service_connection_policy.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkconnectivity/resource_network_connectivity_spoke.go b/google/services/networkconnectivity/resource_network_connectivity_spoke.go index 9461d02aa6e..e036125a318 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_spoke.go +++ b/google/services/networkconnectivity/resource_network_connectivity_spoke.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceNetworkConnectivitySpoke() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( 
+ tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "hub": { @@ -80,11 +85,10 @@ func ResourceNetworkConnectivitySpoke() *schema.Resource { Description: "An optional description of the spoke.", }, - "labels": { + "effective_labels": { Type: schema.TypeMap, - Optional: true, - Description: "Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements).", - Elem: &schema.Schema{Type: schema.TypeString}, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", }, "linked_interconnect_attachments": { @@ -142,12 +146,25 @@ func ResourceNetworkConnectivitySpoke() *schema.Resource { Description: "Output only. The time the spoke was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements).\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "state": { Type: schema.TypeString, Computed: true, Description: "Output only. The current lifecycle state of this spoke. Possible values: STATE_UNSPECIFIED, CREATING, ACTIVE, DELETING", }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "unique_id": { Type: schema.TypeString, Computed: true, @@ -281,7 +298,7 @@ func resourceNetworkConnectivitySpokeCreate(d *schema.ResourceData, meta interfa Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), LinkedInterconnectAttachments: expandNetworkConnectivitySpokeLinkedInterconnectAttachments(d.Get("linked_interconnect_attachments")), LinkedRouterApplianceInstances: expandNetworkConnectivitySpokeLinkedRouterApplianceInstances(d.Get("linked_router_appliance_instances")), LinkedVPCNetwork: expandNetworkConnectivitySpokeLinkedVPCNetwork(d.Get("linked_vpc_network")), @@ -338,7 +355,7 @@ func resourceNetworkConnectivitySpokeRead(d *schema.ResourceData, meta interface Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), LinkedInterconnectAttachments: expandNetworkConnectivitySpokeLinkedInterconnectAttachments(d.Get("linked_interconnect_attachments")), LinkedRouterApplianceInstances: expandNetworkConnectivitySpokeLinkedRouterApplianceInstances(d.Get("linked_router_appliance_instances")), LinkedVPCNetwork: expandNetworkConnectivitySpokeLinkedVPCNetwork(d.Get("linked_vpc_network")), @@ -380,8 +397,8 @@ func resourceNetworkConnectivitySpokeRead(d *schema.ResourceData, meta interface if err = d.Set("description", res.Description); err != nil { 
return fmt.Errorf("error setting description in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) } if err = d.Set("linked_interconnect_attachments", flattenNetworkConnectivitySpokeLinkedInterconnectAttachments(res.LinkedInterconnectAttachments)); err != nil { return fmt.Errorf("error setting linked_interconnect_attachments in state: %s", err) @@ -401,9 +418,15 @@ func resourceNetworkConnectivitySpokeRead(d *schema.ResourceData, meta interface if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenNetworkConnectivitySpokeLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("state", res.State); err != nil { return fmt.Errorf("error setting state in state: %s", err) } + if err = d.Set("terraform_labels", flattenNetworkConnectivitySpokeTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("unique_id", res.UniqueId); err != nil { return fmt.Errorf("error setting unique_id in state: %s", err) } @@ -425,7 +448,7 @@ func resourceNetworkConnectivitySpokeUpdate(d *schema.ResourceData, meta interfa Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), LinkedInterconnectAttachments: expandNetworkConnectivitySpokeLinkedInterconnectAttachments(d.Get("linked_interconnect_attachments")), LinkedRouterApplianceInstances: expandNetworkConnectivitySpokeLinkedRouterApplianceInstances(d.Get("linked_router_appliance_instances")), LinkedVPCNetwork: expandNetworkConnectivitySpokeLinkedVPCNetwork(d.Get("linked_vpc_network")), @@ -477,7 +500,7 @@ func resourceNetworkConnectivitySpokeDelete(d *schema.ResourceData, meta interfa Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), - Labels: tpgresource.CheckStringMap(d.Get("labels")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), LinkedInterconnectAttachments: expandNetworkConnectivitySpokeLinkedInterconnectAttachments(d.Get("linked_interconnect_attachments")), LinkedRouterApplianceInstances: expandNetworkConnectivitySpokeLinkedRouterApplianceInstances(d.Get("linked_router_appliance_instances")), LinkedVPCNetwork: expandNetworkConnectivitySpokeLinkedVPCNetwork(d.Get("linked_vpc_network")), @@ -699,3 +722,33 @@ func flattenNetworkConnectivitySpokeLinkedVpnTunnels(obj *networkconnectivity.Sp return []interface{}{transformed} } + +func flattenNetworkConnectivitySpokeLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenNetworkConnectivitySpokeTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := 
d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/networkconnectivity/resource_network_connectivity_spoke_generated_test.go b/google/services/networkconnectivity/resource_network_connectivity_spoke_generated_test.go index 2fa41dc1404..d3567c012fa 100644 --- a/google/services/networkconnectivity/resource_network_connectivity_spoke_generated_test.go +++ b/google/services/networkconnectivity/resource_network_connectivity_spoke_generated_test.go @@ -51,17 +51,19 @@ func TestAccNetworkConnectivitySpoke_LinkedVPCNetworkHandWritten(t *testing.T) { Config: testAccNetworkConnectivitySpoke_LinkedVPCNetworkHandWritten(context), }, { - ResourceName: "google_network_connectivity_spoke.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_spoke.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccNetworkConnectivitySpoke_LinkedVPCNetworkHandWrittenUpdate0(context), }, { - ResourceName: "google_network_connectivity_spoke.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_spoke.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -85,17 +87,19 @@ func TestAccNetworkConnectivitySpoke_RouterApplianceHandWritten(t *testing.T) { Config: testAccNetworkConnectivitySpoke_RouterApplianceHandWritten(context), }, { - ResourceName: "google_network_connectivity_spoke.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_spoke.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccNetworkConnectivitySpoke_RouterApplianceHandWrittenUpdate0(context), }, { - ResourceName: "google_network_connectivity_spoke.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_connectivity_spoke.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkmanagement/resource_network_management_connectivity_test_resource.go b/google/services/networkmanagement/resource_network_management_connectivity_test_resource.go index 7002d0f48c0..66272fd9cea 100644 --- a/google/services/networkmanagement/resource_network_management_connectivity_test_resource.go +++ b/google/services/networkmanagement/resource_network_management_connectivity_test_resource.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceNetworkManagementConnectivityTest() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "destination": { Type: schema.TypeList, @@ -199,10 +205,14 @@ The following are two cases where you must provide the project ID: Maximum of 512 characters.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user-provided metadata.`, - Elem: &schema.Schema{Type: 
schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user-provided metadata. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "protocol": { Type: schema.TypeString, @@ -220,6 +230,19 @@ boundaries.`, Type: schema.TypeString, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -275,10 +298,10 @@ func resourceNetworkManagementConnectivityTestCreate(d *schema.ResourceData, met } else if v, ok := d.GetOkExists("related_projects"); !tpgresource.IsEmptyValue(reflect.ValueOf(relatedProjectsProp)) && (ok || !reflect.DeepEqual(v, relatedProjectsProp)) { obj["relatedProjects"] = relatedProjectsProp } - labelsProp, err := expandNetworkManagementConnectivityTestLabels(d.Get("labels"), d, config) + labelsProp, err := expandNetworkManagementConnectivityTestEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -411,6 +434,12 @@ func resourceNetworkManagementConnectivityTestRead(d *schema.ResourceData, meta if err := d.Set("labels", flattenNetworkManagementConnectivityTestLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading ConnectivityTest: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkManagementConnectivityTestTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectivityTest: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkManagementConnectivityTestEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading ConnectivityTest: %s", err) + } return nil } @@ -461,10 +490,10 @@ func resourceNetworkManagementConnectivityTestUpdate(d *schema.ResourceData, met } else if v, ok := d.GetOkExists("related_projects"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, relatedProjectsProp)) { obj["relatedProjects"] = relatedProjectsProp } - labelsProp, err := expandNetworkManagementConnectivityTestLabels(d.Get("labels"), d, config) + labelsProp, err := expandNetworkManagementConnectivityTestEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -505,7 +534,7 @@ func 
resourceNetworkManagementConnectivityTestUpdate(d *schema.ResourceData, met updateMask = append(updateMask, "relatedProjects") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -603,9 +632,9 @@ func resourceNetworkManagementConnectivityTestDelete(d *schema.ResourceData, met func resourceNetworkManagementConnectivityTestImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/connectivityTests/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/connectivityTests/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -754,6 +783,36 @@ func flattenNetworkManagementConnectivityTestRelatedProjects(v interface{}, d *s } func flattenNetworkManagementConnectivityTestLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkManagementConnectivityTestTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNetworkManagementConnectivityTestEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -923,7 +982,7 @@ func expandNetworkManagementConnectivityTestRelatedProjects(v interface{}, d tpg return v, nil } -func expandNetworkManagementConnectivityTestLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandNetworkManagementConnectivityTestEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/networkmanagement/resource_network_management_connectivity_test_resource_generated_test.go b/google/services/networkmanagement/resource_network_management_connectivity_test_resource_generated_test.go index e631f74ad50..8a68ff589f5 100644 --- a/google/services/networkmanagement/resource_network_management_connectivity_test_resource_generated_test.go +++ b/google/services/networkmanagement/resource_network_management_connectivity_test_resource_generated_test.go @@ -46,9 +46,10 @@ func TestAccNetworkManagementConnectivityTest_networkManagementConnectivityTestI Config: testAccNetworkManagementConnectivityTest_networkManagementConnectivityTestInstancesExample(context), }, { - ResourceName: "google_network_management_connectivity_test.instance-test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_management_connectivity_test.instance-test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -67,6 +68,9 @@ resource 
"google_network_management_connectivity_test" "instance-test" { } protocol = "TCP" + labels = { + env = "test" + } } resource "google_compute_instance" "source" { @@ -130,9 +134,10 @@ func TestAccNetworkManagementConnectivityTest_networkManagementConnectivityTestA Config: testAccNetworkManagementConnectivityTest_networkManagementConnectivityTestAddressesExample(context), }, { - ResourceName: "google_network_management_connectivity_test.address-test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_management_connectivity_test.address-test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networksecurity/resource_network_security_address_group.go b/google/services/networksecurity/resource_network_security_address_group.go index b9fc69af88d..c6746bca261 100644 --- a/google/services/networksecurity/resource_network_security_address_group.go +++ b/google/services/networksecurity/resource_network_security_address_group.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceNetworkSecurityAddressGroup() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "capacity": { Type: schema.TypeInt, @@ -88,7 +93,11 @@ The default value is 'global'.`, Type: schema.TypeMap, Optional: true, Description: `Set of label tags associated with the AddressGroup resource. -An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "parent": { @@ -104,6 +113,19 @@ An object containing a list of "key": value pairs. Example: { "name": "wrench", A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. 
Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z"`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -130,12 +152,6 @@ func resourceNetworkSecurityAddressGroupCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkSecurityAddressGroupLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } typeProp, err := expandNetworkSecurityAddressGroupType(d.Get("type"), d, config) if err != nil { return err @@ -154,6 +170,12 @@ func resourceNetworkSecurityAddressGroupCreate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("capacity"); !tpgresource.IsEmptyValue(reflect.ValueOf(capacityProp)) && (ok || !reflect.DeepEqual(v, capacityProp)) { obj["capacity"] = capacityProp } + labelsProp, err := expandNetworkSecurityAddressGroupEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkSecurityBasePath}}{{parent}}/locations/{{location}}/addressGroups?addressGroupId={{name}}") if err != nil { @@ -254,6 +276,12 @@ func resourceNetworkSecurityAddressGroupRead(d *schema.ResourceData, meta interf if err := d.Set("capacity", flattenNetworkSecurityAddressGroupCapacity(res["capacity"], d, config)); err != nil { return fmt.Errorf("Error reading AddressGroup: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkSecurityAddressGroupTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AddressGroup: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkSecurityAddressGroupEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading AddressGroup: %s", err) + } return nil } @@ -274,12 +302,6 @@ func resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkSecurityAddressGroupLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } typeProp, err := expandNetworkSecurityAddressGroupType(d.Get("type"), d, config) if err != nil { return err @@ -298,6 +320,12 @@ func 
resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte } else if v, ok := d.GetOkExists("capacity"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, capacityProp)) { obj["capacity"] = capacityProp } + labelsProp, err := expandNetworkSecurityAddressGroupEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkSecurityBasePath}}{{parent}}/locations/{{location}}/addressGroups/{{name}}") if err != nil { @@ -311,10 +339,6 @@ func resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("type") { updateMask = append(updateMask, "type") } @@ -326,6 +350,10 @@ func resourceNetworkSecurityAddressGroupUpdate(d *schema.ResourceData, meta inte if d.HasChange("capacity") { updateMask = append(updateMask, "capacity") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -414,7 +442,7 @@ func resourceNetworkSecurityAddressGroupDelete(d *schema.ResourceData, meta inte func resourceNetworkSecurityAddressGroupImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P.+)/locations/(?P[^/]+)/addressGroups/(?P[^/]+)", + "^(?P.+)/locations/(?P[^/]+)/addressGroups/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -442,7 +470,18 @@ func flattenNetworkSecurityAddressGroupUpdateTime(v interface{}, d *schema.Resou } func flattenNetworkSecurityAddressGroupLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNetworkSecurityAddressGroupType(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -470,19 +509,27 @@ func flattenNetworkSecurityAddressGroupCapacity(v interface{}, d *schema.Resourc return v // let terraform core handle it otherwise } -func expandNetworkSecurityAddressGroupDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandNetworkSecurityAddressGroupLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenNetworkSecurityAddressGroupTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = 
v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenNetworkSecurityAddressGroupEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandNetworkSecurityAddressGroupDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandNetworkSecurityAddressGroupType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -496,3 +543,14 @@ func expandNetworkSecurityAddressGroupItems(v interface{}, d tpgresource.Terrafo func expandNetworkSecurityAddressGroupCapacity(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandNetworkSecurityAddressGroupEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/networksecurity/resource_network_security_address_group_generated_test.go b/google/services/networksecurity/resource_network_security_address_group_generated_test.go index fe8481bd97e..3bd81e36998 100644 --- a/google/services/networksecurity/resource_network_security_address_group_generated_test.go +++ b/google/services/networksecurity/resource_network_security_address_group_generated_test.go @@ -51,7 +51,7 @@ func TestAccNetworkSecurityAddressGroup_networkSecurityAddressGroupsBasicExample ResourceName: "google_network_security_address_group.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"parent", "name", "location"}, + ImportStateVerifyIgnore: []string{"parent", "name", "location", "labels", "terraform_labels"}, }, }, }) @@ -90,7 +90,7 @@ func TestAccNetworkSecurityAddressGroup_networkSecurityAddressGroupsOrganization ResourceName: "google_network_security_address_group.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"parent", "name", "location"}, + ImportStateVerifyIgnore: []string{"parent", "name", "location", "labels", "terraform_labels"}, }, }, }) @@ -129,7 +129,7 @@ func TestAccNetworkSecurityAddressGroup_networkSecurityAddressGroupsAdvancedExam ResourceName: "google_network_security_address_group.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"parent", "name", "location"}, + ImportStateVerifyIgnore: []string{"parent", "name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networksecurity/resource_network_security_address_group_test.go b/google/services/networksecurity/resource_network_security_address_group_test.go index b124e2573ef..ef1e14c1519 100644 --- a/google/services/networksecurity/resource_network_security_address_group_test.go +++ b/google/services/networksecurity/resource_network_security_address_group_test.go @@ -26,17 +26,19 @@ func TestAccNetworkSecurityAddressGroups_update(t *testing.T) { Config: testAccNetworkSecurityAddressGroups_basic(addressGroupsName, projectName), }, { - ResourceName: "google_network_security_address_group.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_security_address_group.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: 
[]string{"labels", "terraform_labels"}, }, { Config: testAccNetworkSecurityAddressGroups_update(addressGroupsName, projectName), }, { - ResourceName: "google_network_security_address_group.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_security_address_group.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networksecurity/resource_network_security_gateway_security_policy.go b/google/services/networksecurity/resource_network_security_gateway_security_policy.go index 6e4166f0ed0..d71998ce918 100644 --- a/google/services/networksecurity/resource_network_security_gateway_security_policy.go +++ b/google/services/networksecurity/resource_network_security_gateway_security_policy.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceNetworkSecurityGatewaySecurityPolicy() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -350,9 +355,9 @@ func resourceNetworkSecurityGatewaySecurityPolicyDelete(d *schema.ResourceData, func resourceNetworkSecurityGatewaySecurityPolicyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/gatewaySecurityPolicies/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/gatewaySecurityPolicies/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/networksecurity/resource_network_security_gateway_security_policy_rule.go b/google/services/networksecurity/resource_network_security_gateway_security_policy_rule.go index ebc9e024d54..520df457a64 100644 --- a/google/services/networksecurity/resource_network_security_gateway_security_policy_rule.go +++ b/google/services/networksecurity/resource_network_security_gateway_security_policy_rule.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceNetworkSecurityGatewaySecurityPolicyRule() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "basic_profile": { Type: schema.TypeString, @@ -503,9 +508,9 @@ func resourceNetworkSecurityGatewaySecurityPolicyRuleDelete(d *schema.ResourceDa func resourceNetworkSecurityGatewaySecurityPolicyRuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/gatewaySecurityPolicies/(?P[^/]+)/rules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + 
"^projects/(?P[^/]+)/locations/(?P[^/]+)/gatewaySecurityPolicies/(?P[^/]+)/rules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/networksecurity/resource_network_security_url_lists.go b/google/services/networksecurity/resource_network_security_url_lists.go index 7b6e9988843..d8ff41ea88d 100644 --- a/google/services/networksecurity/resource_network_security_url_lists.go +++ b/google/services/networksecurity/resource_network_security_url_lists.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceNetworkSecurityUrlLists() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -367,9 +372,9 @@ func resourceNetworkSecurityUrlListsDelete(d *schema.ResourceData, meta interfac func resourceNetworkSecurityUrlListsImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/urlLists/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/urlLists/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/networkservices/resource_network_services_edge_cache_keyset.go b/google/services/networkservices/resource_network_services_edge_cache_keyset.go index 3dfc6ded7fb..aa1e47b8e51 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_keyset.go +++ b/google/services/networkservices/resource_network_services_edge_cache_keyset.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceNetworkServicesEdgeCacheKeyset() *schema.Resource { Delete: schema.DefaultTimeout(90 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -62,10 +68,13 @@ and all following characters must be a dash, underscore, letter or digit.`, Description: `A human-readable description of the resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the EdgeCache resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the EdgeCache resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "public_key": { Type: schema.TypeList, @@ -130,6 +139,19 @@ See RFC 2104, Section 3 for more details on these recommendations.`, }, AtLeastOneOf: []string{"public_key", "validation_shared_keys"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -155,12 +177,6 @@ func resourceNetworkServicesEdgeCacheKeysetCreate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkServicesEdgeCacheKeysetLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } publicKeysProp, err := expandNetworkServicesEdgeCacheKeysetPublicKey(d.Get("public_key"), d, config) if err != nil { return err @@ -173,6 +189,12 @@ func resourceNetworkServicesEdgeCacheKeysetCreate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("validation_shared_keys"); !tpgresource.IsEmptyValue(reflect.ValueOf(validationSharedKeysProp)) && (ok || !reflect.DeepEqual(v, validationSharedKeysProp)) { obj["validationSharedKeys"] = validationSharedKeysProp } + labelsProp, err := expandNetworkServicesEdgeCacheKeysetEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheKeysets?edgeCacheKeysetId={{name}}") if err != nil { @@ -282,6 +304,12 @@ func resourceNetworkServicesEdgeCacheKeysetRead(d *schema.ResourceData, meta int if err := d.Set("validation_shared_keys", flattenNetworkServicesEdgeCacheKeysetValidationSharedKeys(res["validationSharedKeys"], d, config)); err != nil { return fmt.Errorf("Error reading EdgeCacheKeyset: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkServicesEdgeCacheKeysetTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheKeyset: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkServicesEdgeCacheKeysetEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheKeyset: %s", err) + } return nil } @@ -308,12 +336,6 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err 
:= expandNetworkServicesEdgeCacheKeysetLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } publicKeysProp, err := expandNetworkServicesEdgeCacheKeysetPublicKey(d.Get("public_key"), d, config) if err != nil { return err @@ -326,6 +348,12 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("validation_shared_keys"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, validationSharedKeysProp)) { obj["validationSharedKeys"] = validationSharedKeysProp } + labelsProp, err := expandNetworkServicesEdgeCacheKeysetEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheKeysets/{{name}}") if err != nil { @@ -339,10 +367,6 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("public_key") { updateMask = append(updateMask, "publicKeys") } @@ -350,6 +374,10 @@ func resourceNetworkServicesEdgeCacheKeysetUpdate(d *schema.ResourceData, meta i if d.HasChange("validation_shared_keys") { updateMask = append(updateMask, "validationSharedKeys") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -447,9 +475,9 @@ func resourceNetworkServicesEdgeCacheKeysetDelete(d *schema.ResourceData, meta i func resourceNetworkServicesEdgeCacheKeysetImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/edgeCacheKeysets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/edgeCacheKeysets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -469,7 +497,18 @@ func flattenNetworkServicesEdgeCacheKeysetDescription(v interface{}, d *schema.R } func flattenNetworkServicesEdgeCacheKeysetLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNetworkServicesEdgeCacheKeysetPublicKey(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -526,19 +565,27 @@ func flattenNetworkServicesEdgeCacheKeysetValidationSharedKeysSecretVersion(v in return v } -func expandNetworkServicesEdgeCacheKeysetDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func 
expandNetworkServicesEdgeCacheKeysetLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenNetworkServicesEdgeCacheKeysetTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenNetworkServicesEdgeCacheKeysetEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandNetworkServicesEdgeCacheKeysetDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandNetworkServicesEdgeCacheKeysetPublicKey(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -614,3 +661,14 @@ func expandNetworkServicesEdgeCacheKeysetValidationSharedKeys(v interface{}, d t func expandNetworkServicesEdgeCacheKeysetValidationSharedKeysSecretVersion(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandNetworkServicesEdgeCacheKeysetEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/networkservices/resource_network_services_edge_cache_keyset_generated_test.go b/google/services/networkservices/resource_network_services_edge_cache_keyset_generated_test.go index ea0d41fc417..153b4107b4a 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_keyset_generated_test.go +++ b/google/services/networkservices/resource_network_services_edge_cache_keyset_generated_test.go @@ -49,7 +49,7 @@ func TestAccNetworkServicesEdgeCacheKeyset_networkServicesEdgeCacheKeysetBasicEx ResourceName: "google_network_services_edge_cache_keyset.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) @@ -92,7 +92,7 @@ func TestAccNetworkServicesEdgeCacheKeyset_networkServicesEdgeCacheKeysetDualTok ResourceName: "google_network_services_edge_cache_keyset.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_edge_cache_keyset_test.go b/google/services/networkservices/resource_network_services_edge_cache_keyset_test.go index 7f49f7ba75d..862c946ae35 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_keyset_test.go +++ b/google/services/networkservices/resource_network_services_edge_cache_keyset_test.go @@ -28,7 +28,7 @@ func TestAccNetworkServicesEdgeCacheKeyset_update(t *testing.T) { ResourceName: "google_network_services_edge_cache_keyset.default", ImportState: true, 
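The regenerated flatteners above split label handling in two: `flatten...Labels` (and `flatten...TerraformLabels`) keep only the keys already present in the corresponding state attribute, while `flatten...EffectiveLabels` passes the API map through untouched. A stripped-down sketch of that intersection using plain maps (the helper name is invented; the real generated code reads the configured keys from `*schema.ResourceData`):

```go
package main

import "fmt"

// flattenConfiguredLabels mimics the pattern used by the generated
// flatten*Labels functions: keep only the keys that appear in the user's
// configuration, taking their values from what the API returned.
func flattenConfiguredLabels(apiLabels, configuredLabels map[string]string) map[string]string {
	if apiLabels == nil {
		return nil
	}
	out := make(map[string]string, len(configuredLabels))
	for k := range configuredLabels {
		if v, ok := apiLabels[k]; ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	api := map[string]string{
		"env":             "prod",   // configured by the user
		"goog-managed-by": "system", // attached outside Terraform
	}
	cfg := map[string]string{"env": "prod"}

	fmt.Println(flattenConfiguredLabels(api, cfg)) // map[env:prod]  -> "labels"
	fmt.Println(api)                               // the full map   -> "effective_labels"
}
```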
ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, { Config: testAccNetworkServicesEdgeCacheKeyset_update(context), @@ -37,7 +37,7 @@ func TestAccNetworkServicesEdgeCacheKeyset_update(t *testing.T) { ResourceName: "google_network_services_edge_cache_keyset.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_edge_cache_origin.go b/google/services/networkservices/resource_network_services_edge_cache_origin.go index ec563a32379..b45185924cc 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_origin.go +++ b/google/services/networkservices/resource_network_services_edge_cache_origin.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -49,6 +50,11 @@ func ResourceNetworkServicesEdgeCacheOrigin() *schema.Resource { Delete: schema.DefaultTimeout(120 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -111,10 +117,13 @@ The value of timeout.maxAttemptsTimeout dictates the timeout across all origins. A reference to a Topic resource.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the EdgeCache resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the EdgeCache resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
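Each resource converted here ends up with three label maps: `labels` (configuration only), `terraform_labels` (configuration combined with provider-level default labels), and `effective_labels` (everything actually present on the resource in GCP). A toy sketch of how the first two relate, assuming configuration wins on key collisions; the provider's real merging happens in its `SetLabelsDiff` plan-time hook, not in a helper like this:

```go
package main

import "fmt"

// mergeLabels overlays configuration labels on top of provider defaults;
// on key collisions the resource configuration wins. This mirrors the intent
// of terraform_labels, not the provider's exact implementation.
func mergeLabels(providerDefaults, configLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(configLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range configLabels {
		merged[k] = v
	}
	return merged
}

func main() {
	providerDefaults := map[string]string{"team": "networking"}
	configLabels := map[string]string{"env": "prod"}

	terraformLabels := mergeLabels(providerDefaults, configLabels)
	fmt.Println(terraformLabels) // map[env:prod team:networking]

	// effective_labels additionally reflects labels added outside Terraform
	// (for example by other services); only the API response can supply those.
}
```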
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "max_attempts": { Type: schema.TypeInt, @@ -331,6 +340,19 @@ If the response headers have already been written to the connection, the respons }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -356,12 +378,6 @@ func resourceNetworkServicesEdgeCacheOriginCreate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkServicesEdgeCacheOriginLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } originAddressProp, err := expandNetworkServicesEdgeCacheOriginOriginAddress(d.Get("origin_address"), d, config) if err != nil { return err @@ -422,6 +438,12 @@ func resourceNetworkServicesEdgeCacheOriginCreate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("origin_redirect"); !tpgresource.IsEmptyValue(reflect.ValueOf(originRedirectProp)) && (ok || !reflect.DeepEqual(v, originRedirectProp)) { obj["originRedirect"] = originRedirectProp } + labelsProp, err := expandNetworkServicesEdgeCacheOriginEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheOrigins?edgeCacheOriginId={{name}}") if err != nil { @@ -553,6 +575,12 @@ func resourceNetworkServicesEdgeCacheOriginRead(d *schema.ResourceData, meta int if err := d.Set("origin_redirect", flattenNetworkServicesEdgeCacheOriginOriginRedirect(res["originRedirect"], d, config)); err != nil { return fmt.Errorf("Error reading EdgeCacheOrigin: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkServicesEdgeCacheOriginTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheOrigin: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkServicesEdgeCacheOriginEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheOrigin: %s", err) + } return nil } @@ -579,12 +607,6 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkServicesEdgeCacheOriginLabels(d.Get("labels"), d, config) - if err != 
nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } originAddressProp, err := expandNetworkServicesEdgeCacheOriginOriginAddress(d.Get("origin_address"), d, config) if err != nil { return err @@ -645,6 +667,12 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("origin_redirect"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, originRedirectProp)) { obj["originRedirect"] = originRedirectProp } + labelsProp, err := expandNetworkServicesEdgeCacheOriginEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheOrigins/{{name}}") if err != nil { @@ -658,10 +686,6 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("origin_address") { updateMask = append(updateMask, "originAddress") } @@ -701,6 +725,10 @@ func resourceNetworkServicesEdgeCacheOriginUpdate(d *schema.ResourceData, meta i if d.HasChange("origin_redirect") { updateMask = append(updateMask, "originRedirect") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -796,9 +824,9 @@ func resourceNetworkServicesEdgeCacheOriginDelete(d *schema.ResourceData, meta i func resourceNetworkServicesEdgeCacheOriginImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/edgeCacheOrigins/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/edgeCacheOrigins/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -818,7 +846,18 @@ func flattenNetworkServicesEdgeCacheOriginDescription(v interface{}, d *schema.R } func flattenNetworkServicesEdgeCacheOriginLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNetworkServicesEdgeCacheOriginOriginAddress(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1018,19 +1057,27 @@ func flattenNetworkServicesEdgeCacheOriginOriginRedirectRedirectConditions(v int return v } -func expandNetworkServicesEdgeCacheOriginDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandNetworkServicesEdgeCacheOriginLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) 
(map[string]string, error) { +func flattenNetworkServicesEdgeCacheOriginTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenNetworkServicesEdgeCacheOriginEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandNetworkServicesEdgeCacheOriginDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandNetworkServicesEdgeCacheOriginOriginAddress(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -1296,3 +1343,14 @@ func expandNetworkServicesEdgeCacheOriginOriginRedirect(v interface{}, d tpgreso func expandNetworkServicesEdgeCacheOriginOriginRedirectRedirectConditions(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandNetworkServicesEdgeCacheOriginEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/networkservices/resource_network_services_edge_cache_origin_generated_test.go b/google/services/networkservices/resource_network_services_edge_cache_origin_generated_test.go index 392f814dddb..984ebfe3c94 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_origin_generated_test.go +++ b/google/services/networkservices/resource_network_services_edge_cache_origin_generated_test.go @@ -49,7 +49,7 @@ func TestAccNetworkServicesEdgeCacheOrigin_networkServicesEdgeCacheOriginBasicEx ResourceName: "google_network_services_edge_cache_origin.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "timeout"}, + ImportStateVerifyIgnore: []string{"name", "timeout", "labels", "terraform_labels"}, }, }, }) @@ -84,7 +84,7 @@ func TestAccNetworkServicesEdgeCacheOrigin_networkServicesEdgeCacheOriginAdvance ResourceName: "google_network_services_edge_cache_origin.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "timeout"}, + ImportStateVerifyIgnore: []string{"name", "timeout", "labels", "terraform_labels"}, }, }, }) @@ -171,7 +171,7 @@ func TestAccNetworkServicesEdgeCacheOrigin_networkServicesEdgeCacheOriginV4authE ResourceName: "google_network_services_edge_cache_origin.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "timeout"}, + ImportStateVerifyIgnore: []string{"name", "timeout", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_edge_cache_origin_test.go b/google/services/networkservices/resource_network_services_edge_cache_origin_test.go index 651272bc60e..65ced3f9f77 100644 --- 
a/google/services/networkservices/resource_network_services_edge_cache_origin_test.go +++ b/google/services/networkservices/resource_network_services_edge_cache_origin_test.go @@ -25,7 +25,7 @@ func TestAccNetworkServicesEdgeCacheOrigin_updateAndImport(t *testing.T) { ResourceName: "google_network_services_edge_cache_origin.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, { Config: testAccNetworkServicesEdgeCacheOrigin_update_1(name), @@ -34,7 +34,7 @@ func TestAccNetworkServicesEdgeCacheOrigin_updateAndImport(t *testing.T) { ResourceName: "google_network_services_edge_cache_origin.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_edge_cache_service.go b/google/services/networkservices/resource_network_services_edge_cache_service.go index 74b2d6c3358..290de3becc4 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_service.go +++ b/google/services/networkservices/resource_network_services_edge_cache_service.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourceNetworkServicesEdgeCacheService() *schema.Resource { Delete: schema.DefaultTimeout(60 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -930,10 +936,13 @@ Note that only "global" certificates with a "scope" of "EDGE_CACHE" can be attac }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the EdgeCache resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the EdgeCache resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
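As above, the edge cache resources now register a `CustomizeDiff` built with `customdiff.All`, which chains several plan-time hooks and stops at the first error. A self-contained sketch of that composition follows; the two hook bodies are placeholders and the field names are illustrative, not the provider's actual `tpgresource.SetLabelsDiff` or `DefaultProviderProject`:

```go
package main

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// exampleResource shows how customdiff.All chains plan-time hooks: each
// schema.CustomizeDiffFunc runs in order and any returned error aborts the plan.
func exampleResource() *schema.Resource {
	return &schema.Resource{
		CustomizeDiff: customdiff.All(
			// Placeholder for a labels-style hook: mark a computed field as
			// unknown whenever its user-facing counterpart changes.
			func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
				if d.HasChange("labels") {
					return d.SetNewComputed("effective_labels")
				}
				return nil
			},
			// Placeholder for a provider-default hook; the real
			// DefaultProviderProject lives in the tpgresource package.
			func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
				return nil
			},
		),
		Schema: map[string]*schema.Schema{
			"labels": {
				Type:     schema.TypeMap,
				Optional: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
			"effective_labels": {
				Type:     schema.TypeMap,
				Computed: true,
				Elem:     &schema.Schema{Type: schema.TypeString},
			},
		},
	}
}

func main() {
	_ = exampleResource()
}
```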
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "log_config": { Type: schema.TypeList, @@ -974,6 +983,12 @@ You must have at least one (1) edgeSslCertificate specified to enable this.`, If not set, the EdgeCacheService has no SSL policy configured, and will default to the "COMPATIBLE" policy.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "ipv4_addresses": { Type: schema.TypeList, Computed: true, @@ -990,6 +1005,13 @@ If not set, the EdgeCacheService has no SSL policy configured, and will default Type: schema.TypeString, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -1015,12 +1037,6 @@ func resourceNetworkServicesEdgeCacheServiceCreate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkServicesEdgeCacheServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } disableQuicProp, err := expandNetworkServicesEdgeCacheServiceDisableQuic(d.Get("disable_quic"), d, config) if err != nil { return err @@ -1069,6 +1085,12 @@ func resourceNetworkServicesEdgeCacheServiceCreate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("edge_security_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(edgeSecurityPolicyProp)) && (ok || !reflect.DeepEqual(v, edgeSecurityPolicyProp)) { obj["edgeSecurityPolicy"] = edgeSecurityPolicyProp } + labelsProp, err := expandNetworkServicesEdgeCacheServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheServices?edgeCacheServiceId={{name}}") if err != nil { @@ -1200,6 +1222,12 @@ func resourceNetworkServicesEdgeCacheServiceRead(d *schema.ResourceData, meta in if err := d.Set("edge_security_policy", flattenNetworkServicesEdgeCacheServiceEdgeSecurityPolicy(res["edgeSecurityPolicy"], d, config)); err != nil { return fmt.Errorf("Error reading EdgeCacheService: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkServicesEdgeCacheServiceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheService: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkServicesEdgeCacheServiceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading EdgeCacheService: %s", err) + } return nil } @@ -1226,12 +1254,6 @@ func 
resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandNetworkServicesEdgeCacheServiceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } disableQuicProp, err := expandNetworkServicesEdgeCacheServiceDisableQuic(d.Get("disable_quic"), d, config) if err != nil { return err @@ -1280,6 +1302,12 @@ func resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta } else if v, ok := d.GetOkExists("edge_security_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, edgeSecurityPolicyProp)) { obj["edgeSecurityPolicy"] = edgeSecurityPolicyProp } + labelsProp, err := expandNetworkServicesEdgeCacheServiceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/global/edgeCacheServices/{{name}}") if err != nil { @@ -1293,10 +1321,6 @@ func resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("disable_quic") { updateMask = append(updateMask, "disableQuic") } @@ -1328,6 +1352,10 @@ func resourceNetworkServicesEdgeCacheServiceUpdate(d *schema.ResourceData, meta if d.HasChange("edge_security_policy") { updateMask = append(updateMask, "edgeSecurityPolicy") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -1423,9 +1451,9 @@ func resourceNetworkServicesEdgeCacheServiceDelete(d *schema.ResourceData, meta func resourceNetworkServicesEdgeCacheServiceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/global/edgeCacheServices/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/global/edgeCacheServices/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1445,7 +1473,18 @@ func flattenNetworkServicesEdgeCacheServiceDescription(v interface{}, d *schema. 
} func flattenNetworkServicesEdgeCacheServiceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNetworkServicesEdgeCacheServiceDisableQuic(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -2193,19 +2232,27 @@ func flattenNetworkServicesEdgeCacheServiceEdgeSecurityPolicy(v interface{}, d * return v } -func expandNetworkServicesEdgeCacheServiceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandNetworkServicesEdgeCacheServiceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenNetworkServicesEdgeCacheServiceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenNetworkServicesEdgeCacheServiceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandNetworkServicesEdgeCacheServiceDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandNetworkServicesEdgeCacheServiceDisableQuic(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -3423,3 +3470,14 @@ func expandNetworkServicesEdgeCacheServiceLogConfigSampleRate(v interface{}, d t func expandNetworkServicesEdgeCacheServiceEdgeSecurityPolicy(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandNetworkServicesEdgeCacheServiceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/networkservices/resource_network_services_edge_cache_service_generated_test.go b/google/services/networkservices/resource_network_services_edge_cache_service_generated_test.go index c41d7d157ad..bad9b9ab54d 100644 --- a/google/services/networkservices/resource_network_services_edge_cache_service_generated_test.go +++ b/google/services/networkservices/resource_network_services_edge_cache_service_generated_test.go @@ -49,7 +49,7 @@ func TestAccNetworkServicesEdgeCacheService_networkServicesEdgeCacheServiceBasic ResourceName: "google_network_services_edge_cache_service.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) @@ -129,7 +129,7 @@ func 
TestAccNetworkServicesEdgeCacheService_networkServicesEdgeCacheServiceAdvan ResourceName: "google_network_services_edge_cache_service.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) @@ -343,7 +343,7 @@ func TestAccNetworkServicesEdgeCacheService_networkServicesEdgeCacheServiceDualT ResourceName: "google_network_services_edge_cache_service.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name"}, + ImportStateVerifyIgnore: []string{"name", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_gateway.go b/google/services/networkservices/resource_network_services_gateway.go index 57ff5acc7d4..ee3adf7a047 100644 --- a/google/services/networkservices/resource_network_services_gateway.go +++ b/google/services/networkservices/resource_network_services_gateway.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -163,6 +164,11 @@ func ResourceNetworkServicesGateway() *schema.Resource { Delete: schema.DefaultTimeout(30 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -222,10 +228,13 @@ For example: 'projects/*/locations/*/gatewaySecurityPolicies/swg-policy'. This policy is specific to gateways of type 'SECURE_WEB_GATEWAY'.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Set of label tags associated with the Gateway resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Set of label tags associated with the Gateway resource. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
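Across the update functions in this patch, the request body takes its labels from `effective_labels`, so the `labels` entry of the `updateMask` is now appended when `effective_labels` changes; the mask still travels as a query parameter because it is not a schema field. A rough stdlib sketch of that mask construction (the attribute-to-field mapping and URL are illustrative, and this stands in for, rather than reproduces, `transport_tpg.AddQueryParams`):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildPatchURL appends an updateMask query parameter listing only the API
// fields whose Terraform counterparts changed in this plan.
func buildPatchURL(base string, changed map[string]bool) (string, error) {
	// Terraform attribute -> API field name.
	maskFor := map[string]string{
		"description":      "description",
		"effective_labels": "labels", // labels are now sent from effective_labels
	}

	var mask []string
	for attr, apiField := range maskFor {
		if changed[attr] {
			mask = append(mask, apiField)
		}
	}

	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("updateMask", strings.Join(mask, ","))
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	out, err := buildPatchURL(
		"https://networkservices.googleapis.com/v1/projects/p/locations/global/edgeCacheKeysets/k",
		map[string]bool{"effective_labels": true},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // ...?updateMask=labels
}
```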
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { Type: schema.TypeString, @@ -270,11 +279,24 @@ Currently, this field is specific to gateways of type 'SECURE_WEB_GATEWAY.`, Computed: true, Description: `Time the AccessPolicy was created in UTC.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "self_link": { Type: schema.TypeString, Computed: true, Description: `Server-defined URL of this resource.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -306,12 +328,6 @@ func resourceNetworkServicesGatewayCreate(d *schema.ResourceData, meta interface } obj := make(map[string]interface{}) - labelsProp, err := expandNetworkServicesGatewayLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandNetworkServicesGatewayDescription(d.Get("description"), d, config) if err != nil { return err @@ -372,6 +388,12 @@ func resourceNetworkServicesGatewayCreate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("certificate_urls"); !tpgresource.IsEmptyValue(reflect.ValueOf(certificateUrlsProp)) && (ok || !reflect.DeepEqual(v, certificateUrlsProp)) { obj["certificateUrls"] = certificateUrlsProp } + labelsProp, err := expandNetworkServicesGatewayEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/{{location}}/gateways?gatewayId={{name}}") if err != nil { @@ -515,6 +537,12 @@ func resourceNetworkServicesGatewayRead(d *schema.ResourceData, meta interface{} if err := d.Set("certificate_urls", flattenNetworkServicesGatewayCertificateUrls(res["certificateUrls"], d, config)); err != nil { return fmt.Errorf("Error reading Gateway: %s", err) } + if err := d.Set("terraform_labels", flattenNetworkServicesGatewayTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Gateway: %s", err) + } + if err := d.Set("effective_labels", flattenNetworkServicesGatewayEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Gateway: %s", err) + } return nil } @@ -535,12 +563,6 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandNetworkServicesGatewayLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } 
descriptionProp, err := expandNetworkServicesGatewayDescription(d.Get("description"), d, config) if err != nil { return err @@ -553,6 +575,12 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface } else if v, ok := d.GetOkExists("server_tls_policy"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, serverTlsPolicyProp)) { obj["serverTlsPolicy"] = serverTlsPolicyProp } + labelsProp, err := expandNetworkServicesGatewayEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NetworkServicesBasePath}}projects/{{project}}/locations/{{location}}/gateways/{{name}}") if err != nil { @@ -562,10 +590,6 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface log.Printf("[DEBUG] Updating Gateway %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("description") { updateMask = append(updateMask, "description") } @@ -573,6 +597,10 @@ func resourceNetworkServicesGatewayUpdate(d *schema.ResourceData, meta interface if d.HasChange("server_tls_policy") { updateMask = append(updateMask, "serverTlsPolicy") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -683,9 +711,9 @@ func resourceNetworkServicesGatewayDelete(d *schema.ResourceData, meta interface func resourceNetworkServicesGatewayImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/gateways/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/gateways/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -718,7 +746,18 @@ func flattenNetworkServicesGatewayUpdateTime(v interface{}, d *schema.ResourceDa } func flattenNetworkServicesGatewayLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNetworkServicesGatewayDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -761,15 +800,23 @@ func flattenNetworkServicesGatewayCertificateUrls(v interface{}, d *schema.Resou return v } -func expandNetworkServicesGatewayLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenNetworkServicesGatewayTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := 
make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenNetworkServicesGatewayEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandNetworkServicesGatewayDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -811,3 +858,14 @@ func expandNetworkServicesGatewayGatewaySecurityPolicy(v interface{}, d tpgresou func expandNetworkServicesGatewayCertificateUrls(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandNetworkServicesGatewayEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/networkservices/resource_network_services_gateway_generated_test.go b/google/services/networkservices/resource_network_services_gateway_generated_test.go index bf47fe6d0c9..68f666831d3 100644 --- a/google/services/networkservices/resource_network_services_gateway_generated_test.go +++ b/google/services/networkservices/resource_network_services_gateway_generated_test.go @@ -49,7 +49,7 @@ func TestAccNetworkServicesGateway_networkServicesGatewayBasicExample(t *testing ResourceName: "google_network_services_gateway.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) @@ -85,7 +85,7 @@ func TestAccNetworkServicesGateway_networkServicesGatewayAdvancedExample(t *test ResourceName: "google_network_services_gateway.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) @@ -125,7 +125,7 @@ func TestAccNetworkServicesGateway_networkServicesGatewaySecureWebProxyExample(t ResourceName: "google_network_services_gateway.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "delete_swg_autogen_router_on_destroy"}, + ImportStateVerifyIgnore: []string{"name", "location", "delete_swg_autogen_router_on_destroy", "labels", "terraform_labels"}, }, }, }) @@ -217,7 +217,7 @@ func TestAccNetworkServicesGateway_networkServicesGatewayMultipleSwpSameNetworkE ResourceName: "google_network_services_gateway.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location", "delete_swg_autogen_router_on_destroy"}, + ImportStateVerifyIgnore: []string{"name", "location", "delete_swg_autogen_router_on_destroy", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/networkservices/resource_network_services_gateway_test.go b/google/services/networkservices/resource_network_services_gateway_test.go index 4452a710905..73adaa6b05d 100644 --- a/google/services/networkservices/resource_network_services_gateway_test.go +++ b/google/services/networkservices/resource_network_services_gateway_test.go @@ -25,17 +25,19 @@ func TestAccNetworkServicesGateway_update(t *testing.T) 
{ Config: testAccNetworkServicesGateway_basic(gatewayName), }, { - ResourceName: "google_network_services_gateway.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_services_gateway.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccNetworkServicesGateway_update(gatewayName), }, { - ResourceName: "google_network_services_gateway.foobar", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_network_services_gateway.foobar", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/notebooks/resource_notebooks_environment.go b/google/services/notebooks/resource_notebooks_environment.go index 5543cfe340e..ce0d2065e8a 100644 --- a/google/services/notebooks/resource_notebooks_environment.go +++ b/google/services/notebooks/resource_notebooks_environment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceNotebooksEnvironment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -448,9 +453,9 @@ func resourceNotebooksEnvironmentDelete(d *schema.ResourceData, meta interface{} func resourceNotebooksEnvironmentImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/environments/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/environments/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/notebooks/resource_notebooks_instance.go b/google/services/notebooks/resource_notebooks_instance.go index 0c0f9e4adee..7aa89e6fc47 100644 --- a/google/services/notebooks/resource_notebooks_instance.go +++ b/google/services/notebooks/resource_notebooks_instance.go @@ -18,12 +18,14 @@ package notebooks import ( + "context" "fmt" "log" "reflect" "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -31,23 +33,6 @@ import ( "github.com/hashicorp/terraform-provider-google/google/verify" ) -const notebooksInstanceGoogleProvidedLabel = "goog-caip-notebook" - -func NotebooksInstanceLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { - // Suppress diffs for the label provided by Google - if strings.Contains(k, notebooksInstanceGoogleProvidedLabel) && new == "" { - return true - } - - // Let diff be determined by labels (above) - if strings.Contains(k, "labels.%") { - return true - } - - // For other keys, don't suppress diff. 
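For `google_notebooks_instance`, the hand-written `NotebooksInstanceLabelDiffSuppress` (which ignored the Google-managed `goog-caip-notebook` label) is dropped in favor of the generic labels model, and the resource bumps its `SchemaVersion` with a registered `StateUpgrader` so existing state is migrated on the next refresh. The sketch below only shows the general shape of an SDK v2 state upgrade function; the copy of `labels` into `effective_labels` is an illustrative assumption, and the actual migration is whatever `ResourceNotebooksInstanceUpgradeV0` implements:

```go
package main

import (
	"context"
	"fmt"
)

// upgradeLabelsStateV0 has the shape of a schema.StateUpgradeFunc: it receives
// the raw v0 state as a map and returns the v1 shape. Here it simply seeds
// effective_labels from the old labels attribute when it is missing.
func upgradeLabelsStateV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
	if rawState == nil {
		return nil, fmt.Errorf("resource state is nil")
	}
	if _, ok := rawState["effective_labels"]; !ok {
		rawState["effective_labels"] = rawState["labels"]
	}
	return rawState, nil
}

func main() {
	state := map[string]interface{}{
		"labels": map[string]interface{}{"goog-caip-notebook": ""},
	}
	upgraded, err := upgradeLabelsStateV0(context.Background(), state, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(upgraded["effective_labels"])
}
```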
- return false -} - func ResourceNotebooksInstance() *schema.Resource { return &schema.Resource{ Create: resourceNotebooksInstanceCreate, @@ -65,6 +50,20 @@ func ResourceNotebooksInstance() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + SchemaVersion: 1, + + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceNotebooksInstanceResourceV0().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceNotebooksInstanceUpgradeV0, + Version: 0, + }, + }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -212,12 +211,14 @@ your VM instance's service account can use the instance.`, Format: projects/{project_id}/locations/{location}/keyRings/{key_ring_id}/cryptoKeys/{key_id}`, }, "labels": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - DiffSuppressFunc: NotebooksInstanceLabelDiffSuppress, + Type: schema.TypeMap, + Optional: true, Description: `Labels to apply to this instance. These can be later modified by the setLabels method. -An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "metadata": { @@ -420,6 +421,12 @@ Format: projects/{project_id}`, Optional: true, Description: `Instance creation time`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "proxy_uri": { Type: schema.TypeString, Computed: true, @@ -433,6 +440,13 @@ the population of this value.`, Computed: true, Description: `The state of this instance.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -590,12 +604,6 @@ func resourceNotebooksInstanceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("subnet"); !tpgresource.IsEmptyValue(reflect.ValueOf(subnetProp)) && (ok || !reflect.DeepEqual(v, subnetProp)) { obj["subnet"] = subnetProp } - labelsProp, err := expandNotebooksInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } tagsProp, err := expandNotebooksInstanceTags(d.Get("tags"), d, config) if err != nil { return err @@ -620,6 +628,12 @@ func resourceNotebooksInstanceCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("container_image"); !tpgresource.IsEmptyValue(reflect.ValueOf(containerImageProp)) && (ok || !reflect.DeepEqual(v, containerImageProp)) { obj["containerImage"] = containerImageProp } + labelsProp, err := expandNotebooksInstanceEffectiveLabels(d.Get("effective_labels"), 
d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{NotebooksBasePath}}projects/{{project}}/locations/{{location}}/instances?instanceId={{name}}") if err != nil { @@ -791,6 +805,12 @@ func resourceNotebooksInstanceRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("update_time", flattenNotebooksInstanceUpdateTime(res["updateTime"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenNotebooksInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenNotebooksInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -812,14 +832,14 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e d.Partial(true) - if d.HasChange("labels") { + if d.HasChange("metadata") { obj := make(map[string]interface{}) - labelsProp, err := expandNotebooksInstanceLabels(d.Get("labels"), d, config) + metadataProp, err := expandNotebooksInstanceMetadata(d.Get("metadata"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp + } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, metadataProp)) { + obj["metadata"] = metadataProp } obj, err = resourceNotebooksInstanceUpdateEncoder(d, meta, obj) @@ -827,7 +847,7 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e return err } - url, err := tpgresource.ReplaceVars(d, config, "{{NotebooksBasePath}}projects/{{project}}/locations/{{location}}/instances/{{name}}:setLabels") + url, err := tpgresource.ReplaceVars(d, config, "{{NotebooksBasePath}}projects/{{project}}/locations/{{location}}/instances/{{name}}:updateMetadataItems") if err != nil { return err } @@ -859,14 +879,14 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e return err } } - if d.HasChange("metadata") { + if d.HasChange("effective_labels") { obj := make(map[string]interface{}) - metadataProp, err := expandNotebooksInstanceMetadata(d.Get("metadata"), d, config) + labelsProp, err := expandNotebooksInstanceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, metadataProp)) { - obj["metadata"] = metadataProp + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp } obj, err = resourceNotebooksInstanceUpdateEncoder(d, meta, obj) @@ -874,7 +894,7 @@ func resourceNotebooksInstanceUpdate(d *schema.ResourceData, meta interface{}) e return err } - url, err := tpgresource.ReplaceVars(d, config, "{{NotebooksBasePath}}projects/{{project}}/locations/{{location}}/instances/{{name}}:updateMetadataItems") + url, err := tpgresource.ReplaceVars(d, config, 
"{{NotebooksBasePath}}projects/{{project}}/locations/{{location}}/instances/{{name}}:setLabels") if err != nil { return err } @@ -968,9 +988,9 @@ func resourceNotebooksInstanceDelete(d *schema.ResourceData, meta interface{}) e func resourceNotebooksInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1143,7 +1163,18 @@ func flattenNotebooksInstanceSubnet(v interface{}, d *schema.ResourceData, confi } func flattenNotebooksInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenNotebooksInstanceTags(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1158,6 +1189,25 @@ func flattenNotebooksInstanceUpdateTime(v interface{}, d *schema.ResourceData, c return v } +func flattenNotebooksInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenNotebooksInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandNotebooksInstanceMachineType(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1358,17 +1408,6 @@ func expandNotebooksInstanceSubnet(v interface{}, d tpgresource.TerraformResourc return v, nil } -func expandNotebooksInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandNotebooksInstanceTags(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1463,6 +1502,17 @@ func expandNotebooksInstanceContainerImageTag(v interface{}, d tpgresource.Terra return v, nil } +func expandNotebooksInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceNotebooksInstanceUpdateEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { // Update requests use "items" as the api name instead of "metadata" // https://cloud.google.com/vertex-ai/docs/workbench/reference/rest/v1/projects.locations.instances/updateMetadataItems @@ -1472,3 
+1522,411 @@ func resourceNotebooksInstanceUpdateEncoder(d *schema.ResourceData, meta interfa } return obj, nil } + +const notebooksInstanceGoogleProvidedLabel = "goog-caip-notebook" + +func NotebooksInstanceLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { + // Suppress diffs for the label provided by Google + if strings.Contains(k, notebooksInstanceGoogleProvidedLabel) && new == "" { + return true + } + + // Let diff be determined by labels (above) + if strings.Contains(k, "labels.%") { + return true + } + + // For other keys, don't suppress diff. + return false +} + +func resourceNotebooksInstanceResourceV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `A reference to the zone where the machine resides.`, + }, + "machine_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `A reference to a machine type which defines VM kind.`, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name specified for the Notebook instance.`, + }, + "accelerator_config": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The hardware accelerator used on this instance. If you use accelerators, +make sure that your configuration has enough vCPUs and memory to support the +machineType you have selected.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "core_count": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + Description: `Count of cores of this accelerator.`, + }, + "type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"ACCELERATOR_TYPE_UNSPECIFIED", "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100", "NVIDIA_TESLA_V100", "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4", "NVIDIA_TESLA_T4_VWS", "NVIDIA_TESLA_P100_VWS", "NVIDIA_TESLA_P4_VWS", "NVIDIA_TESLA_A100", "TPU_V2", "TPU_V3"}), + Description: `Type of this accelerator. Possible values: ["ACCELERATOR_TYPE_UNSPECIFIED", "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100", "NVIDIA_TESLA_V100", "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4", "NVIDIA_TESLA_T4_VWS", "NVIDIA_TESLA_P100_VWS", "NVIDIA_TESLA_P4_VWS", "NVIDIA_TESLA_A100", "TPU_V2", "TPU_V3"]`, + }, + }, + }, + }, + "boot_disk_size_gb": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The size of the boot disk in GB attached to this instance, +up to a maximum of 64000 GB (64 TB). The minimum recommended value is 100 GB. +If not specified, this defaults to 100.`, + }, + "boot_disk_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"DISK_TYPE_UNSPECIFIED", "PD_STANDARD", "PD_SSD", "PD_BALANCED", "PD_EXTREME", ""}), + Description: `Possible disk types for notebook instances. Possible values: ["DISK_TYPE_UNSPECIFIED", "PD_STANDARD", "PD_SSD", "PD_BALANCED", "PD_EXTREME"]`, + }, + "container_image": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Use a container image to start the notebook instance.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "repository": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The path to the container image repository. 
+For example: gcr.io/{project_id}/{imageName}`, + }, + "tag": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The tag of the container image. If not specified, this defaults to the latest tag.`, + }, + }, + }, + ExactlyOneOf: []string{"vm_image", "container_image"}, + }, + "custom_gpu_driver_path": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Specify a custom Cloud Storage path where the GPU driver is stored. +If not specified, we'll automatically choose from official GPU drivers.`, + }, + "data_disk_size_gb": { + Type: schema.TypeInt, + Optional: true, + ForceNew: true, + Description: `The size of the data disk in GB attached to this instance, +up to a maximum of 64000 GB (64 TB). +You can choose the size of the data disk based on how big your notebooks and data are. +If not specified, this defaults to 100.`, + }, + "data_disk_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"DISK_TYPE_UNSPECIFIED", "PD_STANDARD", "PD_SSD", "PD_BALANCED", "PD_EXTREME", ""}), + Description: `Possible disk types for notebook instances. Possible values: ["DISK_TYPE_UNSPECIFIED", "PD_STANDARD", "PD_SSD", "PD_BALANCED", "PD_EXTREME"]`, + }, + "disk_encryption": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"DISK_ENCRYPTION_UNSPECIFIED", "GMEK", "CMEK", ""}), + DiffSuppressFunc: tpgresource.EmptyOrDefaultStringSuppress("DISK_ENCRYPTION_UNSPECIFIED"), + Description: `Disk encryption method used on the boot and data disks, defaults to GMEK. Possible values: ["DISK_ENCRYPTION_UNSPECIFIED", "GMEK", "CMEK"]`, + }, + "install_gpu_driver": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Whether the end user authorizes Google Cloud to install GPU driver +on this instance. If this field is empty or set to false, the GPU driver +won't be installed. Only applicable to instances with GPUs.`, + }, + "instance_owners": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The list of owners of this instance after creation. +Format: alias@example.com. +Currently supports one owner only. +If not specified, all of the service account users of +your VM instance's service account can use the instance.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "kms_key": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `The KMS key used to encrypt the disks, only applicable if diskEncryption is CMEK. +Format: projects/{project_id}/locations/{location}/keyRings/{key_ring_id}/cryptoKeys/{key_id}`, + }, + "labels": { + Type: schema.TypeMap, + Computed: true, + Optional: true, + DiffSuppressFunc: NotebooksInstanceLabelDiffSuppress, + Description: `Labels to apply to this instance. These can be later modified by the setLabels method. +An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "metadata": { + Type: schema.TypeMap, + Optional: true, + Description: `Custom metadata to apply to this instance. +An object containing a list of "key": value pairs. 
Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "network": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name of the VPC that this instance is in. +Format: projects/{project_id}/global/networks/{network_id}`, + }, + "nic_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"UNSPECIFIED_NIC_TYPE", "VIRTIO_NET", "GVNIC", ""}), + Description: `The type of vNIC driver. Possible values: ["UNSPECIFIED_NIC_TYPE", "VIRTIO_NET", "GVNIC"]`, + }, + "no_proxy_access": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `The notebook instance will not register with the proxy..`, + }, + "no_public_ip": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `No public IP will be assigned to this instance.`, + }, + "no_remove_data_disk": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `If true, the data disk will not be auto deleted when deleting the instance.`, + }, + "post_startup_script": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Path to a Bash script that automatically runs after a +notebook instance fully boots up. The path must be a URL +or Cloud Storage path (gs://path-to-file/file-name).`, + }, + "reservation_affinity": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Reservation Affinity for consuming Zonal reservation.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "consume_reservation_type": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidateEnum([]string{"NO_RESERVATION", "ANY_RESERVATION", "SPECIFIC_RESERVATION"}), + Description: `The type of Compute Reservation. Possible values: ["NO_RESERVATION", "ANY_RESERVATION", "SPECIFIC_RESERVATION"]`, + }, + "key": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Corresponds to the label key of reservation resource.`, + }, + "values": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Corresponds to the label values of reservation resource.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + }, + }, + }, + "service_account": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + Description: `The service account on this instance, giving access to other +Google Cloud services. You can use any service account within +the same project, but you must have the service account user +permission to use the instance. If not specified, +the Compute Engine default service account is used.`, + }, + "service_account_scopes": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Optional. The URIs of service account scopes to be included in Compute Engine instances. +If not specified, the following scopes are defined: +- https://www.googleapis.com/auth/cloud-platform +- https://www.googleapis.com/auth/userinfo.email`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "shielded_instance_config": { + Type: schema.TypeList, + Computed: true, + Optional: true, + ForceNew: true, + Description: `A set of Shielded Instance options. 
Check [Images using supported Shielded VM features] +Not all combinations are valid`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enable_integrity_monitoring": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Defines whether the instance has integrity monitoring enabled. Enables monitoring and attestation of the +boot integrity of the instance. The attestation is performed against the integrity policy baseline. +This baseline is initially derived from the implicitly trusted boot image when the instance is created. +Enabled by default.`, + Default: true, + }, + "enable_secure_boot": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Defines whether the instance has Secure Boot enabled. Secure Boot helps ensure that the system only runs +authentic software by verifying the digital signature of all boot components, and halting the boot process +if signature verification fails. +Disabled by default.`, + }, + "enable_vtpm": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + Description: `Defines whether the instance has the vTPM enabled. +Enabled by default.`, + Default: true, + }, + }, + }, + }, + "subnet": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ForceNew: true, + DiffSuppressFunc: tpgresource.CompareSelfLinkOrResourceName, + Description: `The name of the subnet that this instance is in. +Format: projects/{project_id}/regions/{region}/subnetworks/{subnetwork_id}`, + }, + "tags": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `The Compute Engine tags to add to instance.`, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "vm_image": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Description: `Use a Compute Engine VM image to start the notebook instance.`, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "project": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the Google Cloud project that this VM image belongs to. +Format: projects/{project_id}`, + }, + "image_family": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Use this VM image family to find the image; the newest image in this family will be used.`, + }, + "image_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Description: `Use VM image name to find the image.`, + }, + }, + }, + ExactlyOneOf: []string{"vm_image", "container_image"}, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `Instance creation time`, + }, + "proxy_uri": { + Type: schema.TypeString, + Computed: true, + Description: `The proxy endpoint that is used to access the Jupyter notebook. +Only returned when the resource is in a 'PROVISIONED' state. 
If +needed you can utilize 'terraform apply -refresh-only' to await +the population of this value.`, + }, + "state": { + Type: schema.TypeString, + Computed: true, + Description: `The state of this instance.`, + }, + "update_time": { + Type: schema.TypeString, + Computed: true, + Optional: true, + Description: `Instance update time.`, + }, + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + }, + }, + UseJSONNumber: true, + } +} + +func ResourceNotebooksInstanceUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + return tpgresource.LabelsStateUpgrade(rawState, notebooksInstanceGoogleProvidedLabel) +} diff --git a/google/services/notebooks/resource_notebooks_instance_generated_test.go b/google/services/notebooks/resource_notebooks_instance_generated_test.go index bc013828149..d3fc985e3b1 100644 --- a/google/services/notebooks/resource_notebooks_instance_generated_test.go +++ b/google/services/notebooks/resource_notebooks_instance_generated_test.go @@ -50,7 +50,7 @@ func TestAccNotebooksInstance_notebookInstanceBasicExample(t *testing.T) { ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location"}, + ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location", "labels", "terraform_labels"}, }, }, }) @@ -89,7 +89,7 @@ func TestAccNotebooksInstance_notebookInstanceBasicContainerExample(t *testing.T ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location"}, + ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location", "labels", "terraform_labels"}, }, }, }) @@ -132,7 +132,7 @@ func TestAccNotebooksInstance_notebookInstanceBasicGpuExample(t *testing.T) { ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location"}, + ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location", "labels", "terraform_labels"}, }, }, }) @@ -178,7 +178,7 @@ func TestAccNotebooksInstance_notebookInstanceFullExample(t *testing.T) { ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location"}, + ImportStateVerifyIgnore: []string{"name", 
"instance_owners", "boot_disk_type", "boot_disk_size_gb", "data_disk_type", "data_disk_size_gb", "no_remove_data_disk", "metadata", "vm_image", "container_image", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/notebooks/resource_notebooks_instance_test.go b/google/services/notebooks/resource_notebooks_instance_test.go index c1faf22c20d..213e4436cc7 100644 --- a/google/services/notebooks/resource_notebooks_instance_test.go +++ b/google/services/notebooks/resource_notebooks_instance_test.go @@ -57,7 +57,7 @@ func TestAccNotebooksInstance_update(t *testing.T) { ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"vm_image", "metadata"}, + ImportStateVerifyIgnore: []string{"vm_image", "metadata", "labels", "terraform_labels"}, }, { Config: testAccNotebooksInstance_update(context, false), @@ -66,7 +66,7 @@ func TestAccNotebooksInstance_update(t *testing.T) { ResourceName: "google_notebooks_instance.instance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"vm_image", "metadata"}, + ImportStateVerifyIgnore: []string{"vm_image", "metadata", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/notebooks/resource_notebooks_location.go b/google/services/notebooks/resource_notebooks_location.go index 86ab37ad40a..9a193238b9e 100644 --- a/google/services/notebooks/resource_notebooks_location.go +++ b/google/services/notebooks/resource_notebooks_location.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceNotebooksLocation() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -318,9 +323,9 @@ func resourceNotebooksLocationDelete(d *schema.ResourceData, meta interface{}) e func resourceNotebooksLocationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/notebooks/resource_notebooks_runtime.go b/google/services/notebooks/resource_notebooks_runtime.go index 578f9466527..8529ca308e6 100644 --- a/google/services/notebooks/resource_notebooks_runtime.go +++ b/google/services/notebooks/resource_notebooks_runtime.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -70,6 +71,10 @@ func ResourceNotebooksRuntime() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -934,9 +939,9 @@ func resourceNotebooksRuntimeDelete(d *schema.ResourceData, meta interface{}) er func resourceNotebooksRuntimeImport(d *schema.ResourceData, meta interface{}) 
([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/runtimes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/runtimes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/osconfig/resource_os_config_os_policy_assignment.go b/google/services/osconfig/resource_os_config_os_policy_assignment.go index bf1e3fc0480..1d5db0b15a6 100644 --- a/google/services/osconfig/resource_os_config_os_policy_assignment.go +++ b/google/services/osconfig/resource_os_config_os_policy_assignment.go @@ -9,6 +9,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -32,6 +33,10 @@ func ResourceOSConfigOSPolicyAssignment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "instance_filter": { Type: schema.TypeList, diff --git a/google/services/osconfig/resource_os_config_patch_deployment.go b/google/services/osconfig/resource_os_config_patch_deployment.go index e295fd31010..813176582d3 100644 --- a/google/services/osconfig/resource_os_config_patch_deployment.go +++ b/google/services/osconfig/resource_os_config_patch_deployment.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -46,6 +47,10 @@ func ResourceOSConfigPatchDeployment() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "instance_filter": { Type: schema.TypeList, diff --git a/google/services/oslogin/resource_os_login_ssh_public_key.go b/google/services/oslogin/resource_os_login_ssh_public_key.go index 3f4fa642ff4..5df8f9acbdd 100644 --- a/google/services/oslogin/resource_os_login_ssh_public_key.go +++ b/google/services/oslogin/resource_os_login_ssh_public_key.go @@ -315,8 +315,8 @@ func resourceOSLoginSSHPublicKeyDelete(d *schema.ResourceData, meta interface{}) func resourceOSLoginSSHPublicKeyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "users/(?P[^/]+)/sshPublicKeys/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^users/(?P[^/]+)/sshPublicKeys/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/privateca/data_source_certificate_authority.go b/google/services/privateca/data_source_certificate_authority.go index 5976618a7c7..08292755cb5 100644 --- a/google/services/privateca/data_source_certificate_authority.go +++ b/google/services/privateca/data_source_certificate_authority.go @@ -47,6 +47,10 @@ func dataSourcePrivatecaCertificateAuthorityRead(d *schema.ResourceData, meta in return err } + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + // pem_csr is only applicable for 
SUBORDINATE CertificateAuthorities when their state is AWAITING_USER_ACTIVATION if d.Get("type") == "SUBORDINATE" && d.Get("state") == "AWAITING_USER_ACTIVATION" { url, err := tpgresource.ReplaceVars(d, config, "{{PrivatecaBasePath}}projects/{{project}}/locations/{{location}}/caPools/{{pool}}/certificateAuthorities/{{certificate_authority_id}}:fetch") @@ -75,7 +79,7 @@ func dataSourcePrivatecaCertificateAuthorityRead(d *schema.ResourceData, meta in UserAgent: userAgent, }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("PrivatecaCertificateAuthority %q", d.Id())) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("PrivatecaCertificateAuthority %q", d.Id()), url) } if err := d.Set("pem_csr", res["pemCsr"]); err != nil { return fmt.Errorf("Error fetching CertificateAuthority: %s", err) diff --git a/google/services/privateca/data_source_certificate_authority_test.go b/google/services/privateca/data_source_certificate_authority_test.go index e09331fde08..506e44d6dca 100644 --- a/google/services/privateca/data_source_certificate_authority_test.go +++ b/google/services/privateca/data_source_certificate_authority_test.go @@ -83,6 +83,9 @@ resource "google_privateca_certificate_authority" "default" { key_spec { algorithm = "RSA_PKCS1_4096_SHA256" } + labels = { + my-label = "my-label-value" + } } data "google_privateca_certificate_authority" "default" { diff --git a/google/services/privateca/resource_privateca_ca_pool.go b/google/services/privateca/resource_privateca_ca_pool.go index ad2301e5cf9..82c1e0d49ba 100644 --- a/google/services/privateca/resource_privateca_ca_pool.go +++ b/google/services/privateca/resource_privateca_ca_pool.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourcePrivatecaCaPool() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -550,7 +556,11 @@ expires before a Certificate's requested maximumLifetime, the effective lifetime Description: `Labels with user-defined metadata. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": -"1.3kg", "count": "3" }.`, +"1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "publishing_options": { @@ -587,6 +597,19 @@ will be published in PEM. 
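// Editor's note (illustrative sketch, not part of the diff): the privateca certificate-authority
// data source above picks up two provider-wide behaviours from this change.
// tpgresource.SetDataSourceLabels(d) copies the effective label values onto the data source's
// "labels" and "terraform_labels" outputs, and failed lookups now go through
// transport_tpg.HandleDataSourceNotFoundError, so a 404 is reported as an error instead of being
// silently ignored. A condensed sketch of that read tail; the function name and the elided request
// plumbing are assumptions, and the file's existing imports (fmt, schema, tpgresource,
// transport_tpg) are assumed to be in scope:
func dataSourceExampleReadTailSketch(d *schema.ResourceData, requestErr error, url string) error {
	if requestErr != nil {
		// surface "not found" to the caller rather than silently clearing state
		return transport_tpg.HandleDataSourceNotFoundError(requestErr, d, fmt.Sprintf("Example %q", d.Id()), url)
	}
	// mirror every label returned by the API onto the data source outputs
	return tpgresource.SetDataSourceLabels(d)
}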
Possible values: ["PEM", "DER"]`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -624,10 +647,10 @@ func resourcePrivatecaCaPoolCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("publishing_options"); !tpgresource.IsEmptyValue(reflect.ValueOf(publishingOptionsProp)) && (ok || !reflect.DeepEqual(v, publishingOptionsProp)) { obj["publishingOptions"] = publishingOptionsProp } - labelsProp, err := expandPrivatecaCaPoolLabels(d.Get("labels"), d, config) + labelsProp, err := expandPrivatecaCaPoolEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -747,6 +770,12 @@ func resourcePrivatecaCaPoolRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("labels", flattenPrivatecaCaPoolLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading CaPool: %s", err) } + if err := d.Set("terraform_labels", flattenPrivatecaCaPoolTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CaPool: %s", err) + } + if err := d.Set("effective_labels", flattenPrivatecaCaPoolEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CaPool: %s", err) + } return nil } @@ -779,10 +808,10 @@ func resourcePrivatecaCaPoolUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("publishing_options"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, publishingOptionsProp)) { obj["publishingOptions"] = publishingOptionsProp } - labelsProp, err := expandPrivatecaCaPoolLabels(d.Get("labels"), d, config) + labelsProp, err := expandPrivatecaCaPoolEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -802,7 +831,7 @@ func resourcePrivatecaCaPoolUpdate(d *schema.ResourceData, meta interface{}) err updateMask = append(updateMask, "publishingOptions") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -900,9 +929,9 @@ func resourcePrivatecaCaPoolDelete(d *schema.ResourceData, meta interface{}) err func resourcePrivatecaCaPoolImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := 
tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1138,6 +1167,36 @@ func flattenPrivatecaCaPoolPublishingOptionsEncodingFormat(v interface{}, d *sch } func flattenPrivatecaCaPoolLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPrivatecaCaPoolTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPrivatecaCaPoolEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1511,7 +1570,7 @@ func expandPrivatecaCaPoolPublishingOptionsEncodingFormat(v interface{}, d tpgre return v, nil } -func expandPrivatecaCaPoolLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandPrivatecaCaPoolEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/privateca/resource_privateca_ca_pool_generated_test.go b/google/services/privateca/resource_privateca_ca_pool_generated_test.go index 4d9939a5958..3982c8c7026 100644 --- a/google/services/privateca/resource_privateca_ca_pool_generated_test.go +++ b/google/services/privateca/resource_privateca_ca_pool_generated_test.go @@ -49,7 +49,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolBasicExample(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) @@ -91,7 +91,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolAllFieldsExample(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/privateca/resource_privateca_ca_pool_test.go b/google/services/privateca/resource_privateca_ca_pool_test.go index 605c532abab..7dd90dad7ad 100644 --- a/google/services/privateca/resource_privateca_ca_pool_test.go +++ b/google/services/privateca/resource_privateca_ca_pool_test.go @@ -28,7 +28,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolUpdate(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCaPool_privatecaCapoolEnd(context), @@ -37,7 +37,7 @@ func 
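// Editor's note (illustrative sketch, not part of the diff): every ParseImportId pattern touched in
// this change gains explicit ^ and $ anchors, so an import ID must match a whole pattern rather than
// merely containing it as a substring. A tiny standalone illustration of the difference, using
// simplified patterns without the named capture groups the real ones carry:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	unanchored := regexp.MustCompile(`projects/[^/]+/locations/[^/]+/caPools/[^/]+`)
	anchored := regexp.MustCompile(`^projects/[^/]+/locations/[^/]+/caPools/[^/]+$`)

	id := "prefix/projects/p/locations/l/caPools/pool/suffix"
	fmt.Println(unanchored.MatchString(id)) // true  - a substring match slips through
	fmt.Println(anchored.MatchString(id))   // false - the whole string must match the pattern
}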
TestAccPrivatecaCaPool_privatecaCapoolUpdate(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCaPool_privatecaCapoolStart(context), @@ -46,7 +46,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolUpdate(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) @@ -235,7 +235,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolEmptyBaseline(t *testing.T) { ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) @@ -299,7 +299,7 @@ func TestAccPrivatecaCaPool_privatecaCapoolEmptyPublishingOptions(t *testing.T) ResourceName: "google_privateca_ca_pool.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "location"}, + ImportStateVerifyIgnore: []string{"name", "location", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/privateca/resource_privateca_certificate.go b/google/services/privateca/resource_privateca_certificate.go index 2de606f5dac..7e3876ffbe1 100644 --- a/google/services/privateca/resource_privateca_certificate.go +++ b/google/services/privateca/resource_privateca_certificate.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourcePrivatecaCertificate() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, @@ -623,10 +629,14 @@ leading period (like '.example.com')`, ExactlyOneOf: []string{"pem_csr", "config"}, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Labels with user-defined metadata to apply to this resource.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Labels with user-defined metadata to apply to this resource. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "lifetime": { Type: schema.TypeString, @@ -686,153 +696,6 @@ fractional digits, terminated by 's'. Example: "3.5s".`, }, }, }, - "config_values": { - Type: schema.TypeList, - Computed: true, - Deprecated: "`config_values` is deprecated and will be removed in a future release. 
Use `x509_description` instead.", - Description: `Describes some of the technical fields in a certificate.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "key_usage": { - Type: schema.TypeList, - Computed: true, - Description: `Indicates the intended use for keys that correspond to a certificate.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "base_key_usage": { - Type: schema.TypeList, - Computed: true, - Description: `Describes high-level ways in which a key may be used.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "key_usage_options": { - Type: schema.TypeList, - Computed: true, - Description: `Describes high-level ways in which a key may be used.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "cert_sign": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used to sign certificates.`, - }, - "content_commitment": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used for cryptographic commitments. Note that this may also be referred to as "non-repudiation".`, - }, - "crl_sign": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used sign certificate revocation lists.`, - }, - "data_encipherment": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used to encipher data.`, - }, - "decipher_only": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used to decipher only.`, - }, - "digital_signature": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used for digital signatures.`, - }, - "encipher_only": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used to encipher only.`, - }, - "key_agreement": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used in a key agreement protocol.`, - }, - "key_encipherment": { - Type: schema.TypeBool, - Computed: true, - Description: `The key may be used to encipher other keys.`, - }, - }, - }, - }, - }, - }, - }, - "extended_key_usage": { - Type: schema.TypeList, - Computed: true, - Description: `Describes high-level ways in which a key may be used.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "client_auth": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.2. Officially described as "TLS WWW client authentication", though regularly used for non-WWW TLS.`, - }, - "code_signing": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.3. Officially described as "Signing of downloadable executable code client authentication".`, - }, - "email_protection": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.4. Officially described as "Email protection".`, - }, - "ocsp_signing": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.9. Officially described as "Signing OCSP responses".`, - }, - "server_auth": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.1. Officially described as "TLS WWW server authentication", though regularly used for non-WWW TLS.`, - }, - "time_stamping": { - Type: schema.TypeBool, - Computed: true, - Description: `Corresponds to OID 1.3.6.1.5.5.7.3.8. 
Officially described as "Binding the hash of an object to a time".`, - }, - }, - }, - }, - "unknown_extended_key_usages": { - Type: schema.TypeList, - Computed: true, - Description: `An ObjectId specifies an object identifier (OID). These provide context and describe types in ASN.1 messages.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "obect_id": { - Type: schema.TypeList, - Computed: true, - Description: `Required. Describes how some of the technical fields in a certificate should be populated.`, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "object_id_path": { - Type: schema.TypeList, - Computed: true, - Description: `An ObjectId specifies an object identifier (OID). These provide context and describe types in ASN.1 messages.`, - Elem: &schema.Schema{ - Type: schema.TypeInt, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, - }, "crl_distribution_points": { Type: schema.TypeList, Computed: true, @@ -1351,6 +1214,12 @@ leading period (like '.example.com')`, Description: `The time that this resource was created on the server. This is in RFC3339 text format.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "issuer_certificate_authority": { Type: schema.TypeString, Computed: true, @@ -1369,15 +1238,6 @@ This is in RFC3339 text format.`, Type: schema.TypeString, }, }, - "pem_certificates": { - Type: schema.TypeList, - Computed: true, - Deprecated: "`pem_certificates` is deprecated and will be removed in a future major release. Use `pem_certificate_chain` instead.", - Description: `Required. 
Expected to be in leaf-to-root order according to RFC 5246.`, - Elem: &schema.Schema{ - Type: schema.TypeString, - }, - }, "revocation_details": { Type: schema.TypeList, Computed: true, @@ -1398,6 +1258,13 @@ considered revoked if and only if this field is present.`, }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -1435,12 +1302,6 @@ func resourcePrivatecaCertificateCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("certificate_template"); !tpgresource.IsEmptyValue(reflect.ValueOf(certificateTemplateProp)) && (ok || !reflect.DeepEqual(v, certificateTemplateProp)) { obj["certificateTemplate"] = certificateTemplateProp } - labelsProp, err := expandPrivatecaCertificateLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } pemCsrProp, err := expandPrivatecaCertificatePemCsr(d.Get("pem_csr"), d, config) if err != nil { return err @@ -1453,6 +1314,12 @@ func resourcePrivatecaCertificateCreate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("config"); !tpgresource.IsEmptyValue(reflect.ValueOf(configProp)) && (ok || !reflect.DeepEqual(v, configProp)) { obj["config"] = configProp } + labelsProp, err := expandPrivatecaCertificateEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{PrivatecaBasePath}}projects/{{project}}/locations/{{location}}/caPools/{{pool}}/certificates?certificateId={{name}}") if err != nil { @@ -1563,9 +1430,6 @@ func resourcePrivatecaCertificateRead(d *schema.ResourceData, meta interface{}) if err := d.Set("pem_certificate_chain", flattenPrivatecaCertificatePemCertificateChain(res["pemCertificateChain"], d, config)); err != nil { return fmt.Errorf("Error reading Certificate: %s", err) } - if err := d.Set("pem_certificates", flattenPrivatecaCertificatePemCertificates(res["pemCertificates"], d, config)); err != nil { - return fmt.Errorf("Error reading Certificate: %s", err) - } if err := d.Set("create_time", flattenPrivatecaCertificateCreateTime(res["createTime"], d, config)); err != nil { return fmt.Errorf("Error reading Certificate: %s", err) } @@ -1584,6 +1448,12 @@ func resourcePrivatecaCertificateRead(d *schema.ResourceData, meta interface{}) if err := d.Set("config", flattenPrivatecaCertificateConfig(res["config"], d, config)); err != nil { return fmt.Errorf("Error reading Certificate: %s", err) } + if err := d.Set("terraform_labels", flattenPrivatecaCertificateTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Certificate: %s", err) + } + if err := d.Set("effective_labels", flattenPrivatecaCertificateEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Certificate: %s", err) + } return nil } @@ -1604,10 +1474,10 @@ func resourcePrivatecaCertificateUpdate(d *schema.ResourceData, meta interface{} billingProject = 
project obj := make(map[string]interface{}) - labelsProp, err := expandPrivatecaCertificateLabels(d.Get("labels"), d, config) + labelsProp, err := expandPrivatecaCertificateEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -1619,7 +1489,7 @@ func resourcePrivatecaCertificateUpdate(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Updating Certificate %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -1701,9 +1571,9 @@ func resourcePrivatecaCertificateDelete(d *schema.ResourceData, meta interface{} func resourcePrivatecaCertificateImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)/certificates/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)/certificates/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1766,8 +1636,6 @@ func flattenPrivatecaCertificateCertificateDescription(v interface{}, d *schema. flattenPrivatecaCertificateCertificateDescriptionSubjectDescription(original["subjectDescription"], d, config) transformed["x509_description"] = flattenPrivatecaCertificateCertificateDescriptionX509Description(original["x509Description"], d, config) - transformed["config_values"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValues(original["configValues"], d, config) transformed["public_key"] = flattenPrivatecaCertificateCertificateDescriptionPublicKey(original["publicKey"], d, config) transformed["subject_key_id"] = @@ -2308,196 +2176,6 @@ func flattenPrivatecaCertificateCertificateDescriptionX509DescriptionNameConstra return v } -func flattenPrivatecaCertificateCertificateDescriptionConfigValues(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["key_usage"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsage(original["keyUsage"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["base_key_usage"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsage(original["baseKeyUsage"], d, config) - transformed["extended_key_usage"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsage(original["extendedKeyUsage"], d, config) - 
transformed["unknown_extended_key_usages"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsages(original["unknownExtendedKeyUsages"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["key_usage_options"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptions(original["keyUsageOptions"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptions(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["digital_signature"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDigitalSignature(original["digitalSignature"], d, config) - transformed["content_commitment"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsContentCommitment(original["contentCommitment"], d, config) - transformed["key_encipherment"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsKeyEncipherment(original["keyEncipherment"], d, config) - transformed["data_encipherment"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDataEncipherment(original["dataEncipherment"], d, config) - transformed["key_agreement"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsKeyAgreement(original["keyAgreement"], d, config) - transformed["cert_sign"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsCertSign(original["certSign"], d, config) - transformed["crl_sign"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsCrlSign(original["crlSign"], d, config) - transformed["encipher_only"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsEncipherOnly(original["encipherOnly"], d, config) - transformed["decipher_only"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDecipherOnly(original["decipherOnly"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDigitalSignature(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsContentCommitment(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsKeyEncipherment(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDataEncipherment(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsKeyAgreement(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsCertSign(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsCrlSign(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsEncipherOnly(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageBaseKeyUsageKeyUsageOptionsDecipherOnly(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsage(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["server_auth"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageServerAuth(original["serverAuth"], d, config) - transformed["client_auth"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageClientAuth(original["clientAuth"], d, config) - transformed["code_signing"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageCodeSigning(original["codeSigning"], d, config) - transformed["email_protection"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageEmailProtection(original["emailProtection"], d, config) - transformed["time_stamping"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageTimeStamping(original["timeStamping"], d, config) - transformed["ocsp_signing"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageOcspSigning(original["ocspSigning"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageServerAuth(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageClientAuth(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageCodeSigning(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageEmailProtection(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func 
flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageTimeStamping(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageExtendedKeyUsageOcspSigning(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsages(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return v - } - l := v.([]interface{}) - transformed := make([]interface{}, 0, len(l)) - for _, raw := range l { - original := raw.(map[string]interface{}) - if len(original) < 1 { - // Do not include empty json objects coming back from the api - continue - } - transformed = append(transformed, map[string]interface{}{ - "obect_id": flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsagesObectId(original["obectId"], d, config), - }) - } - return transformed -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsagesObectId(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - if v == nil { - return nil - } - original := v.(map[string]interface{}) - if len(original) == 0 { - return nil - } - transformed := make(map[string]interface{}) - transformed["object_id_path"] = - flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsagesObectIdObjectIdPath(original["objectIdPath"], d, config) - return []interface{}{transformed} -} -func flattenPrivatecaCertificateCertificateDescriptionConfigValuesKeyUsageUnknownExtendedKeyUsagesObectIdObjectIdPath(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - func flattenPrivatecaCertificateCertificateDescriptionPublicKey(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -2584,10 +2262,6 @@ func flattenPrivatecaCertificatePemCertificateChain(v interface{}, d *schema.Res return v } -func flattenPrivatecaCertificatePemCertificates(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v -} - func flattenPrivatecaCertificateCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -2601,7 +2275,18 @@ func flattenPrivatecaCertificateCertificateTemplate(v interface{}, d *schema.Res } func flattenPrivatecaCertificateLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenPrivatecaCertificatePemCsr(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -2778,6 +2463,25 @@ func flattenPrivatecaCertificateConfigPublicKeyFormat(v interface{}, d *schema.R return v } +func flattenPrivatecaCertificateTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] 
+ } + } + + return transformed +} + +func flattenPrivatecaCertificateEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandPrivatecaCertificateLifetime(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -2786,17 +2490,6 @@ func expandPrivatecaCertificateCertificateTemplate(v interface{}, d tpgresource. return v, nil } -func expandPrivatecaCertificateLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandPrivatecaCertificatePemCsr(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -3102,3 +2795,14 @@ func expandPrivatecaCertificateConfigPublicKeyKey(v interface{}, d tpgresource.T func expandPrivatecaCertificateConfigPublicKeyFormat(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandPrivatecaCertificateEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/privateca/resource_privateca_certificate_authority.go b/google/services/privateca/resource_privateca_certificate_authority.go index 451e7b2e52b..c37dd0a0ac8 100644 --- a/google/services/privateca/resource_privateca_certificate_authority.go +++ b/google/services/privateca/resource_privateca_certificate_authority.go @@ -72,6 +72,8 @@ func ResourcePrivatecaCertificateAuthority() *schema.Resource { CustomizeDiff: customdiff.All( resourcePrivateCaCACustomDiff, + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -657,7 +659,11 @@ Use with care. Defaults to 'false'.`, Description: `Labels with user-defined metadata. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": -"1.3kg", "count": "3" }.`, +"1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "lifetime": { @@ -767,6 +773,12 @@ CAs that have been activated.`, A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. 
Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -789,6 +801,13 @@ CertificateAuthority's certificate.`, Computed: true, Description: `The State for this CertificateAuthority.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -864,10 +883,10 @@ func resourcePrivatecaCertificateAuthorityCreate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("gcs_bucket"); !tpgresource.IsEmptyValue(reflect.ValueOf(gcsBucketProp)) && (ok || !reflect.DeepEqual(v, gcsBucketProp)) { obj["gcsBucket"] = gcsBucketProp } - labelsProp, err := expandPrivatecaCertificateAuthorityLabels(d.Get("labels"), d, config) + labelsProp, err := expandPrivatecaCertificateAuthorityEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -1071,6 +1090,12 @@ func resourcePrivatecaCertificateAuthorityRead(d *schema.ResourceData, meta inte if err := d.Set("labels", flattenPrivatecaCertificateAuthorityLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading CertificateAuthority: %s", err) } + if err := d.Set("terraform_labels", flattenPrivatecaCertificateAuthorityTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateAuthority: %s", err) + } + if err := d.Set("effective_labels", flattenPrivatecaCertificateAuthorityEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading CertificateAuthority: %s", err) + } return nil } @@ -1097,10 +1122,10 @@ func resourcePrivatecaCertificateAuthorityUpdate(d *schema.ResourceData, meta in } else if v, ok := d.GetOkExists("subordinate_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, subordinateConfigProp)) { obj["subordinateConfig"] = subordinateConfigProp } - labelsProp, err := expandPrivatecaCertificateAuthorityLabels(d.Get("labels"), d, config) + labelsProp, err := expandPrivatecaCertificateAuthorityEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -1116,7 +1141,7 @@ func resourcePrivatecaCertificateAuthorityUpdate(d *schema.ResourceData, meta in updateMask = append(updateMask, "subordinateConfig") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a 
URL parameter but not present in the schema, so ReplaceVars @@ -1288,9 +1313,9 @@ func resourcePrivatecaCertificateAuthorityDelete(d *schema.ResourceData, meta in func resourcePrivatecaCertificateAuthorityImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)/certificateAuthorities/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/caPools/(?P[^/]+)/certificateAuthorities/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1573,6 +1598,36 @@ func flattenPrivatecaCertificateAuthorityUpdateTime(v interface{}, d *schema.Res } func flattenPrivatecaCertificateAuthorityLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPrivatecaCertificateAuthorityTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPrivatecaCertificateAuthorityEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -1936,7 +1991,7 @@ func expandPrivatecaCertificateAuthorityGcsBucket(v interface{}, d tpgresource.T return v, nil } -func expandPrivatecaCertificateAuthorityLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandPrivatecaCertificateAuthorityEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/privateca/resource_privateca_certificate_authority_generated_test.go b/google/services/privateca/resource_privateca_certificate_authority_generated_test.go index ea96c5d5242..65246957016 100644 --- a/google/services/privateca/resource_privateca_certificate_authority_generated_test.go +++ b/google/services/privateca/resource_privateca_certificate_authority_generated_test.go @@ -52,7 +52,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityBasicExam ResourceName: "google_privateca_certificate_authority.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection", "labels", "terraform_labels"}, }, }, }) @@ -134,7 +134,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthoritySubordina ResourceName: "google_privateca_certificate_authority.default", ImportState: true, 
ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection"}, + ImportStateVerifyIgnore: []string{"pem_ca_certificate", "ignore_active_certificates_on_deletion", "skip_grace_period", "location", "certificate_authority_id", "pool", "deletion_protection", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/privateca/resource_privateca_certificate_authority_test.go b/google/services/privateca/resource_privateca_certificate_authority_test.go index 1ff09da3405..de90fa7a245 100644 --- a/google/services/privateca/resource_privateca_certificate_authority_test.go +++ b/google/services/privateca/resource_privateca_certificate_authority_test.go @@ -36,7 +36,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityUpdate(t ResourceName: "google_privateca_certificate_authority.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period"}, + ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityEnd(context), @@ -45,7 +45,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityUpdate(t ResourceName: "google_privateca_certificate_authority.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period"}, + ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityBasicRoot(context), @@ -54,7 +54,7 @@ func TestAccPrivatecaCertificateAuthority_privatecaCertificateAuthorityUpdate(t ResourceName: "google_privateca_certificate_authority.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period"}, + ImportStateVerifyIgnore: []string{"ignore_active_certificates_on_deletion", "location", "certificate_authority_id", "pool", "deletion_protection", "skip_grace_period", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/privateca/resource_privateca_certificate_generated_test.go b/google/services/privateca/resource_privateca_certificate_generated_test.go index d7fb7e29150..9621916ea6b 100644 --- a/google/services/privateca/resource_privateca_certificate_generated_test.go +++ b/google/services/privateca/resource_privateca_certificate_generated_test.go @@ -51,7 +51,7 @@ func TestAccPrivatecaCertificate_privatecaCertificateConfigExample(t *testing.T) ResourceName: "google_privateca_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority"}, + ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"}, }, }, }) @@ -182,7 +182,7 @@ 
func TestAccPrivatecaCertificate_privatecaCertificateWithTemplateExample(t *test ResourceName: "google_privateca_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority"}, + ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"}, }, }, }) @@ -345,7 +345,7 @@ func TestAccPrivatecaCertificate_privatecaCertificateCsrExample(t *testing.T) { ResourceName: "google_privateca_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority"}, + ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"}, }, }, }) @@ -432,7 +432,7 @@ func TestAccPrivatecaCertificate_privatecaCertificateNoAuthorityExample(t *testi ResourceName: "google_privateca_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority"}, + ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/privateca/resource_privateca_certificate_template.go b/google/services/privateca/resource_privateca_certificate_template.go index 72e7f714350..f98d8865ccb 100644 --- a/google/services/privateca/resource_privateca_certificate_template.go +++ b/google/services/privateca/resource_privateca_certificate_template.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourcePrivatecaCertificateTemplate() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "location": { @@ -72,6 +77,12 @@ func ResourcePrivatecaCertificateTemplate() *schema.Resource { Description: "Optional. A human-readable description of scenarios this template is intended for.", }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", + }, + "identity_constraints": { Type: schema.TypeList, Optional: true, @@ -80,13 +91,6 @@ func ResourcePrivatecaCertificateTemplate() *schema.Resource { Elem: PrivatecaCertificateTemplateIdentityConstraintsSchema(), }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: "Optional. Labels with user-defined metadata.", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "passthrough_extensions": { Type: schema.TypeList, Optional: true, @@ -118,6 +122,19 @@ func ResourcePrivatecaCertificateTemplate() *schema.Resource { Description: "Output only. The time at which this CertificateTemplate was created.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "Optional. Labels with user-defined metadata.\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, + "update_time": { Type: schema.TypeString, Computed: true, @@ -484,8 +501,8 @@ func resourcePrivatecaCertificateTemplateCreate(d *schema.ResourceData, meta int Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IdentityConstraints: expandPrivatecaCertificateTemplateIdentityConstraints(d.Get("identity_constraints")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), PassthroughExtensions: expandPrivatecaCertificateTemplatePassthroughExtensions(d.Get("passthrough_extensions")), PredefinedValues: expandPrivatecaCertificateTemplatePredefinedValues(d.Get("predefined_values")), Project: dcl.String(project), @@ -539,8 +556,8 @@ func resourcePrivatecaCertificateTemplateRead(d *schema.ResourceData, meta inter Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IdentityConstraints: expandPrivatecaCertificateTemplateIdentityConstraints(d.Get("identity_constraints")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), PassthroughExtensions: expandPrivatecaCertificateTemplatePassthroughExtensions(d.Get("passthrough_extensions")), PredefinedValues: expandPrivatecaCertificateTemplatePredefinedValues(d.Get("predefined_values")), Project: dcl.String(project), @@ -577,12 +594,12 @@ func resourcePrivatecaCertificateTemplateRead(d *schema.ResourceData, meta inter if err = d.Set("description", res.Description); err != nil { return fmt.Errorf("error setting description in state: %s", err) } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) + } if err = d.Set("identity_constraints", flattenPrivatecaCertificateTemplateIdentityConstraints(res.IdentityConstraints)); err != nil { return fmt.Errorf("error setting identity_constraints in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) - } if err = d.Set("passthrough_extensions", flattenPrivatecaCertificateTemplatePassthroughExtensions(res.PassthroughExtensions)); err != nil { return fmt.Errorf("error setting passthrough_extensions in state: %s", err) } @@ -595,6 +612,12 @@ func resourcePrivatecaCertificateTemplateRead(d *schema.ResourceData, meta inter if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = d.Set("labels", flattenPrivatecaCertificateTemplateLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } + if err = d.Set("terraform_labels", flattenPrivatecaCertificateTemplateTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } if err = d.Set("update_time", res.UpdateTime); err != nil { return fmt.Errorf("error setting update_time in state: %s", err) } @@ -612,8 +635,8 @@ func resourcePrivatecaCertificateTemplateUpdate(d 
*schema.ResourceData, meta int Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IdentityConstraints: expandPrivatecaCertificateTemplateIdentityConstraints(d.Get("identity_constraints")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), PassthroughExtensions: expandPrivatecaCertificateTemplatePassthroughExtensions(d.Get("passthrough_extensions")), PredefinedValues: expandPrivatecaCertificateTemplatePredefinedValues(d.Get("predefined_values")), Project: dcl.String(project), @@ -662,8 +685,8 @@ func resourcePrivatecaCertificateTemplateDelete(d *schema.ResourceData, meta int Location: dcl.String(d.Get("location").(string)), Name: dcl.String(d.Get("name").(string)), Description: dcl.String(d.Get("description").(string)), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IdentityConstraints: expandPrivatecaCertificateTemplateIdentityConstraints(d.Get("identity_constraints")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), PassthroughExtensions: expandPrivatecaCertificateTemplatePassthroughExtensions(d.Get("passthrough_extensions")), PredefinedValues: expandPrivatecaCertificateTemplatePredefinedValues(d.Get("predefined_values")), Project: dcl.String(project), @@ -1224,6 +1247,37 @@ func flattenPrivatecaCertificateTemplatePredefinedValuesPolicyIds(obj *privateca return transformed } + +func flattenPrivatecaCertificateTemplateLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenPrivatecaCertificateTemplateTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + func flattenPrivatecaCertificateTemplatePassthroughExtensionsKnownExtensionsArray(obj []privateca.CertificateTemplatePassthroughExtensionsKnownExtensionsEnum) interface{} { if obj == nil { return nil diff --git a/google/services/privateca/resource_privateca_certificate_template_generated_test.go b/google/services/privateca/resource_privateca_certificate_template_generated_test.go index b0f77ad2588..cd62c899210 100644 --- a/google/services/privateca/resource_privateca_certificate_template_generated_test.go +++ b/google/services/privateca/resource_privateca_certificate_template_generated_test.go @@ -54,7 +54,7 @@ func TestAccPrivatecaCertificateTemplate_BasicCertificateTemplate(t *testing.T) ResourceName: "google_privateca_certificate_template.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"predefined_values.0.key_usage.0.extended_key_usage"}, + ImportStateVerifyIgnore: []string{"predefined_values.0.key_usage.0.extended_key_usage", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCertificateTemplate_BasicCertificateTemplateUpdate0(context), @@ -63,7 +63,7 @@ func TestAccPrivatecaCertificateTemplate_BasicCertificateTemplate(t *testing.T) ResourceName: "google_privateca_certificate_template.primary", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"predefined_values.0.key_usage.0.extended_key_usage"}, + ImportStateVerifyIgnore: []string{"predefined_values.0.key_usage.0.extended_key_usage", "labels", "terraform_labels"}, }, }, }) @@ -88,10 +88,6 @@ resource "google_privateca_certificate_template" "primary" { } } - labels = { - label-two = "value-two" - } - passthrough_extensions { additional_extensions { object_id_path = [1, 6] @@ -150,6 +146,10 @@ resource "google_privateca_certificate_template" "primary" { } project = "%{project_name}" + + labels = { + label-two = "value-two" + } } @@ -175,10 +175,6 @@ resource "google_privateca_certificate_template" "primary" { } } - labels = { - label-one = "value-one" - } - passthrough_extensions { additional_extensions { object_id_path = [1, 7] @@ -237,6 +233,10 @@ resource "google_privateca_certificate_template" "primary" { } project = "%{project_name}" + + labels = { + label-one = "value-one" + } } diff --git a/google/services/privateca/resource_privateca_certificate_test.go b/google/services/privateca/resource_privateca_certificate_test.go index 96f14ce3a15..8d46388be6e 100644 --- a/google/services/privateca/resource_privateca_certificate_test.go +++ b/google/services/privateca/resource_privateca_certificate_test.go @@ -38,7 +38,7 @@ func TestAccPrivatecaCertificate_privatecaCertificateUpdate(t *testing.T) { ResourceName: "google_privateca_certificate.default", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority"}, + ImportStateVerifyIgnore: []string{"pool", "name", "location", "certificate_authority", "labels", "terraform_labels"}, }, { Config: testAccPrivatecaCertificate_privatecaCertificateStart(context), diff --git a/google/services/publicca/resource_public_ca_external_account_key.go b/google/services/publicca/resource_public_ca_external_account_key.go index e67897fa13a..4b3c7037124 100644 --- a/google/services/publicca/resource_public_ca_external_account_key.go +++ b/google/services/publicca/resource_public_ca_external_account_key.go @@ -22,6 +22,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -39,6 +40,10 @@ func ResourcePublicCAExternalAccountKey() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "location": { Type: schema.TypeString, diff --git a/google/services/pubsub/data_source_pubsub_subscription.go b/google/services/pubsub/data_source_pubsub_subscription.go index 6896baa2245..603c3d02b9e 100644 --- a/google/services/pubsub/data_source_pubsub_subscription.go +++ b/google/services/pubsub/data_source_pubsub_subscription.go @@ -30,5 +30,17 @@ func dataSourceGooglePubsubSubscriptionRead(d *schema.ResourceData, meta interfa return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourcePubsubSubscriptionRead(d, meta) + err = resourcePubsubSubscriptionRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/pubsub/data_source_pubsub_subscription_test.go b/google/services/pubsub/data_source_pubsub_subscription_test.go index 38159e8c1e7..08c043e52c2 100644 --- 
a/google/services/pubsub/data_source_pubsub_subscription_test.go +++ b/google/services/pubsub/data_source_pubsub_subscription_test.go @@ -62,6 +62,9 @@ resource "google_pubsub_topic" "foo" { resource "google_pubsub_subscription" "foo" { name = "tf-test-pubsub-subscription-%{random_suffix}" topic = google_pubsub_topic.foo.name + labels = { + my-label = "my-label-value" + } } data "google_pubsub_subscription" "foo" { diff --git a/google/services/pubsub/data_source_pubsub_topic.go b/google/services/pubsub/data_source_pubsub_topic.go index 7cbd0de400f..a49474fdcf1 100644 --- a/google/services/pubsub/data_source_pubsub_topic.go +++ b/google/services/pubsub/data_source_pubsub_topic.go @@ -30,5 +30,17 @@ func dataSourceGooglePubsubTopicRead(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourcePubsubTopicRead(d, meta) + err = resourcePubsubTopicRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/pubsub/data_source_pubsub_topic_test.go b/google/services/pubsub/data_source_pubsub_topic_test.go index 00b84659b03..bdc236881f7 100644 --- a/google/services/pubsub/data_source_pubsub_topic_test.go +++ b/google/services/pubsub/data_source_pubsub_topic_test.go @@ -57,6 +57,9 @@ func testAccDataSourceGooglePubsubTopic_basic(context map[string]interface{}) st return acctest.Nprintf(` resource "google_pubsub_topic" "foo" { name = "tf-test-pubsub-%{random_suffix}" + labels = { + my-label = "my-label-value" + } } data "google_pubsub_topic" "foo" { diff --git a/google/services/pubsub/resource_pubsub_schema.go b/google/services/pubsub/resource_pubsub_schema.go index 6673221e564..f010f4250d2 100644 --- a/google/services/pubsub/resource_pubsub_schema.go +++ b/google/services/pubsub/resource_pubsub_schema.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourcePubsubSchema() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -373,9 +378,9 @@ func resourcePubsubSchemaDelete(d *schema.ResourceData, meta interface{}) error func resourcePubsubSchemaImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/schemas/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/schemas/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/pubsub/resource_pubsub_subscription.go b/google/services/pubsub/resource_pubsub_subscription.go index b3e012e91ef..513d523f97e 100644 --- a/google/services/pubsub/resource_pubsub_subscription.go +++ b/google/services/pubsub/resource_pubsub_subscription.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -60,6 +61,11 @@ func 
ResourcePubsubSubscription() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -300,10 +306,14 @@ by their attributes. The maximum length of a filter is 256 bytes. After creating you can't modify the filter.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this Subscription.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this Subscription. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "message_retention_duration": { Type: schema.TypeString, @@ -452,6 +462,19 @@ A duration in seconds with up to nine fractional digits, terminated by 's'. Exam }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -483,12 +506,6 @@ func resourcePubsubSubscriptionCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("topic"); !tpgresource.IsEmptyValue(reflect.ValueOf(topicProp)) && (ok || !reflect.DeepEqual(v, topicProp)) { obj["topic"] = topicProp } - labelsProp, err := expandPubsubSubscriptionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } bigqueryConfigProp, err := expandPubsubSubscriptionBigqueryConfig(d.Get("bigquery_config"), d, config) if err != nil { return err @@ -561,6 +578,12 @@ func resourcePubsubSubscriptionCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_exactly_once_delivery"); !tpgresource.IsEmptyValue(reflect.ValueOf(enableExactlyOnceDeliveryProp)) && (ok || !reflect.DeepEqual(v, enableExactlyOnceDeliveryProp)) { obj["enableExactlyOnceDelivery"] = enableExactlyOnceDeliveryProp } + labelsProp, err := expandPubsubSubscriptionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourcePubsubSubscriptionEncoder(d, meta, obj) if err != nil { @@ -742,6 +765,12 @@ func resourcePubsubSubscriptionRead(d *schema.ResourceData, meta interface{}) er if err := d.Set("enable_exactly_once_delivery", flattenPubsubSubscriptionEnableExactlyOnceDelivery(res["enableExactlyOnceDelivery"], d, config)); err != nil { return fmt.Errorf("Error reading Subscription: %s", err) } + if err := d.Set("terraform_labels", 
flattenPubsubSubscriptionTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Subscription: %s", err) + } + if err := d.Set("effective_labels", flattenPubsubSubscriptionEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Subscription: %s", err) + } return nil } @@ -762,12 +791,6 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{}) billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandPubsubSubscriptionLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } bigqueryConfigProp, err := expandPubsubSubscriptionBigqueryConfig(d.Get("bigquery_config"), d, config) if err != nil { return err @@ -828,6 +851,12 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("enable_exactly_once_delivery"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, enableExactlyOnceDeliveryProp)) { obj["enableExactlyOnceDelivery"] = enableExactlyOnceDeliveryProp } + labelsProp, err := expandPubsubSubscriptionEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourcePubsubSubscriptionUpdateEncoder(d, meta, obj) if err != nil { @@ -842,10 +871,6 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Updating Subscription %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("bigquery_config") { updateMask = append(updateMask, "bigqueryConfig") } @@ -885,6 +910,10 @@ func resourcePubsubSubscriptionUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("enable_exactly_once_delivery") { updateMask = append(updateMask, "enableExactlyOnceDelivery") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -964,9 +993,9 @@ func resourcePubsubSubscriptionDelete(d *schema.ResourceData, meta interface{}) func resourcePubsubSubscriptionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/subscriptions/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/subscriptions/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -996,7 +1025,18 @@ func flattenPubsubSubscriptionTopic(v interface{}, d *schema.ResourceData, confi } func flattenPubsubSubscriptionLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func 
flattenPubsubSubscriptionBigqueryConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1290,6 +1330,25 @@ func flattenPubsubSubscriptionEnableExactlyOnceDelivery(v interface{}, d *schema return v } +func flattenPubsubSubscriptionTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPubsubSubscriptionEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandPubsubSubscriptionName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.ReplaceVars(d, config, "projects/{{project}}/subscriptions/{{name}}") } @@ -1316,17 +1375,6 @@ func expandPubsubSubscriptionTopic(v interface{}, d tpgresource.TerraformResourc } } -func expandPubsubSubscriptionLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandPubsubSubscriptionBigqueryConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -1723,6 +1771,17 @@ func expandPubsubSubscriptionEnableExactlyOnceDelivery(v interface{}, d tpgresou return v, nil } +func expandPubsubSubscriptionEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourcePubsubSubscriptionEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { delete(obj, "name") return obj, nil diff --git a/google/services/pubsub/resource_pubsub_subscription_generated_test.go b/google/services/pubsub/resource_pubsub_subscription_generated_test.go index c93d4aa5273..8b7a2d148cf 100644 --- a/google/services/pubsub/resource_pubsub_subscription_generated_test.go +++ b/google/services/pubsub/resource_pubsub_subscription_generated_test.go @@ -49,7 +49,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionPushExample(t *testing.T) { ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: []string{"topic", "labels", "terraform_labels"}, }, }, }) @@ -101,7 +101,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionPullExample(t *testing.T) { ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: []string{"topic", "labels", "terraform_labels"}, }, }, }) @@ -158,7 +158,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionDeadLetterExample(t *testing.T) ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: 
[]string{"topic", "labels", "terraform_labels"}, }, }, }) @@ -205,7 +205,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionPushBqExample(t *testing.T) { ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: []string{"topic", "labels", "terraform_labels"}, }, }, }) @@ -285,7 +285,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionPushCloudstorageExample(t *test ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: []string{"topic", "labels", "terraform_labels"}, }, }, }) @@ -352,7 +352,7 @@ func TestAccPubsubSubscription_pubsubSubscriptionPushCloudstorageAvroExample(t * ResourceName: "google_pubsub_subscription.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"topic"}, + ImportStateVerifyIgnore: []string{"topic", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/pubsub/resource_pubsub_subscription_test.go b/google/services/pubsub/resource_pubsub_subscription_test.go index a6c96f1deff..bf74e74a252 100644 --- a/google/services/pubsub/resource_pubsub_subscription_test.go +++ b/google/services/pubsub/resource_pubsub_subscription_test.go @@ -53,10 +53,11 @@ func TestAccPubsubSubscription_basic(t *testing.T) { Config: testAccPubsubSubscription_basic(topic, subscription, "bar", 20, false), }, { - ResourceName: "google_pubsub_subscription.foo", - ImportStateId: subscription, - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_subscription.foo", + ImportStateId: subscription, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -77,19 +78,21 @@ func TestAccPubsubSubscription_update(t *testing.T) { Config: testAccPubsubSubscription_basic(topic, subscriptionShort, "bar", 20, false), }, { - ResourceName: "google_pubsub_subscription.foo", - ImportStateId: subscriptionShort, - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_subscription.foo", + ImportStateId: subscriptionShort, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccPubsubSubscription_basic(topic, subscriptionShort, "baz", 30, true), }, { - ResourceName: "google_pubsub_subscription.foo", - ImportStateId: subscriptionShort, - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_subscription.foo", + ImportStateId: subscriptionShort, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/pubsub/resource_pubsub_topic.go b/google/services/pubsub/resource_pubsub_topic.go index c3796fcfbbb..8b6dd482975 100644 --- a/google/services/pubsub/resource_pubsub_topic.go +++ b/google/services/pubsub/resource_pubsub_topic.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,11 @@ func ResourcePubsubTopic() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: 
schema.TypeString, @@ -66,10 +72,14 @@ to messages published on this topic. Your project's PubSub service account The expected format is 'projects/*/locations/*/keyRings/*/cryptoKeys/*'`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this Topic.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this Topic. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "message_retention_duration": { Type: schema.TypeString, @@ -134,6 +144,19 @@ if the schema has been deleted.`, }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -165,12 +188,6 @@ func resourcePubsubTopicCreate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("kms_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(kmsKeyNameProp)) && (ok || !reflect.DeepEqual(v, kmsKeyNameProp)) { obj["kmsKeyName"] = kmsKeyNameProp } - labelsProp, err := expandPubsubTopicLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } messageStoragePolicyProp, err := expandPubsubTopicMessageStoragePolicy(d.Get("message_storage_policy"), d, config) if err != nil { return err @@ -189,6 +206,12 @@ func resourcePubsubTopicCreate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("message_retention_duration"); !tpgresource.IsEmptyValue(reflect.ValueOf(messageRetentionDurationProp)) && (ok || !reflect.DeepEqual(v, messageRetentionDurationProp)) { obj["messageRetentionDuration"] = messageRetentionDurationProp } + labelsProp, err := expandPubsubTopicEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourcePubsubTopicEncoder(d, meta, obj) if err != nil { @@ -346,6 +369,12 @@ func resourcePubsubTopicRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("message_retention_duration", flattenPubsubTopicMessageRetentionDuration(res["messageRetentionDuration"], d, config)); err != nil { return fmt.Errorf("Error reading Topic: %s", err) } + if err := d.Set("terraform_labels", flattenPubsubTopicTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Topic: %s", err) + } + if err := d.Set("effective_labels", flattenPubsubTopicEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Topic: %s", 
err) + } return nil } @@ -372,12 +401,6 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("kms_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, kmsKeyNameProp)) { obj["kmsKeyName"] = kmsKeyNameProp } - labelsProp, err := expandPubsubTopicLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } messageStoragePolicyProp, err := expandPubsubTopicMessageStoragePolicy(d.Get("message_storage_policy"), d, config) if err != nil { return err @@ -396,6 +419,12 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("message_retention_duration"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, messageRetentionDurationProp)) { obj["messageRetentionDuration"] = messageRetentionDurationProp } + labelsProp, err := expandPubsubTopicEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourcePubsubTopicUpdateEncoder(d, meta, obj) if err != nil { @@ -414,10 +443,6 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error { updateMask = append(updateMask, "kmsKeyName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("message_storage_policy") { updateMask = append(updateMask, "messageStoragePolicy") } @@ -429,6 +454,10 @@ func resourcePubsubTopicUpdate(d *schema.ResourceData, meta interface{}) error { if d.HasChange("message_retention_duration") { updateMask = append(updateMask, "messageRetentionDuration") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -510,9 +539,9 @@ func resourcePubsubTopicDelete(d *schema.ResourceData, meta interface{}) error { func resourcePubsubTopicImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/topics/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/topics/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } @@ -539,7 +568,18 @@ func flattenPubsubTopicKmsKeyName(v interface{}, d *schema.ResourceData, config } func flattenPubsubTopicLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenPubsubTopicMessageStoragePolicy(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -586,6 +626,25 @@ func flattenPubsubTopicMessageRetentionDuration(v interface{}, d *schema.Resourc return v } +func flattenPubsubTopicTerraformLabels(v
interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenPubsubTopicEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandPubsubTopicName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return tpgresource.GetResourceNameFromSelfLink(v.(string)), nil } @@ -594,17 +653,6 @@ func expandPubsubTopicKmsKeyName(v interface{}, d tpgresource.TerraformResourceD return v, nil } -func expandPubsubTopicLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandPubsubTopicMessageStoragePolicy(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -666,6 +714,17 @@ func expandPubsubTopicMessageRetentionDuration(v interface{}, d tpgresource.Terr return v, nil } +func expandPubsubTopicEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourcePubsubTopicEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { delete(obj, "name") return obj, nil diff --git a/google/services/pubsub/resource_pubsub_topic_generated_test.go b/google/services/pubsub/resource_pubsub_topic_generated_test.go index 9ba5657aee2..ef71e8b2b79 100644 --- a/google/services/pubsub/resource_pubsub_topic_generated_test.go +++ b/google/services/pubsub/resource_pubsub_topic_generated_test.go @@ -47,9 +47,10 @@ func TestAccPubsubTopic_pubsubTopicBasicExample(t *testing.T) { Config: testAccPubsubTopic_pubsubTopicBasicExample(context), }, { - ResourceName: "google_pubsub_topic.example", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_topic.example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -85,9 +86,10 @@ func TestAccPubsubTopic_pubsubTopicGeoRestrictedExample(t *testing.T) { Config: testAccPubsubTopic_pubsubTopicGeoRestrictedExample(context), }, { - ResourceName: "google_pubsub_topic.example", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_topic.example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -124,9 +126,10 @@ func TestAccPubsubTopic_pubsubTopicSchemaSettingsExample(t *testing.T) { Config: testAccPubsubTopic_pubsubTopicSchemaSettingsExample(context), }, { - ResourceName: "google_pubsub_topic.example", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_topic.example", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff 
--git a/google/services/pubsub/resource_pubsub_topic_test.go b/google/services/pubsub/resource_pubsub_topic_test.go index a0ca061c8ba..b71353d1fee 100644 --- a/google/services/pubsub/resource_pubsub_topic_test.go +++ b/google/services/pubsub/resource_pubsub_topic_test.go @@ -24,19 +24,21 @@ func TestAccPubsubTopic_update(t *testing.T) { Config: testAccPubsubTopic_update(topic, "foo", "bar"), }, { - ResourceName: "google_pubsub_topic.foo", - ImportStateId: topic, - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_topic.foo", + ImportStateId: topic, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccPubsubTopic_updateWithRegion(topic, "wibble", "wobble", "us-central1"), }, { - ResourceName: "google_pubsub_topic.foo", - ImportStateId: topic, - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_pubsub_topic.foo", + ImportStateId: topic, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/pubsublite/resource_pubsub_lite_reservation.go b/google/services/pubsublite/resource_pubsub_lite_reservation.go index 88a7fe953b7..9348952642d 100644 --- a/google/services/pubsublite/resource_pubsub_lite_reservation.go +++ b/google/services/pubsublite/resource_pubsub_lite_reservation.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourcePubsubLiteReservation() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -297,10 +302,10 @@ func resourcePubsubLiteReservationDelete(d *schema.ResourceData, meta interface{ func resourcePubsubLiteReservationImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/reservations/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<name>[^/]+)", - "(?P<region>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/reservations/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<name>[^/]+)$", + "^(?P<region>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/pubsublite/resource_pubsub_lite_subscription.go b/google/services/pubsublite/resource_pubsub_lite_subscription.go index 1ca022bf4f9..238a78a5317 100644 --- a/google/services/pubsublite/resource_pubsub_lite_subscription.go +++ b/google/services/pubsublite/resource_pubsub_lite_subscription.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -49,6 +50,10 @@ func ResourcePubsubLiteSubscription() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -339,10 +344,10 @@ func resourcePubsubLiteSubscriptionDelete(d *schema.ResourceData, meta interface func
resourcePubsubLiteSubscriptionImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<zone>[^/]+)/subscriptions/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<zone>[^/]+)/(?P<name>[^/]+)", - "(?P<zone>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<zone>[^/]+)/subscriptions/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<zone>[^/]+)/(?P<name>[^/]+)$", + "^(?P<zone>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/pubsublite/resource_pubsub_lite_topic.go b/google/services/pubsublite/resource_pubsub_lite_topic.go index 9f458166e43..4214040b543 100644 --- a/google/services/pubsublite/resource_pubsub_lite_topic.go +++ b/google/services/pubsublite/resource_pubsub_lite_topic.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourcePubsubLiteTopic() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -419,10 +424,10 @@ func resourcePubsubLiteTopicDelete(d *schema.ResourceData, meta interface{}) err func resourcePubsubLiteTopicImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<zone>[^/]+)/topics/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<zone>[^/]+)/(?P<name>[^/]+)", - "(?P<zone>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<zone>[^/]+)/topics/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<zone>[^/]+)/(?P<name>[^/]+)$", + "^(?P<zone>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key.go b/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key.go index 89d7e844716..2f948904e0b 100644 --- a/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key.go +++ b/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key.go @@ -24,6 +24,7 @@ import ( "log" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" dcl "github.com/GoogleCloudPlatform/declarative-resource-client-library/dcl" @@ -50,6 +51,10 @@ func ResourceRecaptchaEnterpriseKey() *schema.Resource { Update: schema.DefaultTimeout(20 * time.Minute), Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, + ), Schema: map[string]*schema.Schema{ "display_name": { @@ -67,6 +72,12 @@ func ResourceRecaptchaEnterpriseKey() *schema.Resource { ConflictsWith: []string{"web_settings", "ios_settings"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.", + }, + "ios_settings": { Type: schema.TypeList, Optional: true, @@ -76,13 +87,6 @@ func ResourceRecaptchaEnterpriseKey() *schema.Resource { ConflictsWith: []string{"web_settings", "android_settings"}, }, - "labels": { - Type: schema.TypeMap, - Optional: true, - Description: "See [Creating
and managing labels](https://cloud.google.com/recaptcha-enterprise/docs/labels).", - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "project": { Type: schema.TypeString, Computed: true, @@ -116,11 +120,24 @@ func ResourceRecaptchaEnterpriseKey() *schema.Resource { Description: "The timestamp corresponding to the creation of this Key.", }, + "labels": { + Type: schema.TypeMap, + Optional: true, + Description: "See [Creating and managing labels](https://cloud.google.com/recaptcha-enterprise/docs/labels).\n\n**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource.", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "name": { Type: schema.TypeString, Computed: true, Description: "The resource name for the Key in the format \"projects/{project}/keys/{key}\".", }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: "The combination of labels configured directly on the resource and default labels configured on the provider.", + }, }, } } @@ -233,8 +250,8 @@ func resourceRecaptchaEnterpriseKeyCreate(d *schema.ResourceData, meta interface obj := &recaptchaenterprise.Key{ DisplayName: dcl.String(d.Get("display_name").(string)), AndroidSettings: expandRecaptchaEnterpriseKeyAndroidSettings(d.Get("android_settings")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IosSettings: expandRecaptchaEnterpriseKeyIosSettings(d.Get("ios_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), TestingOptions: expandRecaptchaEnterpriseKeyTestingOptions(d.Get("testing_options")), WebSettings: expandRecaptchaEnterpriseKeyWebSettings(d.Get("web_settings")), @@ -298,8 +315,8 @@ func resourceRecaptchaEnterpriseKeyRead(d *schema.ResourceData, meta interface{} obj := &recaptchaenterprise.Key{ DisplayName: dcl.String(d.Get("display_name").(string)), AndroidSettings: expandRecaptchaEnterpriseKeyAndroidSettings(d.Get("android_settings")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IosSettings: expandRecaptchaEnterpriseKeyIosSettings(d.Get("ios_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), TestingOptions: expandRecaptchaEnterpriseKeyTestingOptions(d.Get("testing_options")), WebSettings: expandRecaptchaEnterpriseKeyWebSettings(d.Get("web_settings")), @@ -334,12 +351,12 @@ func resourceRecaptchaEnterpriseKeyRead(d *schema.ResourceData, meta interface{} if err = d.Set("android_settings", flattenRecaptchaEnterpriseKeyAndroidSettings(res.AndroidSettings)); err != nil { return fmt.Errorf("error setting android_settings in state: %s", err) } + if err = d.Set("effective_labels", res.Labels); err != nil { + return fmt.Errorf("error setting effective_labels in state: %s", err) + } if err = d.Set("ios_settings", flattenRecaptchaEnterpriseKeyIosSettings(res.IosSettings)); err != nil { return fmt.Errorf("error setting ios_settings in state: %s", err) } - if err = d.Set("labels", res.Labels); err != nil { - return fmt.Errorf("error setting labels in state: %s", err) - } if err = d.Set("project", res.Project); err != nil { return fmt.Errorf("error setting project in state: %s", err) } @@ -352,9 +369,15 @@ func resourceRecaptchaEnterpriseKeyRead(d *schema.ResourceData, meta interface{} if err = d.Set("create_time", res.CreateTime); err != nil { return fmt.Errorf("error setting create_time in state: %s", err) } + if err = 
d.Set("labels", flattenRecaptchaEnterpriseKeyLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting labels in state: %s", err) + } if err = d.Set("name", res.Name); err != nil { return fmt.Errorf("error setting name in state: %s", err) } + if err = d.Set("terraform_labels", flattenRecaptchaEnterpriseKeyTerraformLabels(res.Labels, d)); err != nil { + return fmt.Errorf("error setting terraform_labels in state: %s", err) + } return nil } @@ -368,8 +391,8 @@ func resourceRecaptchaEnterpriseKeyUpdate(d *schema.ResourceData, meta interface obj := &recaptchaenterprise.Key{ DisplayName: dcl.String(d.Get("display_name").(string)), AndroidSettings: expandRecaptchaEnterpriseKeyAndroidSettings(d.Get("android_settings")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IosSettings: expandRecaptchaEnterpriseKeyIosSettings(d.Get("ios_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), TestingOptions: expandRecaptchaEnterpriseKeyTestingOptions(d.Get("testing_options")), WebSettings: expandRecaptchaEnterpriseKeyWebSettings(d.Get("web_settings")), @@ -418,8 +441,8 @@ func resourceRecaptchaEnterpriseKeyDelete(d *schema.ResourceData, meta interface obj := &recaptchaenterprise.Key{ DisplayName: dcl.String(d.Get("display_name").(string)), AndroidSettings: expandRecaptchaEnterpriseKeyAndroidSettings(d.Get("android_settings")), + Labels: tpgresource.CheckStringMap(d.Get("effective_labels")), IosSettings: expandRecaptchaEnterpriseKeyIosSettings(d.Get("ios_settings")), - Labels: tpgresource.CheckStringMap(d.Get("labels")), Project: dcl.String(project), TestingOptions: expandRecaptchaEnterpriseKeyTestingOptions(d.Get("testing_options")), WebSettings: expandRecaptchaEnterpriseKeyWebSettings(d.Get("web_settings")), @@ -589,3 +612,33 @@ func flattenRecaptchaEnterpriseKeyWebSettings(obj *recaptchaenterprise.KeyWebSet return []interface{}{transformed} } + +func flattenRecaptchaEnterpriseKeyLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} + +func flattenRecaptchaEnterpriseKeyTerraformLabels(v map[string]string, d *schema.ResourceData) interface{} { + if v == nil { + return nil + } + + transformed := make(map[string]interface{}) + if l, ok := d.Get("terraform_labels").(map[string]interface{}); ok { + for k, _ := range l { + transformed[k] = v[k] + } + } + + return transformed +} diff --git a/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key_generated_test.go b/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key_generated_test.go index 23dafeb08ae..e09781f99d0 100644 --- a/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key_generated_test.go +++ b/google/services/recaptchaenterprise/resource_recaptcha_enterprise_key_generated_test.go @@ -50,17 +50,19 @@ func TestAccRecaptchaEnterpriseKey_AndroidKey(t *testing.T) { Config: testAccRecaptchaEnterpriseKey_AndroidKey(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRecaptchaEnterpriseKey_AndroidKeyUpdate0(context), }, { - ResourceName: 
"google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -82,17 +84,19 @@ func TestAccRecaptchaEnterpriseKey_IosKey(t *testing.T) { Config: testAccRecaptchaEnterpriseKey_IosKey(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRecaptchaEnterpriseKey_IosKeyUpdate0(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -114,9 +118,10 @@ func TestAccRecaptchaEnterpriseKey_MinimalKey(t *testing.T) { Config: testAccRecaptchaEnterpriseKey_MinimalKey(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -138,17 +143,19 @@ func TestAccRecaptchaEnterpriseKey_WebKey(t *testing.T) { Config: testAccRecaptchaEnterpriseKey_WebKey(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRecaptchaEnterpriseKey_WebKeyUpdate0(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -170,17 +177,19 @@ func TestAccRecaptchaEnterpriseKey_WebScoreKey(t *testing.T) { Config: testAccRecaptchaEnterpriseKey_WebScoreKey(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRecaptchaEnterpriseKey_WebScoreKeyUpdate0(context), }, { - ResourceName: "google_recaptcha_enterprise_key.primary", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_recaptcha_enterprise_key.primary", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) @@ -196,15 +205,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_package_names = [] } - labels = { - label-one = "value-one" - } - project = "%{project_name}" testing_options { testing_score = 0.8 } + + labels = { + label-one = "value-one" + } } @@ -221,15 +230,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_package_names = ["com.android.application"] } - labels = { - label-two = "value-two" - } - project = "%{project_name}" 
testing_options { testing_score = 0.8 } + + labels = { + label-two = "value-two" + } } @@ -246,15 +255,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_bundle_ids = [] } - labels = { - label-one = "value-one" - } - project = "%{project_name}" testing_options { testing_score = 1 } + + labels = { + label-one = "value-one" + } } @@ -271,15 +280,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_bundle_ids = ["com.companyname.appname"] } - labels = { - label-two = "value-two" - } - project = "%{project_name}" testing_options { testing_score = 1 } + + labels = { + label-two = "value-two" + } } @@ -290,13 +299,14 @@ func testAccRecaptchaEnterpriseKey_MinimalKey(context map[string]interface{}) st return acctest.Nprintf(` resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - labels = {} project = "%{project_name}" web_settings { integration_type = "SCORE" allow_all_domains = true } + + labels = {} } @@ -307,12 +317,7 @@ func testAccRecaptchaEnterpriseKey_WebKey(context map[string]interface{}) string return acctest.Nprintf(` resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - - labels = { - label-one = "value-one" - } - - project = "%{project_name}" + project = "%{project_name}" testing_options { testing_challenge = "NOCAPTCHA" @@ -325,6 +330,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_domains = [] challenge_security_preference = "USABILITY" } + + labels = { + label-one = "value-one" + } } @@ -335,12 +344,7 @@ func testAccRecaptchaEnterpriseKey_WebKeyUpdate0(context map[string]interface{}) return acctest.Nprintf(` resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-two" - - labels = { - label-two = "value-two" - } - - project = "%{project_name}" + project = "%{project_name}" testing_options { testing_challenge = "NOCAPTCHA" @@ -353,6 +357,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_domains = ["subdomain.example.com"] challenge_security_preference = "SECURITY" } + + labels = { + label-two = "value-two" + } } @@ -363,12 +371,7 @@ func testAccRecaptchaEnterpriseKey_WebScoreKey(context map[string]interface{}) s return acctest.Nprintf(` resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - - labels = { - label-one = "value-one" - } - - project = "%{project_name}" + project = "%{project_name}" testing_options { testing_score = 0.5 @@ -380,6 +383,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allow_amp_traffic = false allowed_domains = [] } + + labels = { + label-one = "value-one" + } } @@ -390,12 +397,7 @@ func testAccRecaptchaEnterpriseKey_WebScoreKeyUpdate0(context map[string]interfa return acctest.Nprintf(` resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-two" - - labels = { - label-two = "value-two" - } - - project = "%{project_name}" + project = "%{project_name}" testing_options { testing_score = 0.5 @@ -407,6 +409,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allow_amp_traffic = true allowed_domains = ["subdomain.example.com"] } + + labels = { + label-two = "value-two" + } } diff --git a/google/services/redis/data_source_redis_instance.go b/google/services/redis/data_source_redis_instance.go index 00d57581358..dd73b7ae288 100644 --- a/google/services/redis/data_source_redis_instance.go +++ b/google/services/redis/data_source_redis_instance.go @@ -3,6 +3,8 @@ package redis import ( + "fmt" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" @@ -31,5 +33,17 @@ func dataSourceGoogleRedisInstanceRead(d *schema.ResourceData, meta interface{}) } d.SetId(id) - return resourceRedisInstanceRead(d, meta) + err = resourceRedisInstanceRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/redis/data_source_redis_instance_test.go b/google/services/redis/data_source_redis_instance_test.go index 62cbb77d557..1372633c4f2 100644 --- a/google/services/redis/data_source_redis_instance_test.go +++ b/google/services/redis/data_source_redis_instance_test.go @@ -32,6 +32,10 @@ func testAccRedisInstanceDatasourceConfig(suffix string) string { resource "google_redis_instance" "redis" { name = "redis-test-%s" memory_size_gb = 1 + + labels = { + my-label = "my-label-value" + } } data "google_redis_instance" "redis" { diff --git a/google/services/redis/resource_redis_instance.go b/google/services/redis/resource_redis_instance.go index 3ccbbfc3748..2221d431f6a 100644 --- a/google/services/redis/resource_redis_instance.go +++ b/google/services/redis/resource_redis_instance.go @@ -93,6 +93,8 @@ func ResourceRedisInstance() *schema.Resource { CustomizeDiff: customdiff.All( customdiff.ForceNewIfChange("redis_version", isRedisVersionDecreasing), + tpgresource.DefaultProviderProject, + tpgresource.SetLabelsDiff, ), Schema: map[string]*schema.Schema{ @@ -157,10 +159,13 @@ instance. If this is provided, CMEK is enabled.`, Description: `An arbitrary and optional user-provided name for the instance.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `Resource labels to represent user provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `Resource labels to represent user provided metadata. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location_id": { Type: schema.TypeString, @@ -426,6 +431,12 @@ For Basic Tier instances, this will always be the same as the instances, this can be either [locationId] or [alternativeLocationId] and can change after a failover event.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "host": { Type: schema.TypeString, Computed: true, @@ -542,6 +553,13 @@ Write requests should target 'port'.`, }, }, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "auth_string": { Type: schema.TypeString, Description: "AUTH String set on the instance. 
This field will only be populated if auth_enabled is true.", @@ -597,12 +615,6 @@ func resourceRedisInstanceCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandRedisInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } redisConfigsProp, err := expandRedisInstanceRedisConfigs(d.Get("redis_configs"), d, config) if err != nil { return err @@ -687,6 +699,12 @@ func resourceRedisInstanceCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("customer_managed_key"); !tpgresource.IsEmptyValue(reflect.ValueOf(customerManagedKeyProp)) && (ok || !reflect.DeepEqual(v, customerManagedKeyProp)) { obj["customerManagedKey"] = customerManagedKeyProp } + labelsProp, err := expandRedisInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceRedisInstanceEncoder(d, meta, obj) if err != nil { @@ -894,6 +912,12 @@ func resourceRedisInstanceRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("customer_managed_key", flattenRedisInstanceCustomerManagedKey(res["customerManagedKey"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenRedisInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenRedisInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -926,12 +950,6 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandRedisInstanceLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } redisConfigsProp, err := expandRedisInstanceRedisConfigs(d.Get("redis_configs"), d, config) if err != nil { return err @@ -974,6 +992,12 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("secondary_ip_range"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, secondaryIpRangeProp)) { obj["secondaryIpRange"] = secondaryIpRangeProp } + labelsProp, err := expandRedisInstanceEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceRedisInstanceEncoder(d, meta, obj) if err != nil { @@ -996,10 +1020,6 @@ func 
resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("redis_configs") { updateMask = append(updateMask, "redisConfigs") } @@ -1027,6 +1047,10 @@ func resourceRedisInstanceUpdate(d *schema.ResourceData, meta interface{}) error if d.HasChange("secondary_ip_range") { updateMask = append(updateMask, "secondaryIpRange") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -1171,10 +1195,10 @@ func resourceRedisInstanceDelete(d *schema.ResourceData, meta interface{}) error func resourceRedisInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/instances/(?P<name>[^/]+)", - "(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<name>[^/]+)", - "(?P<region>[^/]+)/(?P<name>[^/]+)", - "(?P<name>[^/]+)", + "^projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)/instances/(?P<name>[^/]+)$", + "^(?P<project>[^/]+)/(?P<region>[^/]+)/(?P<name>[^/]+)$", + "^(?P<region>[^/]+)/(?P<name>[^/]+)$", + "^(?P<name>[^/]+)$", }, d, config); err != nil { return nil, err } @@ -1222,7 +1246,18 @@ func flattenRedisInstanceHost(v interface{}, d *schema.ResourceData, config *tra } func flattenRedisInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenRedisInstanceRedisConfigs(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -1616,6 +1651,25 @@ func flattenRedisInstanceCustomerManagedKey(v interface{}, d *schema.ResourceDat return v } +func flattenRedisInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenRedisInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandRedisInstanceAlternativeLocationId(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1640,17 +1694,6 @@ func expandRedisInstanceDisplayName(v interface{}, d tpgresource.TerraformResour return v, nil } -func expandRedisInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandRedisInstanceRedisConfigs(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil @@ -1919,6 +1962,17 @@ func expandRedisInstanceCustomerManagedKey(v interface{}, d
tpgresource.Terrafor return v, nil } +func expandRedisInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceRedisInstanceEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { config := meta.(*transport_tpg.Config) region, err := tpgresource.GetRegionFromSchema("region", "location_id", d, config) diff --git a/google/services/redis/resource_redis_instance_generated_test.go b/google/services/redis/resource_redis_instance_generated_test.go index 0a5a6b434a0..95e82a1c792 100644 --- a/google/services/redis/resource_redis_instance_generated_test.go +++ b/google/services/redis/resource_redis_instance_generated_test.go @@ -49,7 +49,7 @@ func TestAccRedisInstance_redisInstanceBasicExample(t *testing.T) { ResourceName: "google_redis_instance.cache", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reserved_ip_range", "region"}, + ImportStateVerifyIgnore: []string{"reserved_ip_range", "region", "labels", "terraform_labels"}, }, }, }) @@ -84,7 +84,7 @@ func TestAccRedisInstance_redisInstanceFullExample(t *testing.T) { ResourceName: "google_redis_instance.cache", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reserved_ip_range", "region"}, + ImportStateVerifyIgnore: []string{"reserved_ip_range", "region", "labels", "terraform_labels"}, }, }, }) @@ -158,7 +158,7 @@ func TestAccRedisInstance_redisInstanceFullWithPersistenceConfigExample(t *testi ResourceName: "google_redis_instance.cache-persis", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reserved_ip_range", "region"}, + ImportStateVerifyIgnore: []string{"reserved_ip_range", "region", "labels", "terraform_labels"}, }, }, }) @@ -185,7 +185,6 @@ func TestAccRedisInstance_redisInstancePrivateServiceExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "redis-private-service"), "random_suffix": acctest.RandString(t, 10), } @@ -201,7 +200,7 @@ func TestAccRedisInstance_redisInstancePrivateServiceExample(t *testing.T) { ResourceName: "google_redis_instance.cache", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reserved_ip_range", "region"}, + ImportStateVerifyIgnore: []string{"reserved_ip_range", "region", "labels", "terraform_labels"}, }, }, }) @@ -217,8 +216,8 @@ func testAccRedisInstance_redisInstancePrivateServiceExample(context map[string] // If this network hasn't been created and you are using this example in your // config, add an additional network resource or change // this from "data"to "resource" -data "google_compute_network" "redis-network" { - name = "%{network_name}" +resource "google_compute_network" "redis-network" { + name = "tf-test-redis-test-network%{random_suffix}" } resource "google_compute_global_address" "service_range" { @@ -226,11 +225,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.redis-network.id + network = google_compute_network.redis-network.id } resource "google_service_networking_connection" "private_service_connection" { - network = 
data.google_compute_network.redis-network.id + network = google_compute_network.redis-network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -243,7 +242,7 @@ resource "google_redis_instance" "cache" { location_id = "us-central1-a" alternative_location_id = "us-central1-f" - authorized_network = data.google_compute_network.redis-network.id + authorized_network = google_compute_network.redis-network.id connect_mode = "PRIVATE_SERVICE_ACCESS" redis_version = "REDIS_4_0" @@ -275,7 +274,7 @@ func TestAccRedisInstance_redisInstanceMrrExample(t *testing.T) { ResourceName: "google_redis_instance.cache", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"reserved_ip_range", "region"}, + ImportStateVerifyIgnore: []string{"reserved_ip_range", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/redis/resource_redis_instance_test.go b/google/services/redis/resource_redis_instance_test.go index 8cd83167d3e..8a40902f0d7 100644 --- a/google/services/redis/resource_redis_instance_test.go +++ b/google/services/redis/resource_redis_instance_test.go @@ -25,17 +25,19 @@ func TestAccRedisInstance_update(t *testing.T) { Config: testAccRedisInstance_update(name, true), }, { - ResourceName: "google_redis_instance.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_redis_instance.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRedisInstance_update2(name, true), }, { - ResourceName: "google_redis_instance.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_redis_instance.test", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccRedisInstance_update2(name, false), diff --git a/google/services/resourcemanager/data_source_google_folder.go b/google/services/resourcemanager/data_source_google_folder.go index cf0472992d6..5ddb5447bf7 100644 --- a/google/services/resourcemanager/data_source_google_folder.go +++ b/google/services/resourcemanager/data_source_google_folder.go @@ -63,13 +63,14 @@ func dataSourceFolderRead(d *schema.ResourceData, meta interface{}) error { return err } - d.SetId(canonicalFolderName(d.Get("folder").(string))) + id := canonicalFolderName(d.Get("folder").(string)) + d.SetId(id) if err := resourceGoogleFolderRead(d, meta); err != nil { return err } // If resource doesn't exist, read will not set ID and we should return error. 
if d.Id() == "" { - return nil + return fmt.Errorf("%s not found", id) } if v, ok := d.GetOk("lookup_organization"); ok && v.(bool) { diff --git a/google/services/resourcemanager/data_source_google_folder_organization_policy.go b/google/services/resourcemanager/data_source_google_folder_organization_policy.go index 33d4afc684b..c4feb461575 100644 --- a/google/services/resourcemanager/data_source_google_folder_organization_policy.go +++ b/google/services/resourcemanager/data_source_google_folder_organization_policy.go @@ -24,7 +24,16 @@ func DataSourceGoogleFolderOrganizationPolicy() *schema.Resource { func datasourceGoogleFolderOrganizationPolicyRead(d *schema.ResourceData, meta interface{}) error { - d.SetId(fmt.Sprintf("%s/%s", d.Get("folder"), d.Get("constraint"))) + id := fmt.Sprintf("%s/%s", d.Get("folder"), d.Get("constraint")) + d.SetId(id) - return resourceGoogleFolderOrganizationPolicyRead(d, meta) + err := resourceGoogleFolderOrganizationPolicyRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/resourcemanager/data_source_google_iam_role.go b/google/services/resourcemanager/data_source_google_iam_role.go index 79f08123b89..345c44bd4b5 100644 --- a/google/services/resourcemanager/data_source_google_iam_role.go +++ b/google/services/resourcemanager/data_source_google_iam_role.go @@ -45,7 +45,7 @@ func dataSourceGoogleIamRoleRead(d *schema.ResourceData, meta interface{}) error roleName := d.Get("name").(string) role, err := config.NewIamClient(userAgent).Roles.Get(roleName).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Error reading IAM Role %s: %s", roleName, err)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Error reading IAM Role %s: %s", roleName, err), roleName) } d.SetId(role.Name) diff --git a/google/services/resourcemanager/data_source_google_organization.go b/google/services/resourcemanager/data_source_google_organization.go index 53d9ebdc712..e326c955913 100644 --- a/google/services/resourcemanager/data_source_google_organization.go +++ b/google/services/resourcemanager/data_source_google_organization.go @@ -105,7 +105,7 @@ func dataSourceOrganizationRead(d *schema.ResourceData, meta interface{}) error Timeout: d.Timeout(schema.TimeoutRead), }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Organization Not Found : %s", v)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Organization Not Found : %s", v), canonicalOrganizationName(v.(string))) } organization = resp diff --git a/google/services/resourcemanager/data_source_google_project.go b/google/services/resourcemanager/data_source_google_project.go index 2be4d00773f..4916292a8b3 100644 --- a/google/services/resourcemanager/data_source_google_project.go +++ b/google/services/resourcemanager/data_source_google_project.go @@ -44,6 +44,10 @@ func datasourceGoogleProjectRead(d *schema.ResourceData, meta interface{}) error return err } + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + if d.Id() == "" { return fmt.Errorf("%s not found or not in ACTIVE state", id) } diff --git a/google/services/resourcemanager/data_source_google_project_organization_policy.go b/google/services/resourcemanager/data_source_google_project_organization_policy.go index e38a6230bf3..003a22e2de1 100644 --- a/google/services/resourcemanager/data_source_google_project_organization_policy.go +++ 
b/google/services/resourcemanager/data_source_google_project_organization_policy.go @@ -24,7 +24,16 @@ func DataSourceGoogleProjectOrganizationPolicy() *schema.Resource { func datasourceGoogleProjectOrganizationPolicyRead(d *schema.ResourceData, meta interface{}) error { - d.SetId(fmt.Sprintf("%s:%s", d.Get("project"), d.Get("constraint"))) + id := fmt.Sprintf("%s:%s", d.Get("project"), d.Get("constraint")) + d.SetId(id) - return resourceGoogleProjectOrganizationPolicyRead(d, meta) + err := resourceGoogleProjectOrganizationPolicyRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/resourcemanager/data_source_google_project_service.go b/google/services/resourcemanager/data_source_google_project_service.go index f053aac54d9..99bfa6c929a 100644 --- a/google/services/resourcemanager/data_source_google_project_service.go +++ b/google/services/resourcemanager/data_source_google_project_service.go @@ -30,5 +30,13 @@ func dataSourceGoogleProjectServiceRead(d *schema.ResourceData, meta interface{} return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceGoogleProjectServiceRead(d, meta) + err = resourceGoogleProjectServiceRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/resourcemanager/data_source_google_project_test.go b/google/services/resourcemanager/data_source_google_project_test.go index dea8cdc70b2..e37139f3600 100644 --- a/google/services/resourcemanager/data_source_google_project_test.go +++ b/google/services/resourcemanager/data_source_google_project_test.go @@ -43,6 +43,9 @@ resource "google_project" "project" { project_id = "%s" name = "%s" org_id = "%s" + labels = { + my-label = "my-label-value" + } } data "google_project" "project" { diff --git a/google/services/resourcemanager/data_source_google_service_account.go b/google/services/resourcemanager/data_source_google_service_account.go index 5d771e0a833..bc17d636dac 100644 --- a/google/services/resourcemanager/data_source_google_service_account.go +++ b/google/services/resourcemanager/data_source_google_service_account.go @@ -61,7 +61,7 @@ func dataSourceGoogleServiceAccountRead(d *schema.ResourceData, meta interface{} sa, err := config.NewIamClient(userAgent).Projects.ServiceAccounts.Get(serviceAccountName).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Service Account %q", serviceAccountName), serviceAccountName) } d.SetId(sa.Name) diff --git a/google/services/resourcemanager/data_source_google_service_account_key.go b/google/services/resourcemanager/data_source_google_service_account_key.go index 0f7d9d46b37..14af5b1932e 100644 --- a/google/services/resourcemanager/data_source_google_service_account_key.go +++ b/google/services/resourcemanager/data_source_google_service_account_key.go @@ -68,7 +68,7 @@ func dataSourceGoogleServiceAccountKeyRead(d *schema.ResourceData, meta interfac // Confirm the service account key exists sak, err := config.NewIamClient(userAgent).Projects.ServiceAccounts.Keys.Get(keyName).PublicKeyType(publicKeyType).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Service Account Key %q", keyName)) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Service Account 
Key %q", keyName), keyName) } d.SetId(sak.Name) diff --git a/google/services/resourcemanager/resource_google_project.go b/google/services/resourcemanager/resource_google_project.go index 38f93c59fdb..dd4ef711d57 100644 --- a/google/services/resourcemanager/resource_google_project.go +++ b/google/services/resourcemanager/resource_google_project.go @@ -13,6 +13,7 @@ import ( "time" "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" tpgcompute "github.com/hashicorp/terraform-provider-google/google/services/compute" @@ -42,6 +43,10 @@ func ResourceGoogleProject() *schema.Resource { Update: resourceGoogleProjectUpdate, Delete: resourceGoogleProjectDelete, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Importer: &schema.ResourceImporter{ State: resourceProjectImportState, }, @@ -105,10 +110,27 @@ func ResourceGoogleProject() *schema.Resource { Description: `The alphanumeric ID of the billing account this project belongs to. The user or service account performing this operation with Terraform must have Billing Account Administrator privileges (roles/billing.admin) in the organization. See Google Cloud Billing API Access Control for more details.`, }, "labels": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the project. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + }, + + "terraform_labels": { Type: schema.TypeMap, - Optional: true, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A set of key/value label pairs to assign to the project.`, }, }, UseJSONNumber: true, @@ -139,8 +161,8 @@ func resourceGoogleProjectCreate(d *schema.ResourceData, meta interface{}) error return err } - if _, ok := d.GetOk("labels"); ok { - project.Labels = tpgresource.ExpandLabels(d) + if _, ok := d.GetOk("effective_labels"); ok { + project.Labels = tpgresource.ExpandEffectiveLabels(d) } var op *cloudresourcemanager.Operation @@ -288,9 +310,15 @@ func resourceGoogleProjectRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("name", p.Name); err != nil { return fmt.Errorf("Error setting name: %s", err) } - if err := d.Set("labels", p.Labels); err != nil { + if err := tpgresource.SetLabels(p.Labels, d, "labels"); err != nil { return fmt.Errorf("Error setting labels: %s", err) } + if err := tpgresource.SetLabels(p.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", p.Labels); err != nil { + return fmt.Errorf("Error setting effective_labels: %s", err) + } if p.Parent != nil { switch p.Parent.Type { @@ -433,8 +461,8 @@ func resourceGoogleProjectUpdate(d *schema.ResourceData, meta interface{}) error } // Project Labels have changed - if ok := d.HasChange("labels"); ok { - p.Labels = 
tpgresource.ExpandLabels(d) + if ok := d.HasChange("effective_labels"); ok { + p.Labels = tpgresource.ExpandEffectiveLabels(d) // Do Update on project if p, err = updateProject(config, d, project_name, userAgent, p); err != nil { diff --git a/google/services/resourcemanager/resource_google_project_iam_custom_role.go b/google/services/resourcemanager/resource_google_project_iam_custom_role.go index f91705479d7..919f1008d84 100644 --- a/google/services/resourcemanager/resource_google_project_iam_custom_role.go +++ b/google/services/resourcemanager/resource_google_project_iam_custom_role.go @@ -6,6 +6,7 @@ import ( "fmt" "strings" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -25,6 +26,10 @@ func ResourceGoogleProjectIamCustomRole() *schema.Resource { State: resourceGoogleProjectIamCustomRoleImport, }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "role_id": { Type: schema.TypeString, diff --git a/google/services/resourcemanager/resource_google_project_service.go b/google/services/resourcemanager/resource_google_project_service.go index 68de973111c..721dcc0396b 100644 --- a/google/services/resourcemanager/resource_google_project_service.go +++ b/google/services/resourcemanager/resource_google_project_service.go @@ -8,6 +8,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" tpgserviceusage "github.com/hashicorp/terraform-provider-google/google/services/serviceusage" @@ -95,6 +96,10 @@ func ResourceGoogleProjectService() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "service": { Type: schema.TypeString, diff --git a/google/services/resourcemanager/resource_google_project_test.go b/google/services/resourcemanager/resource_google_project_test.go index 77fc4975ae2..88fb871331b 100644 --- a/google/services/resourcemanager/resource_google_project_test.go +++ b/google/services/resourcemanager/resource_google_project_test.go @@ -138,7 +138,7 @@ func TestAccProject_labels(t *testing.T) { ResourceName: "google_project.acceptance", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"skip_delete"}, + ImportStateVerifyIgnore: []string{"skip_delete", "labels", "terraform_labels"}, }, // update project with labels { diff --git a/google/services/resourcemanager/resource_google_service_account.go b/google/services/resourcemanager/resource_google_service_account.go index a3a6f63952f..443cee3302a 100644 --- a/google/services/resourcemanager/resource_google_service_account.go +++ b/google/services/resourcemanager/resource_google_service_account.go @@ -11,6 +11,7 @@ import ( transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/terraform-provider-google/google/verify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "google.golang.org/api/iam/v1" @@ -28,6 +29,9 @@ func ResourceGoogleServiceAccount() *schema.Resource { Timeouts: &schema.ResourceTimeout{ Create: 
schema.DefaultTimeout(5 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "email": { Type: schema.TypeString, diff --git a/google/services/resourcemanager/resource_resource_manager_lien.go b/google/services/resourcemanager/resource_resource_manager_lien.go index 9679cc21045..a1c7fe2c9f7 100644 --- a/google/services/resourcemanager/resource_resource_manager_lien.go +++ b/google/services/resourcemanager/resource_resource_manager_lien.go @@ -310,7 +310,7 @@ func resourceResourceManagerLienDelete(d *schema.ResourceData, meta interface{}) func resourceResourceManagerLienImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P[^/]+)", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/secretmanager/data_source_secret_manager_secret.go b/google/services/secretmanager/data_source_secret_manager_secret.go index e6cd825327b..770cdb2eef1 100644 --- a/google/services/secretmanager/data_source_secret_manager_secret.go +++ b/google/services/secretmanager/data_source_secret_manager_secret.go @@ -28,5 +28,21 @@ func dataSourceSecretManagerSecretRead(d *schema.ResourceData, meta interface{}) return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceSecretManagerSecretRead(d, meta) + err = resourceSecretManagerSecretRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if err := tpgresource.SetDataSourceAnnotations(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/secretmanager/data_source_secret_manager_secret_test.go b/google/services/secretmanager/data_source_secret_manager_secret_test.go index e6722792786..6656fe8ab97 100644 --- a/google/services/secretmanager/data_source_secret_manager_secret_test.go +++ b/google/services/secretmanager/data_source_secret_manager_secret_test.go @@ -24,7 +24,11 @@ func TestAccDataSourceSecretManagerSecret_basic(t *testing.T) { { Config: testAccDataSourceSecretManagerSecret_basic(context), Check: resource.ComposeTestCheckFunc( - acctest.CheckDataSourceStateMatchesResourceState("data.google_secret_manager_secret.foo", "google_secret_manager_secret.bar"), + acctest.CheckDataSourceStateMatchesResourceStateWithIgnores( + "data.google_secret_manager_secret.foo", + "google_secret_manager_secret.bar", + map[string]struct{}{"zone": {}}, + ), ), }, }, @@ -40,6 +44,10 @@ resource "google_secret_manager_secret" "bar" { label = "my-label" } + annotations = { + annotation = "my-annotation" + } + replication { user_managed { replicas { diff --git a/google/services/secretmanager/resource_secret_manager_secret.go b/google/services/secretmanager/resource_secret_manager_secret.go index fd84ac3e019..3f5d7f8b1bc 100644 --- a/google/services/secretmanager/resource_secret_manager_secret.go +++ b/google/services/secretmanager/resource_secret_manager_secret.go @@ -77,6 +77,9 @@ func ResourceSecretManagerSecret() *schema.Resource { CustomizeDiff: customdiff.All( secretManagerSecretAutoCustomizeDiff, + tpgresource.SetLabelsDiff, + tpgresource.SetAnnotationsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -92,6 +95,7 @@ after the Secret has been created.`, "auto": { Type: schema.TypeList, Optional: true, + 
ForceNew: true, Description: `The Secret will automatically be replicated without any restrictions.`, MaxItems: 1, Elem: &schema.Resource{ @@ -115,14 +119,7 @@ encryption is used.`, }, }, }, - ExactlyOneOf: []string{"replication.0.automatic", "replication.0.user_managed", "replication.0.auto"}, - }, - "automatic": { - Type: schema.TypeBool, - Optional: true, - Deprecated: "`automatic` is deprecated and will be removed in a future major release. Use `auto` instead.", - Description: `The Secret will automatically be replicated without any restrictions.`, - ExactlyOneOf: []string{"replication.0.automatic", "replication.0.user_managed", "replication.0.auto"}, + ExactlyOneOf: []string{"replication.0.user_managed", "replication.0.auto"}, }, "user_managed": { Type: schema.TypeList, @@ -166,7 +163,7 @@ encryption is used.`, }, }, }, - ExactlyOneOf: []string{"replication.0.automatic", "replication.0.user_managed", "replication.0.auto"}, + ExactlyOneOf: []string{"replication.0.user_managed", "replication.0.auto"}, }, }, }, @@ -217,7 +214,11 @@ and must conform to the following PCRE regular expression: [\p{Ll}\p{Lo}\p{N}_-] No more than 64 labels can be assigned to a given resource. An object containing a list of "key": value pairs. Example: -{ "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +{ "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "rotation": { @@ -285,12 +286,31 @@ An object containing a list of "key": value pairs. Example: Computed: true, Description: `The time at which the Secret was created.`, }, + "effective_annotations": { + Type: schema.TypeMap, + Computed: true, + Description: `All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, Description: `The resource name of the Secret. 
Format: 'projects/{{project}}/secrets/{{secret_id}}'`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -310,18 +330,6 @@ func resourceSecretManagerSecretCreate(d *schema.ResourceData, meta interface{}) } obj := make(map[string]interface{}) - labelsProp, err := expandSecretManagerSecretLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandSecretManagerSecretAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } versionAliasesProp, err := expandSecretManagerSecretVersionAliases(d.Get("version_aliases"), d, config) if err != nil { return err @@ -358,6 +366,18 @@ func resourceSecretManagerSecretCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("rotation"); !tpgresource.IsEmptyValue(reflect.ValueOf(rotationProp)) && (ok || !reflect.DeepEqual(v, rotationProp)) { obj["rotation"] = rotationProp } + labelsProp, err := expandSecretManagerSecretEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandSecretManagerSecretEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(annotationsProp)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{SecretManagerBasePath}}projects/{{project}}/secrets?secretId={{secret_id}}") if err != nil { @@ -473,6 +493,15 @@ func resourceSecretManagerSecretRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("rotation", flattenSecretManagerSecretRotation(res["rotation"], d, config)); err != nil { return fmt.Errorf("Error reading Secret: %s", err) } + if err := d.Set("terraform_labels", flattenSecretManagerSecretTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Secret: %s", err) + } + if err := d.Set("effective_labels", flattenSecretManagerSecretEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Secret: %s", err) + } + if err := d.Set("effective_annotations", flattenSecretManagerSecretEffectiveAnnotations(res["annotations"], d, config)); err != nil { + return fmt.Errorf("Error reading Secret: %s", err) + } return nil } @@ -493,18 +522,6 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandSecretManagerSecretLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); 
!tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } - annotationsProp, err := expandSecretManagerSecretAnnotations(d.Get("annotations"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { - obj["annotations"] = annotationsProp - } versionAliasesProp, err := expandSecretManagerSecretVersionAliases(d.Get("version_aliases"), d, config) if err != nil { return err @@ -529,6 +546,18 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("rotation"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, rotationProp)) { obj["rotation"] = rotationProp } + labelsProp, err := expandSecretManagerSecretEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } + annotationsProp, err := expandSecretManagerSecretEffectiveAnnotations(d.Get("effective_annotations"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_annotations"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, annotationsProp)) { + obj["annotations"] = annotationsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{SecretManagerBasePath}}projects/{{project}}/secrets/{{secret_id}}") if err != nil { @@ -538,14 +567,6 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) log.Printf("[DEBUG] Updating Secret %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - - if d.HasChange("annotations") { - updateMask = append(updateMask, "annotations") - } - if d.HasChange("version_aliases") { updateMask = append(updateMask, "versionAliases") } @@ -561,6 +582,14 @@ func resourceSecretManagerSecretUpdate(d *schema.ResourceData, meta interface{}) if d.HasChange("rotation") { updateMask = append(updateMask, "rotation") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } + + if d.HasChange("effective_annotations") { + updateMask = append(updateMask, "annotations") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -658,9 +687,9 @@ func resourceSecretManagerSecretDelete(d *schema.ResourceData, meta interface{}) func resourceSecretManagerSecretImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/secrets/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/secrets/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -684,11 +713,33 @@ func flattenSecretManagerSecretCreateTime(v interface{}, d *schema.ResourceData, } func flattenSecretManagerSecretLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range 
l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenSecretManagerSecretAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("annotations"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenSecretManagerSecretVersionAliases(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -704,22 +755,12 @@ func flattenSecretManagerSecretReplication(v interface{}, d *schema.ResourceData return nil } transformed := make(map[string]interface{}) - _, ok := d.GetOk("replication.0.automatic") - if ok { - transformed["automatic"] = - flattenSecretManagerSecretReplicationAutomatic(original["automatic"], d, config) - } else { - transformed["auto"] = - flattenSecretManagerSecretReplicationAuto(original["automatic"], d, config) - } + transformed["auto"] = + flattenSecretManagerSecretReplicationAuto(original["automatic"], d, config) transformed["user_managed"] = flattenSecretManagerSecretReplicationUserManaged(original["userManaged"], d, config) return []interface{}{transformed} } -func flattenSecretManagerSecretReplicationAutomatic(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v != nil -} - func flattenSecretManagerSecretReplicationAuto(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { return nil @@ -849,26 +890,27 @@ func flattenSecretManagerSecretRotationRotationPeriod(v interface{}, d *schema.R return v } -func expandSecretManagerSecretLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenSecretManagerSecretTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed } -func expandSecretManagerSecretAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil +func flattenSecretManagerSecretEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func flattenSecretManagerSecretEffectiveAnnotations(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandSecretManagerSecretVersionAliases(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { @@ -891,22 +933,11 @@ func expandSecretManagerSecretReplication(v interface{}, d tpgresource.Terraform original := raw.(map[string]interface{}) transformed := make(map[string]interface{}) - if _, ok := d.GetOk("replication.0.automatic"); ok { - transformedAutomatic, err := 
expandSecretManagerSecretReplicationAutomatic(original["automatic"], d, config) - if err != nil { - return nil, err - } else if val := reflect.ValueOf(transformedAutomatic); val.IsValid() && !tpgresource.IsEmptyValue(val) { - transformed["automatic"] = transformedAutomatic - } - } - - if _, ok := d.GetOk("replication.0.auto"); ok { - transformedAuto, err := expandSecretManagerSecretReplicationAuto(original["auto"], d, config) - if err != nil { - return nil, err - } else { - transformed["automatic"] = transformedAuto - } + transformedAuto, err := expandSecretManagerSecretReplicationAuto(original["auto"], d, config) + if err != nil { + return nil, err + } else { + transformed["automatic"] = transformedAuto } transformedUserManaged, err := expandSecretManagerSecretReplicationUserManaged(original["user_managed"], d, config) @@ -919,14 +950,6 @@ func expandSecretManagerSecretReplication(v interface{}, d tpgresource.Terraform return transformed, nil } -func expandSecretManagerSecretReplicationAutomatic(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - if v == nil || !v.(bool) { - return nil, nil - } - - return struct{}{}, nil -} - func expandSecretManagerSecretReplicationAuto(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 { @@ -1116,3 +1139,25 @@ func expandSecretManagerSecretRotationNextRotationTime(v interface{}, d tpgresou func expandSecretManagerSecretRotationRotationPeriod(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandSecretManagerSecretEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + +func expandSecretManagerSecretEffectiveAnnotations(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/secretmanager/resource_secret_manager_secret_generated_test.go b/google/services/secretmanager/resource_secret_manager_secret_generated_test.go index 3f573bcf69c..5d7d0eb01a9 100644 --- a/google/services/secretmanager/resource_secret_manager_secret_generated_test.go +++ b/google/services/secretmanager/resource_secret_manager_secret_generated_test.go @@ -49,7 +49,7 @@ func TestAccSecretManagerSecret_secretConfigBasicExample(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl", "secret_id"}, + ImportStateVerifyIgnore: []string{"ttl", "secret_id", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -97,7 +97,7 @@ func TestAccSecretManagerSecret_secretWithAnnotationsExample(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-with-annotations", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl", "secret_id"}, + ImportStateVerifyIgnore: []string{"ttl", "secret_id", "labels", "annotations", "terraform_labels"}, }, }, }) @@ -147,7 +147,7 @@ func 
TestAccSecretManagerSecret_secretWithAutomaticCmekExample(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-with-automatic-cmek", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl", "secret_id"}, + ImportStateVerifyIgnore: []string{"ttl", "secret_id", "labels", "annotations", "terraform_labels"}, }, }, }) diff --git a/google/services/secretmanager/resource_secret_manager_secret_test.go b/google/services/secretmanager/resource_secret_manager_secret_test.go index e69def55ae8..0ce2e483007 100644 --- a/google/services/secretmanager/resource_secret_manager_secret_test.go +++ b/google/services/secretmanager/resource_secret_manager_secret_test.go @@ -30,7 +30,7 @@ func TestAccSecretManagerSecret_import(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, }, }) @@ -59,7 +59,7 @@ func TestAccSecretManagerSecret_cmek(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, }, }) @@ -84,7 +84,7 @@ func TestAccSecretManagerSecret_annotationsUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-with-annotations", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels", "annotations"}, }, { Config: testAccSecretManagerSecret_annotationsUpdate(context), @@ -93,7 +93,7 @@ func TestAccSecretManagerSecret_annotationsUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-with-annotations", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels", "annotations"}, }, { Config: testAccSecretManagerSecret_annotationsBasic(context), @@ -102,7 +102,7 @@ func TestAccSecretManagerSecret_annotationsUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-with-annotations", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels", "annotations"}, }, }, }) @@ -127,7 +127,7 @@ func TestAccSecretManagerSecret_versionAliasesUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretManagerSecret_versionAliasesBasic(context), @@ -136,7 +136,7 @@ func TestAccSecretManagerSecret_versionAliasesUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretManagerSecret_versionAliasesUpdate(context), @@ -145,7 +145,7 @@ func TestAccSecretManagerSecret_versionAliasesUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: 
testAccSecretManagerSecret_basicWithSecretVersions(context), @@ -154,7 +154,7 @@ func TestAccSecretManagerSecret_versionAliasesUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, }, }) @@ -185,7 +185,7 @@ func TestAccSecretManagerSecret_userManagedCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretMangerSecret_userManagedCmekUpdate(context), @@ -194,7 +194,7 @@ func TestAccSecretManagerSecret_userManagedCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretMangerSecret_userManagedCmekUpdate2(context), @@ -203,7 +203,7 @@ func TestAccSecretManagerSecret_userManagedCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretMangerSecret_userManagedCmekBasic(context), @@ -212,7 +212,7 @@ func TestAccSecretManagerSecret_userManagedCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, }, }) @@ -235,15 +235,6 @@ func TestAccSecretManagerSecret_automaticCmekUpdate(t *testing.T) { ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), CheckDestroy: testAccCheckSecretManagerSecretDestroyProducer(t), Steps: []resource.TestStep{ - { - Config: testAccSecretMangerSecret_automaticBasic(context), - }, - { - ResourceName: "google_secret_manager_secret.secret-basic", - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl", "replication.0.automatic", "replication.0.auto"}, - }, { Config: testAccSecretMangerSecret_automaticCmekBasic(context), }, @@ -251,7 +242,7 @@ func TestAccSecretManagerSecret_automaticCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretMangerSecret_automaticCmekUpdate(context), @@ -260,7 +251,7 @@ func TestAccSecretManagerSecret_automaticCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: testAccSecretMangerSecret_automaticCmekUpdate2(context), @@ -269,7 +260,7 @@ func TestAccSecretManagerSecret_automaticCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, { Config: 
testAccSecretMangerSecret_automaticCmekBasic(context), @@ -278,7 +269,7 @@ func TestAccSecretManagerSecret_automaticCmekUpdate(t *testing.T) { ResourceName: "google_secret_manager_secret.secret-basic", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"ttl"}, + ImportStateVerifyIgnore: []string{"ttl", "labels", "terraform_labels"}, }, }, }) @@ -752,38 +743,6 @@ resource "google_secret_manager_secret" "secret-basic" { `, context) } -func testAccSecretMangerSecret_automaticBasic(context map[string]interface{}) string { - return acctest.Nprintf(` -data "google_project" "project" { - project_id = "%{pid}" -} -resource "google_kms_crypto_key_iam_member" "kms-secret-binding-1" { - crypto_key_id = "%{kms_key_name_1}" - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-secretmanager.iam.gserviceaccount.com" -} -resource "google_kms_crypto_key_iam_member" "kms-secret-binding-2" { - crypto_key_id = "%{kms_key_name_2}" - role = "roles/cloudkms.cryptoKeyEncrypterDecrypter" - member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-secretmanager.iam.gserviceaccount.com" -} -resource "google_secret_manager_secret" "secret-basic" { - secret_id = "tf-test-secret-%{random_suffix}" - - labels = { - label = "my-label" - } - replication { - automatic = true - } - depends_on = [ - google_kms_crypto_key_iam_member.kms-secret-binding-1, - google_kms_crypto_key_iam_member.kms-secret-binding-2, - ] -} -`, context) -} - func testAccSecretMangerSecret_automaticCmekBasic(context map[string]interface{}) string { return acctest.Nprintf(` data "google_project" "project" { diff --git a/google/services/securitycenter/resource_scc_folder_custom_module.go b/google/services/securitycenter/resource_scc_folder_custom_module.go index 51c1343341f..e16909a5c55 100644 --- a/google/services/securitycenter/resource_scc_folder_custom_module.go +++ b/google/services/securitycenter/resource_scc_folder_custom_module.go @@ -499,8 +499,8 @@ func resourceSecurityCenterFolderCustomModuleDelete(d *schema.ResourceData, meta func resourceSecurityCenterFolderCustomModuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "folders/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^folders/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/securitycenter/resource_scc_organization_custom_module.go b/google/services/securitycenter/resource_scc_organization_custom_module.go index 5934d6b3efb..540b79a42e3 100644 --- a/google/services/securitycenter/resource_scc_organization_custom_module.go +++ b/google/services/securitycenter/resource_scc_organization_custom_module.go @@ -499,8 +499,8 @@ func resourceSecurityCenterOrganizationCustomModuleDelete(d *schema.ResourceData func resourceSecurityCenterOrganizationCustomModuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "organizations/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^organizations/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, 
err } diff --git a/google/services/securitycenter/resource_scc_project_custom_module.go b/google/services/securitycenter/resource_scc_project_custom_module.go index c140db9c93a..b559878b385 100644 --- a/google/services/securitycenter/resource_scc_project_custom_module.go +++ b/google/services/securitycenter/resource_scc_project_custom_module.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceSecurityCenterProjectCustomModule() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "custom_config": { Type: schema.TypeList, @@ -527,9 +532,9 @@ func resourceSecurityCenterProjectCustomModuleDelete(d *schema.ResourceData, met func resourceSecurityCenterProjectCustomModuleImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/securityHealthAnalyticsSettings/customModules/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/servicenetworking/resource_google_service_networking_peered_dns_domain.go b/google/services/servicenetworking/resource_google_service_networking_peered_dns_domain.go index b27f8dbb610..e11fa0c4ea0 100644 --- a/google/services/servicenetworking/resource_google_service_networking_peered_dns_domain.go +++ b/google/services/servicenetworking/resource_google_service_networking_peered_dns_domain.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "google.golang.org/api/servicenetworking/v1" ) @@ -32,6 +33,10 @@ func ResourceGoogleServiceNetworkingPeeredDNSDomain() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "project": { Type: schema.TypeString, diff --git a/google/services/servicenetworking/resource_service_networking_connection.go b/google/services/servicenetworking/resource_service_networking_connection.go index 117d2217ae5..7346d7e08fd 100644 --- a/google/services/servicenetworking/resource_service_networking_connection.go +++ b/google/services/servicenetworking/resource_service_networking_connection.go @@ -10,13 +10,11 @@ import ( "strings" "time" - tpgcompute "github.com/hashicorp/terraform-provider-google/google/services/compute" "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" - "google.golang.org/api/compute/v1" "google.golang.org/api/servicenetworking/v1" ) @@ -96,27 +94,16 @@ func resourceServiceNetworkingConnectionCreate(d *schema.ResourceData, meta inte project := networkFieldValue.Project parentService := 
formatParentService(d.Get("service").(string)) - // We use Patch instead of Create, because we're getting - // "Error waiting for Create Service Networking Connection: - // Error code 9, message: Cannot modify allocated ranges in - // CreateConnection. Please use UpdateConnection." - // if we're creating peerings to more than one VPC (like two - // CloudSQL instances within one project, peered with two - // clusters.) - // - // This is a workaround for: - // https://issuetracker.google.com/issues/131908322 - // - // The API docs don't specify that you can do connections/-, - // but that's what gcloud does, and it's easier than grabbing - // the connection name. + + // There is no blocker to use Create method, as the bug in CloudSQL has been fixed (https://b.corp.google.com/issues/123276199). + // Read more in https://stackoverflow.com/questions/55135559/unable-to-recreate-private-service-access-on-gcp // err == nil indicates that the billing_project value was found if bp, err := tpgresource.GetBillingProject(d, config); err == nil { project = bp } - createCall := config.NewServiceNetworkingClient(userAgent).Services.Connections.Patch(parentService+"/connections/-", connection).UpdateMask("reservedPeeringRanges").Force(true) + createCall := config.NewServiceNetworkingClient(userAgent).Services.Connections.Create(parentService, connection) if config.UserProjectOverride { createCall.Header().Add("X-Goog-User-Project", project) } @@ -274,42 +261,35 @@ func resourceServiceNetworkingConnectionDelete(d *schema.ResourceData, meta inte return err } - obj := make(map[string]interface{}) - peering := d.Get("peering").(string) - obj["name"] = peering - url := fmt.Sprintf("%s%s/removePeering", config.ComputeBasePath, serviceNetworkingNetworkName) - networkFieldValue, err := tpgresource.ParseNetworkFieldValue(network, d, config) if err != nil { return errwrap.Wrapf("Failed to retrieve network field value, err: {{err}}", err) } project := networkFieldValue.Project - res, err := transport_tpg.SendRequest(transport_tpg.SendRequestOptions{ - Config: config, - Method: "POST", - Project: project, - RawURL: url, - UserAgent: userAgent, - Body: obj, - Timeout: d.Timeout(schema.TimeoutDelete), - }) + connectionId, err := parseConnectionId(d.Id()) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("ServiceNetworkingConnection %q", d.Id())) + return errwrap.Wrapf("Unable to parse Service Networking Connection id, err: {{err}}", err) } + parentService := formatParentService(connectionId.Service) - op := &compute.Operation{} - err = tpgresource.Convert(res, op) - if err != nil { - return err + deleteConnectionRequest := &servicenetworking.DeleteConnectionRequest{ + ConsumerNetwork: serviceNetworkingNetworkName, } - err = tpgcompute.ComputeOperationWaitTime( - config, op, project, "Updating Network", userAgent, d.Timeout(schema.TimeoutDelete)) + deleteCall := config.NewServiceNetworkingClient(userAgent).Services.Connections.DeleteConnection(parentService+"/connections/servicenetworking-googleapis-com", deleteConnectionRequest) + if config.UserProjectOverride { + deleteCall.Header().Add("X-Goog-User-Project", project) + } + op, err := deleteCall.Do() if err != nil { return err } + if err := ServiceNetworkingOperationWaitTime(config, op, "Delete Service Networking Connection", userAgent, project, d.Timeout(schema.TimeoutCreate)); err != nil { + return errwrap.Wrapf("Unable to remove Service Networking Connection, err: {{err}}", err) + } + d.SetId("") log.Printf("[INFO] Service network 
connection removed.") diff --git a/google/services/servicenetworking/resource_service_networking_connection_test.go b/google/services/servicenetworking/resource_service_networking_connection_test.go index 6cb7fc99106..c38c9a9c24f 100644 --- a/google/services/servicenetworking/resource_service_networking_connection_test.go +++ b/google/services/servicenetworking/resource_service_networking_connection_test.go @@ -15,7 +15,7 @@ import ( func TestAccServiceNetworkingConnection_create(t *testing.T) { t.Parallel() - network := acctest.BootstrapSharedTestNetwork(t, "service-networking-connection-create") + network := fmt.Sprintf("tf-test-service-networking-connection-create-%s", acctest.RandString(t, 10)) addr := fmt.Sprintf("tf-test-%s", acctest.RandString(t, 10)) service := "servicenetworking.googleapis.com" @@ -39,7 +39,7 @@ func TestAccServiceNetworkingConnection_create(t *testing.T) { func TestAccServiceNetworkingConnection_update(t *testing.T) { t.Parallel() - network := acctest.BootstrapSharedTestNetwork(t, "service-networking-connection-update") + network := fmt.Sprintf("tf-test-service-networking-connection-update-%s", acctest.RandString(t, 10)) addr1 := fmt.Sprintf("tf-test-%s", acctest.RandString(t, 10)) addr2 := fmt.Sprintf("tf-test-%s", acctest.RandString(t, 10)) service := "servicenetworking.googleapis.com" @@ -96,7 +96,7 @@ func testServiceNetworkingConnectionDestroy(t *testing.T, parent, network string func testAccServiceNetworkingConnection(networkName, addressRangeName, serviceName string) string { return fmt.Sprintf(` -data "google_compute_network" "servicenet" { +resource "google_compute_network" "servicenet" { name = "%s" } @@ -105,11 +105,11 @@ resource "google_compute_global_address" "foobar" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.servicenet.self_link + network = google_compute_network.servicenet.self_link } resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link + network = google_compute_network.servicenet.self_link service = "%s" reserved_peering_ranges = [google_compute_global_address.foobar.name] } diff --git a/google/services/sourcerepo/data_source_sourcerepo_repository.go b/google/services/sourcerepo/data_source_sourcerepo_repository.go index 5c27099ccbf..03484b9dabe 100644 --- a/google/services/sourcerepo/data_source_sourcerepo_repository.go +++ b/google/services/sourcerepo/data_source_sourcerepo_repository.go @@ -33,5 +33,13 @@ func dataSourceGoogleSourceRepoRepositoryRead(d *schema.ResourceData, meta inter } d.SetId(id) - return resourceSourceRepoRepositoryRead(d, meta) + err = resourceSourceRepoRepositoryRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/sourcerepo/resource_sourcerepo_repository.go b/google/services/sourcerepo/resource_sourcerepo_repository.go index 40f1d7b09a9..227407e260f 100644 --- a/google/services/sourcerepo/resource_sourcerepo_repository.go +++ b/google/services/sourcerepo/resource_sourcerepo_repository.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -66,6 +67,10 @@ func ResourceSourceRepoRepository() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + 
tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -372,8 +377,8 @@ func resourceSourceRepoRepositoryDelete(d *schema.ResourceData, meta interface{} func resourceSourceRepoRepositoryImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/repos/(?P.+)", - "(?P.+)", + "^projects/(?P[^/]+)/repos/(?P.+)$", + "^(?P.+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/spanner/data_source_spanner_instance.go b/google/services/spanner/data_source_spanner_instance.go index 5b77e82f547..255079f1504 100644 --- a/google/services/spanner/data_source_spanner_instance.go +++ b/google/services/spanner/data_source_spanner_instance.go @@ -34,5 +34,17 @@ func dataSourceSpannerInstanceRead(d *schema.ResourceData, meta interface{}) err } d.SetId(id) - return resourceSpannerInstanceRead(d, meta) + err = resourceSpannerInstanceRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/spanner/resource_spanner_database.go b/google/services/spanner/resource_spanner_database.go index 4ebf594ddee..50a5d375515 100644 --- a/google/services/spanner/resource_spanner_database.go +++ b/google/services/spanner/resource_spanner_database.go @@ -140,6 +140,7 @@ func ResourceSpannerDatabase() *schema.Resource { CustomizeDiff: customdiff.All( resourceSpannerDBDdlCustomDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -797,10 +798,10 @@ func resourceSpannerDatabaseDelete(d *schema.ResourceData, meta interface{}) err func resourceSpannerDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/instances/(?P[^/]+)/databases/(?P[^/]+)", - "instances/(?P[^/]+)/databases/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/instances/(?P[^/]+)/databases/(?P[^/]+)$", + "^instances/(?P[^/]+)/databases/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/spanner/resource_spanner_instance.go b/google/services/spanner/resource_spanner_instance.go index b3fe2d0d384..594c7c88be2 100644 --- a/google/services/spanner/resource_spanner_instance.go +++ b/google/services/spanner/resource_spanner_instance.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -107,6 +108,11 @@ func ResourceSpannerInstance() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "config": { Type: schema.TypeString, @@ -143,7 +149,11 @@ If not provided, a random string starting with 'tf-' will be selected.`, Type: schema.TypeMap, Optional: true, Description: `An object containing a list of "key": value pairs. 
-Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.`, +Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, Elem: &schema.Schema{Type: schema.TypeString}, }, "num_nodes": { @@ -162,11 +172,24 @@ must be present in terraform.`, or node_count must be present in terraform.`, ExactlyOneOf: []string{"num_nodes", "processing_units"}, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "state": { Type: schema.TypeString, Computed: true, Description: `Instance status: 'CREATING' or 'READY'.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "force_destroy": { Type: schema.TypeBool, Optional: true, @@ -223,10 +246,10 @@ func resourceSpannerInstanceCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("processing_units"); !tpgresource.IsEmptyValue(reflect.ValueOf(processingUnitsProp)) && (ok || !reflect.DeepEqual(v, processingUnitsProp)) { obj["processingUnits"] = processingUnitsProp } - labelsProp, err := expandSpannerInstanceLabels(d.Get("labels"), d, config) + labelsProp, err := expandSpannerInstanceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -395,6 +418,12 @@ func resourceSpannerInstanceRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("state", flattenSpannerInstanceState(res["state"], d, config)); err != nil { return fmt.Errorf("Error reading Instance: %s", err) } + if err := d.Set("terraform_labels", flattenSpannerInstanceTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } + if err := d.Set("effective_labels", flattenSpannerInstanceEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Instance: %s", err) + } return nil } @@ -433,10 +462,10 @@ func resourceSpannerInstanceUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("processing_units"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, processingUnitsProp)) { obj["processingUnits"] = processingUnitsProp } - labelsProp, err := expandSpannerInstanceLabels(d.Get("labels"), d, config) + labelsProp, err := expandSpannerInstanceEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } 
@@ -564,9 +593,9 @@ func resourceSpannerInstanceDelete(d *schema.ResourceData, meta interface{}) err func resourceSpannerInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -639,13 +668,43 @@ func flattenSpannerInstanceProcessingUnits(v interface{}, d *schema.ResourceData } func flattenSpannerInstanceLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenSpannerInstanceState(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } +func flattenSpannerInstanceTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenSpannerInstanceEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandSpannerInstanceName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -676,7 +735,7 @@ func expandSpannerInstanceProcessingUnits(v interface{}, d tpgresource.Terraform return v, nil } -func expandSpannerInstanceLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandSpannerInstanceEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/spanner/resource_spanner_instance_generated_test.go b/google/services/spanner/resource_spanner_instance_generated_test.go index cb4cf9f65eb..bc667b5a3db 100644 --- a/google/services/spanner/resource_spanner_instance_generated_test.go +++ b/google/services/spanner/resource_spanner_instance_generated_test.go @@ -50,7 +50,7 @@ func TestAccSpannerInstance_spannerInstanceBasicExample(t *testing.T) { ResourceName: "google_spanner_instance.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"config"}, + ImportStateVerifyIgnore: []string{"config", "labels", "terraform_labels"}, }, }, }) @@ -89,7 +89,7 @@ func TestAccSpannerInstance_spannerInstanceProcessingUnitsExample(t *testing.T) ResourceName: "google_spanner_instance.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"config"}, + ImportStateVerifyIgnore: []string{"config", "labels", "terraform_labels"}, }, }, }) @@ -128,7 +128,7 @@ func TestAccSpannerInstance_spannerInstanceMultiRegionalExample(t *testing.T) { ResourceName: "google_spanner_instance.example", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"config"}, + ImportStateVerifyIgnore: []string{"config", "labels", 
"terraform_labels"}, }, }, }) diff --git a/google/services/spanner/resource_spanner_instance_test.go b/google/services/spanner/resource_spanner_instance_test.go index 917d05a2b80..436f1b2abfd 100644 --- a/google/services/spanner/resource_spanner_instance_test.go +++ b/google/services/spanner/resource_spanner_instance_test.go @@ -97,17 +97,19 @@ func TestAccSpannerInstance_update(t *testing.T) { Config: testAccSpannerInstance_update(dName1, 1, false), }, { - ResourceName: "google_spanner_instance.updater", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_spanner_instance.updater", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, { Config: testAccSpannerInstance_update(dName2, 2, true), }, { - ResourceName: "google_spanner_instance.updater", - ImportState: true, - ImportStateVerify: true, + ResourceName: "google_spanner_instance.updater", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"labels", "terraform_labels"}, }, }, }) diff --git a/google/services/sql/data_source_sql_database.go b/google/services/sql/data_source_sql_database.go index 037739c98c6..4fee85d0d86 100644 --- a/google/services/sql/data_source_sql_database.go +++ b/google/services/sql/data_source_sql_database.go @@ -29,11 +29,15 @@ func dataSourceSqlDatabaseRead(d *schema.ResourceData, meta interface{}) error { if err != nil { return fmt.Errorf("Error fetching project for Database: %s", err) } - d.SetId(fmt.Sprintf("projects/%s/instances/%s/databases/%s", project, d.Get("instance").(string), d.Get("name").(string))) + id := fmt.Sprintf("projects/%s/instances/%s/databases/%s", project, d.Get("instance").(string), d.Get("name").(string)) + d.SetId(id) err = resourceSQLDatabaseRead(d, meta) if err != nil { return err } + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } if err := d.Set("deletion_policy", nil); err != nil { return fmt.Errorf("Error setting deletion_policy: %s", err) } diff --git a/google/services/sql/data_source_sql_database_instance.go b/google/services/sql/data_source_sql_database_instance.go index 81e2d15fdd8..cc44717538c 100644 --- a/google/services/sql/data_source_sql_database_instance.go +++ b/google/services/sql/data_source_sql_database_instance.go @@ -3,6 +3,8 @@ package sql import ( + "fmt" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" ) @@ -20,7 +22,15 @@ func DataSourceSqlDatabaseInstance() *schema.Resource { } func dataSourceSqlDatabaseInstanceRead(d *schema.ResourceData, meta interface{}) error { + id := d.Get("name").(string) + err := resourceSqlDatabaseInstanceRead(d, meta) + if err != nil { + return err + } - return resourceSqlDatabaseInstanceRead(d, meta) + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/sql/data_source_sql_databases.go b/google/services/sql/data_source_sql_databases.go index 09482be1b57..cee05a66418 100644 --- a/google/services/sql/data_source_sql_databases.go +++ b/google/services/sql/data_source_sql_databases.go @@ -62,7 +62,7 @@ func dataSourceSqlDatabasesRead(d *schema.ResourceData, meta interface{}) error }) if err != nil { - return transport_tpg.HandleNotFoundError(err, d, fmt.Sprintf("Databases in %q instance", d.Get("instance").(string))) + return transport_tpg.HandleDataSourceNotFoundError(err, d, fmt.Sprintf("Databases in %q instance", d.Get("instance").(string)), fmt.Sprintf("Databases in %q 
instance", d.Get("instance").(string))) } flattenedDatabases := flattenDatabases(databases.Items) diff --git a/google/services/sql/resource_sql_database.go b/google/services/sql/resource_sql_database.go index ea46ef8cb97..471d069b282 100644 --- a/google/services/sql/resource_sql_database.go +++ b/google/services/sql/resource_sql_database.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -46,6 +47,10 @@ func ResourceSQLDatabase() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "instance": { Type: schema.TypeString, @@ -420,11 +425,11 @@ func resourceSQLDatabaseDelete(d *schema.ResourceData, meta interface{}) error { func resourceSQLDatabaseImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/instances/(?P[^/]+)/databases/(?P[^/]+)", - "instances/(?P[^/]+)/databases/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/instances/(?P[^/]+)/databases/(?P[^/]+)$", + "^instances/(?P[^/]+)/databases/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/sql/resource_sql_database_instance.go b/google/services/sql/resource_sql_database_instance.go index 0a46a0d3e5d..c27c233dbc3 100644 --- a/google/services/sql/resource_sql_database_instance.go +++ b/google/services/sql/resource_sql_database_instance.go @@ -46,6 +46,21 @@ var sqlDatabaseAuthorizedNetWorkSchemaElem *schema.Resource = &schema.Resource{ }, } +var sqlDatabaseFlagSchemaElem *schema.Resource = &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeString, + Required: true, + Description: `Value of the flag.`, + }, + "name": { + Type: schema.TypeString, + Required: true, + Description: `Name of the flag.`, + }, + }, +} + var ( backupConfigurationKeys = []string{ "settings.0.backup_configuration.0.binary_log_enabled", @@ -119,6 +134,7 @@ func ResourceSqlDatabaseInstance() *schema.Resource { }, CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, customdiff.ForceNewIfChange("settings.0.disk_size", compute.IsDiskShrinkage), customdiff.ForceNewIfChange("master_instance_name", isMasterInstanceNameSet), customdiff.IfValueChange("instance_type", isReplicaPromoteRequested, checkPromoteConfigurationsAndUpdateDiff), @@ -361,22 +377,10 @@ is set to true. 
Defaults to ZONAL.`, Description: `The name of server instance collation.`, }, "database_flags": { - Type: schema.TypeList, + Type: schema.TypeSet, Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "value": { - Type: schema.TypeString, - Required: true, - Description: `Value of the flag.`, - }, - "name": { - Type: schema.TypeString, - Required: true, - Description: `Name of the flag.`, - }, - }, - }, + Set: schema.HashResource(sqlDatabaseFlagSchemaElem), + Elem: sqlDatabaseFlagSchemaElem, }, "disk_autoresize": { Type: schema.TypeBool, @@ -1265,7 +1269,7 @@ func expandSqlDatabaseInstanceSettings(configured []interface{}, databaseVersion DeletionProtectionEnabled: _settings["deletion_protection_enabled"].(bool), UserLabels: tpgresource.ConvertStringMap(_settings["user_labels"].(map[string]interface{})), BackupConfiguration: expandBackupConfiguration(_settings["backup_configuration"].([]interface{})), - DatabaseFlags: expandDatabaseFlags(_settings["database_flags"].([]interface{})), + DatabaseFlags: expandDatabaseFlags(_settings["database_flags"].(*schema.Set).List()), IpConfiguration: expandIpConfiguration(_settings["ip_configuration"].([]interface{}), databaseVersion), LocationPreference: expandLocationPreference(_settings["location_preference"].([]interface{})), MaintenanceWindow: expandMaintenanceWindow(_settings["maintenance_window"].([]interface{})), diff --git a/google/services/sql/resource_sql_database_instance_test.go b/google/services/sql/resource_sql_database_instance_test.go index 3d3d0f3daf1..210da6408ed 100644 --- a/google/services/sql/resource_sql_database_instance_test.go +++ b/google/services/sql/resource_sql_database_instance_test.go @@ -180,8 +180,12 @@ func TestAccSqlDatabaseInstance_deleteDefaultUserBeforeSubsequentApiCalls(t *tes t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - addressName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network-clone-2") + testId := "sql-instance-clone-2" + networkName := acctest.BootstrapSharedTestNetwork(t, testId) + projectNumber := envvar.GetTestProjectNumberFromEnv() + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + addressName := acctest.BootstrapSharedTestGlobalAddress(t, testId, networkId) + acctest.BootstrapSharedServiceNetworkingConnection(t, testId) // 1. Create an instance. // 2. Add a root@'%' user. 
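Reviewer note on the `database_flags` hunk above: moving the block from `schema.TypeList` to `schema.TypeSet` keyed by `schema.HashResource(sqlDatabaseFlagSchemaElem)` is what makes flag ordering irrelevant at plan time. A minimal, standalone sketch of that behavior follows; it is not provider code, and the element schema is trimmed to just `name` and `value` for illustration.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func main() {
	// Same shape as the database_flags element: a name/value pair.
	elem := &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name":  {Type: schema.TypeString, Required: true},
			"value": {Type: schema.TypeString, Required: true},
		},
	}
	hash := schema.HashResource(elem)

	// The same two flags, listed in different orders.
	a := schema.NewSet(hash, []interface{}{
		map[string]interface{}{"name": "character_set_server", "value": "utf8mb4"},
		map[string]interface{}{"name": "auto_increment_increment", "value": "2"},
	})
	b := schema.NewSet(hash, []interface{}{
		map[string]interface{}{"name": "auto_increment_increment", "value": "2"},
		map[string]interface{}{"name": "character_set_server", "value": "utf8mb4"},
	})

	// Set membership is compared by element hash, so both differences are
	// empty and reordering the blocks produces no diff.
	fmt.Println(a.Difference(b).Len(), b.Difference(a).Len()) // 0 0
}
```

This is the behavior exercised by `TestAccSqlDatabaseInstance_updateDifferentFlagOrder` further down in this diff, which re-applies the same flags in reverse order with `PlanOnly` set and expects an empty plan.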
@@ -193,7 +197,7 @@ func TestAccSqlDatabaseInstance_deleteDefaultUserBeforeSubsequentApiCalls(t *tes CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressName, false, false), + Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, false, false), }, { PreConfig: func() { @@ -738,8 +742,7 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(t *te t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - addressName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network") + networkName := acctest.BootstrapSharedServiceNetworkingConnection(t, "sql-instance-1") acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -747,7 +750,7 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(t *te CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressName, false, false), + Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, false, false), }, { ResourceName: "google_sql_database_instance.instance", @@ -756,7 +759,7 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(t *te ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressName, true, false), + Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, true, false), }, { ResourceName: "google_sql_database_instance.instance", @@ -765,7 +768,7 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(t *te ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressName, true, true), + Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, true, true), }, { ResourceName: "google_sql_database_instance.instance", @@ -774,7 +777,7 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(t *te ImportStateVerifyIgnore: []string{"deletion_protection"}, }, { - Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressName, true, false), + Config: testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, true, false), }, { ResourceName: "google_sql_database_instance.instance", @@ -977,10 +980,19 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRange(t *testi t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - addressName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network-allocated") - addressName_update := "tf-test-" + acctest.RandString(t, 10) + "update" - networkName_update := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network-allocated-update") + + projectNumber := envvar.GetTestProjectNumberFromEnv() + testId := "sql-instance-allocated-1" + networkName := 
acctest.BootstrapSharedTestNetwork(t, testId) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + addressName := acctest.BootstrapSharedTestGlobalAddress(t, testId, networkId) + acctest.BootstrapSharedServiceNetworkingConnection(t, testId) + + updateTestId := "sql-instance-allocated-update-1" + networkName_update := acctest.BootstrapSharedTestNetwork(t, updateTestId) + networkId_update := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName_update) + addressName_update := acctest.BootstrapSharedTestGlobalAddress(t, updateTestId, networkId_update) + acctest.BootstrapSharedServiceNetworkingConnection(t, updateTestId) acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -1014,8 +1026,13 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRangeReplica(t t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - addressName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network-replica") + + projectNumber := envvar.GetTestProjectNumberFromEnv() + testId := "sql-instance-replica-1" + networkName := acctest.BootstrapSharedTestNetwork(t, testId) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + addressName := acctest.BootstrapSharedTestGlobalAddress(t, testId, networkId) + acctest.BootstrapSharedServiceNetworkingConnection(t, testId) acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -1046,8 +1063,12 @@ func TestAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRangeClone(t * t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - addressName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-network-clone") + projectNumber := envvar.GetTestProjectNumberFromEnv() + testId := "sql-instance-clone-1" + networkName := acctest.BootstrapSharedTestNetwork(t, testId) + networkId := fmt.Sprintf("projects/%v/global/networks/%v", projectNumber, networkName) + addressName := acctest.BootstrapSharedTestGlobalAddress(t, testId, networkId) + acctest.BootstrapSharedServiceNetworkingConnection(t, testId) acctest.VcrTest(t, resource.TestCase{ PreCheck: func() { acctest.AccTestPreCheck(t) }, @@ -1447,8 +1468,7 @@ func TestAccSqlDatabaseInstance_ActiveDirectory(t *testing.T) { t.Parallel() databaseName := "tf-test-" + acctest.RandString(t, 10) - networkName := acctest.BootstrapSharedTestNetwork(t, "sql-instance-private-test-ad") - addressName := "tf-test-" + acctest.RandString(t, 10) + networkName := acctest.BootstrapSharedServiceNetworkingConnection(t, "sql-instance-ad-1") rootPassword := acctest.RandString(t, 15) adDomainName := acctest.BootstrapSharedTestADDomain(t, "test-domain", networkName) @@ -1458,7 +1478,7 @@ func TestAccSqlDatabaseInstance_ActiveDirectory(t *testing.T) { CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), Steps: []resource.TestStep{ { - Config: testGoogleSqlDatabaseInstance_ActiveDirectoryConfig(databaseName, networkName, addressName, rootPassword, adDomainName), + Config: testGoogleSqlDatabaseInstance_ActiveDirectoryConfig(databaseName, networkName, rootPassword, adDomainName), }, { ResourceName: "google_sql_database_instance.instance-with-ad", @@ -1679,6 +1699,34 @@ func TestAccSqlDatabaseInstance_Timezone(t *testing.T) { }) } +func TestAccSqlDatabaseInstance_updateDifferentFlagOrder(t *testing.T) { + t.Parallel() + + 
instance := "tf-test-" + acctest.RandString(t, 10) + + acctest.VcrTest(t, resource.TestCase{ + PreCheck: func() { acctest.AccTestPreCheck(t) }, + ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t), + CheckDestroy: testAccSqlDatabaseInstanceDestroyProducer(t), + Steps: []resource.TestStep{ + { + Config: testGoogleSqlDatabaseInstance_flags(instance), + }, + { + ResourceName: "google_sql_database_instance.instance", + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"deletion_protection"}, + }, + { + Config: testGoogleSqlDatabaseInstance_flags_update(instance), + PlanOnly: true, + ExpectNonEmptyPlan: false, + }, + }, + }) +} + func TestAccSqlDatabaseInstance_sqlMysqlInstancePvpExample(t *testing.T) { t.Parallel() @@ -2148,28 +2196,13 @@ resource "google_sql_database_instance" "instance" { } ` -func testGoogleSqlDatabaseInstance_ActiveDirectoryConfig(databaseName, networkName, addressRangeName, rootPassword, adDomainName string) string { +func testGoogleSqlDatabaseInstance_ActiveDirectoryConfig(databaseName, networkName, rootPassword, adDomainName string) string { return fmt.Sprintf(` data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance-with-ad" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = "SQLSERVER_2017_STANDARD" @@ -2186,7 +2219,7 @@ resource "google_sql_database_instance" "instance-with-ad" { domain = "%s" } } -}`, networkName, addressRangeName, databaseName, rootPassword, adDomainName) +}`, networkName, databaseName, rootPassword, adDomainName) } func testGoogleSqlDatabaseInstance_DenyMaintenancePeriodConfig(databaseName, endDate, startDate, time string) string { @@ -2777,7 +2810,7 @@ func verifyPscOperation(resourceName string, isPscConfigExpected bool, expectedP } } -func testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName, addressRangeName string, specifyPrivatePathOption bool, enablePrivatePath bool) string { +func testAccSqlDatabaseInstance_withPrivateNetwork_withoutAllocatedIpRange(databaseName, networkName string, specifyPrivatePathOption bool, enablePrivatePath bool) string { privatePathOption := "" if specifyPrivatePathOption { privatePathOption = fmt.Sprintf("enable_private_path_for_google_cloud_services = %t", enablePrivatePath) @@ -2788,22 +2821,7 @@ data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = 
"MYSQL_5_7" @@ -2817,7 +2835,7 @@ resource "google_sql_database_instance" "instance" { } } } -`, networkName, addressRangeName, databaseName, privatePathOption) +`, networkName, databaseName, privatePathOption) } func testAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRange(databaseName, networkName, addressRangeName string) string { @@ -2826,22 +2844,7 @@ data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = "MYSQL_5_7" @@ -2851,11 +2854,11 @@ resource "google_sql_database_instance" "instance" { ip_configuration { ipv4_enabled = "false" private_network = data.google_compute_network.servicenet.self_link - allocated_ip_range = google_compute_global_address.foobar.name + allocated_ip_range = "%s" } } } -`, networkName, addressRangeName, databaseName) +`, networkName, databaseName, addressRangeName) } func testAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRangeReplica(databaseName, networkName, addressRangeName string) string { @@ -2864,22 +2867,7 @@ data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = "MYSQL_5_7" @@ -2898,7 +2886,6 @@ resource "google_sql_database_instance" "instance" { } } resource "google_sql_database_instance" "replica1" { - depends_on = [google_service_networking_connection.foobar] name = "%s-replica1" region = "us-central1" database_version = "MYSQL_5_7" @@ -2908,7 +2895,7 @@ resource "google_sql_database_instance" "replica1" { ip_configuration { ipv4_enabled = "false" private_network = data.google_compute_network.servicenet.self_link - allocated_ip_range = google_compute_global_address.foobar.name + allocated_ip_range = "%s" } } @@ -2923,7 +2910,7 @@ resource "google_sql_database_instance" "replica1" { verify_server_certificate = false } } -`, networkName, addressRangeName, databaseName, databaseName) +`, networkName, databaseName, databaseName, addressRangeName) } func testAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRangeClone(databaseName, networkName, addressRangeName string) string { @@ -2932,22 +2919,7 @@ data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource 
"google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = "MYSQL_5_7" @@ -2974,11 +2946,11 @@ resource "google_sql_database_instance" "clone1" { clone { source_instance_name = google_sql_database_instance.instance.name - allocated_ip_range = google_compute_global_address.foobar.name + allocated_ip_range = "%s" } } -`, networkName, addressRangeName, databaseName, databaseName) +`, networkName, databaseName, databaseName, addressRangeName) } func testAccSqlDatabaseInstance_withPrivateNetwork_withAllocatedIpRangeClone_withSettings(databaseName, networkName, addressRangeName string) string { @@ -2987,22 +2959,7 @@ data "google_compute_network" "servicenet" { name = "%s" } -resource "google_compute_global_address" "foobar" { - name = "%s" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 16 - network = data.google_compute_network.servicenet.self_link -} - -resource "google_service_networking_connection" "foobar" { - network = data.google_compute_network.servicenet.self_link - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.foobar.name] -} - resource "google_sql_database_instance" "instance" { - depends_on = [google_service_networking_connection.foobar] name = "%s" region = "us-central1" database_version = "MYSQL_5_7" @@ -3029,7 +2986,7 @@ resource "google_sql_database_instance" "clone1" { clone { source_instance_name = google_sql_database_instance.instance.name - allocated_ip_range = google_compute_global_address.foobar.name + allocated_ip_range = "%s" } settings { @@ -3039,7 +2996,7 @@ resource "google_sql_database_instance" "clone1" { } } } -`, networkName, addressRangeName, databaseName, databaseName) +`, networkName, databaseName, databaseName, addressRangeName) } var testGoogleSqlDatabaseInstance_settings = ` @@ -3845,6 +3802,50 @@ func checkInstanceTypeIsPresent(resourceName string) func(*terraform.State) erro } } +func testGoogleSqlDatabaseInstance_flags(instance string) string { + return fmt.Sprintf(` +resource "google_sql_database_instance" "instance" { + name = "%s" + region = "us-central1" + database_version = "MYSQL_5_7" + deletion_protection = false + settings { + tier = "db-f1-micro" + + database_flags { + name = "character_set_server" + value = "utf8mb4" + } + database_flags { + name = "auto_increment_increment" + value = "2" + } + } +}`, instance) +} + +func testGoogleSqlDatabaseInstance_flags_update(instance string) string { + return fmt.Sprintf(` +resource "google_sql_database_instance" "instance" { + name = "%s" + region = "us-central1" + database_version = "MYSQL_5_7" + deletion_protection = false + settings { + tier = "db-f1-micro" + + database_flags { + name = "auto_increment_increment" + value = "2" + } + database_flags { + name = "character_set_server" + value = "utf8mb4" + } + } +}`, instance) +} + func testGoogleSqlDatabaseInstance_readReplica(instance string) string { return fmt.Sprintf(` resource "google_sql_database_instance" "master" { diff --git a/google/services/sql/resource_sql_source_representation_instance.go b/google/services/sql/resource_sql_source_representation_instance.go index 9a66b6eafb7..fa3c050e0e6 100644 --- 
a/google/services/sql/resource_sql_source_representation_instance.go +++ b/google/services/sql/resource_sql_source_representation_instance.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -50,6 +51,10 @@ func ResourceSQLSourceRepresentationInstance() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "database_version": { Type: schema.TypeString, @@ -365,9 +370,9 @@ func resourceSQLSourceRepresentationInstanceDelete(d *schema.ResourceData, meta func resourceSQLSourceRepresentationInstanceImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/instances/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/instances/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/sql/resource_sql_ssl_cert.go b/google/services/sql/resource_sql_ssl_cert.go index 91abc16d8b8..f078b3b38ea 100644 --- a/google/services/sql/resource_sql_ssl_cert.go +++ b/google/services/sql/resource_sql_ssl_cert.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-google/google/tpgresource" transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" sqladmin "google.golang.org/api/sqladmin/v1beta4" ) @@ -27,6 +28,10 @@ func ResourceSqlSslCert() *schema.Resource { Delete: schema.DefaultTimeout(10 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "common_name": { Type: schema.TypeString, diff --git a/google/services/sql/resource_sql_user.go b/google/services/sql/resource_sql_user.go index 6ca23f91e61..7d97d4e3e83 100644 --- a/google/services/sql/resource_sql_user.go +++ b/google/services/sql/resource_sql_user.go @@ -12,6 +12,7 @@ import ( transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/errwrap" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" sqladmin "google.golang.org/api/sqladmin/v1beta4" @@ -58,6 +59,10 @@ func ResourceSqlUser() *schema.Resource { Delete: schema.DefaultTimeout(10 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + SchemaVersion: 1, MigrateState: resourceSqlUserMigrateState, diff --git a/google/services/storage/data_source_google_storage_project_service_account.go b/google/services/storage/data_source_google_storage_project_service_account.go index eac2ae55f08..46f5351b66a 100644 --- a/google/services/storage/data_source_google_storage_project_service_account.go +++ b/google/services/storage/data_source_google_storage_project_service_account.go @@ -57,7 +57,7 @@ func dataSourceGoogleStorageProjectServiceAccountRead(d *schema.ResourceData, me serviceAccount, err := serviceAccountGetRequest.Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "GCS service 
account not found") + return transport_tpg.HandleDataSourceNotFoundError(err, d, "GCS service account not found", fmt.Sprintf("Project %q GCS service account", project)) } if err := d.Set("project", project); err != nil { diff --git a/google/services/storage/resource_storage_bucket.go b/google/services/storage/resource_storage_bucket.go index e5d67dd2538..bffbddb6e17 100644 --- a/google/services/storage/resource_storage_bucket.go +++ b/google/services/storage/resource_storage_bucket.go @@ -38,6 +38,7 @@ func ResourceStorageBucket() *schema.Resource { }, CustomizeDiff: customdiff.All( customdiff.ForceNewIfChange("retention_policy.0.is_locked", isPolicyLocked), + tpgresource.SetLabelsDiff, ), Timeouts: &schema.ResourceTimeout{ @@ -46,6 +47,15 @@ func ResourceStorageBucket() *schema.Resource { Read: schema.DefaultTimeout(4 * time.Minute), }, + SchemaVersion: 1, + StateUpgraders: []schema.StateUpgrader{ + { + Type: resourceStorageBucketV0().CoreConfigSchema().ImpliedType(), + Upgrade: ResourceStorageBucketStateUpgradeV0, + Version: 0, + }, + }, + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -84,13 +94,24 @@ func ResourceStorageBucket() *schema.Resource { }, "labels": { - Type: schema.TypeMap, - Optional: true, - Computed: true, - // GCP (Dataplex) automatically adds labels - DiffSuppressFunc: resourceDataplexLabelDiffSuppress, - Elem: &schema.Schema{Type: schema.TypeString}, - Description: `A set of key/value label pairs to assign to the bucket.`, + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the bucket.`, + }, + + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "location": { @@ -443,7 +464,8 @@ func ResourceStorageBucket() *schema.Resource { } } -const resourceDataplexGoogleProvidedLabelPrefix = "labels.goog-dataplex" +const resourceDataplexGoogleLabelPrefix = "goog-dataplex" +const resourceDataplexGoogleProvidedLabelPrefix = "labels." + resourceDataplexGoogleLabelPrefix func resourceDataplexLabelDiffSuppress(k, old, new string, d *schema.ResourceData) bool { if strings.HasPrefix(k, resourceDataplexGoogleProvidedLabelPrefix) && new == "" { @@ -495,7 +517,7 @@ func resourceStorageBucketCreate(d *schema.ResourceData, meta interface{}) error // Create a bucket, setting the labels, location and name. sb := &storage.Bucket{ Name: bucket, - Labels: tpgresource.ExpandLabels(d), + Labels: tpgresource.ExpandEffectiveLabels(d), Location: location, IamConfiguration: expandIamConfiguration(d), } @@ -696,15 +718,15 @@ func resourceStorageBucketUpdate(d *schema.ResourceData, meta interface{}) error } } - if d.HasChange("labels") { - sb.Labels = tpgresource.ExpandLabels(d) + if d.HasChange("effective_labels") { + sb.Labels = tpgresource.ExpandEffectiveLabels(d) if len(sb.Labels) == 0 { sb.NullFields = append(sb.NullFields, "Labels") } // To delete a label using PATCH, we have to explicitly set its value // to null. 
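Note on the `resourceStorageBucketUpdate` hunk that continues just below: when labels are removed, the PATCH body must list each removed key under `NullFields` as "Labels.<key>", since simply omitting a key does not delete it server-side. A dependency-free sketch of that bookkeeping with plain maps; `labelNullFields` is an illustrative helper name, not provider code.

```go
package main

import "fmt"

// labelNullFields returns the NullFields entries needed to delete labels that
// were present in the old effective labels but are absent from the new ones.
func labelNullFields(oldLabels, newLabels map[string]string) []string {
	var nullFields []string
	for k := range oldLabels {
		if _, ok := newLabels[k]; !ok {
			nullFields = append(nullFields, fmt.Sprintf("Labels.%s", k))
		}
	}
	return nullFields
}

func main() {
	oldLabels := map[string]string{"team": "storage", "env": "dev"}
	newLabels := map[string]string{"team": "storage"}
	fmt.Println(labelNullFields(oldLabels, newLabels)) // [Labels.env]
}
```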
- old, _ := d.GetChange("labels") + old, _ := d.GetChange("effective_labels") for k := range old.(map[string]interface{}) { if _, ok := sb.Labels[k]; !ok { sb.NullFields = append(sb.NullFields, fmt.Sprintf("Labels.%s", k)) @@ -1598,7 +1620,13 @@ func setStorageBucket(d *schema.ResourceData, config *transport_tpg.Config, res if err := d.Set("lifecycle_rule", flattenBucketLifecycle(res.Lifecycle)); err != nil { return fmt.Errorf("Error setting lifecycle_rule: %s", err) } - if err := d.Set("labels", res.Labels); err != nil { + if err := tpgresource.SetLabels(res.Labels, d, "labels"); err != nil { + return fmt.Errorf("Error setting labels: %s", err) + } + if err := tpgresource.SetLabels(res.Labels, d, "terraform_labels"); err != nil { + return fmt.Errorf("Error setting terraform_labels: %s", err) + } + if err := d.Set("effective_labels", res.Labels); err != nil { return fmt.Errorf("Error setting labels: %s", err) } if err := d.Set("website", flattenBucketWebsite(res.Website)); err != nil { diff --git a/google/services/storage/resource_storage_bucket_access_control.go b/google/services/storage/resource_storage_bucket_access_control.go index ee300a0bcd6..871b2678c6b 100644 --- a/google/services/storage/resource_storage_bucket_access_control.go +++ b/google/services/storage/resource_storage_bucket_access_control.go @@ -333,7 +333,7 @@ func resourceStorageBucketAccessControlDelete(d *schema.ResourceData, meta inter func resourceStorageBucketAccessControlImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P[^/]+)", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storage/resource_storage_bucket_migrate.go b/google/services/storage/resource_storage_bucket_migrate.go new file mode 100644 index 00000000000..1ec15203aa0 --- /dev/null +++ b/google/services/storage/resource_storage_bucket_migrate.go @@ -0,0 +1,417 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package storage + +import ( + "context" + "math" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + + "github.com/hashicorp/terraform-provider-google/google/tpgresource" +) + +func resourceStorageBucketV0() *schema.Resource { + return &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + Description: `The name of the bucket.`, + }, + + "encryption": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "default_kms_key_name": { + Type: schema.TypeString, + Required: true, + Description: `A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified. You must pay attention to whether the crypto key is available in the location that this bucket is created in. See the docs for more details.`, + }, + }, + }, + Description: `The bucket's encryption configuration.`, + }, + + "requester_pays": { + Type: schema.TypeBool, + Optional: true, + Description: `Enables Requester Pays on a storage bucket.`, + }, + + "force_destroy": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `When deleting a bucket, this boolean option will delete all contained objects. 
If you try to delete a bucket that contains objects, Terraform will fail that run.`, + }, + + "labels": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + // GCP (Dataplex) automatically adds labels + DiffSuppressFunc: resourceDataplexLabelDiffSuppress, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `A set of key/value label pairs to assign to the bucket.`, + }, + + "location": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + StateFunc: func(s interface{}) string { + return strings.ToUpper(s.(string)) + }, + Description: `The Google Cloud Storage location`, + }, + + "project": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + Description: `The ID of the project in which the resource belongs. If it is not provided, the provider project is used.`, + }, + + "self_link": { + Type: schema.TypeString, + Computed: true, + Description: `The URI of the created resource.`, + }, + + "url": { + Type: schema.TypeString, + Computed: true, + Description: `The base URL of the bucket, in the format gs://.`, + }, + + "storage_class": { + Type: schema.TypeString, + Optional: true, + Default: "STANDARD", + Description: `The Storage Class of the new bucket. Supported values include: STANDARD, MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, ARCHIVE.`, + }, + + "lifecycle_rule": { + Type: schema.TypeList, + Optional: true, + MaxItems: 100, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 1, + Set: resourceGCSBucketLifecycleRuleActionHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + Description: `The type of the action of this Lifecycle Rule. Supported values include: Delete, SetStorageClass and AbortIncompleteMultipartUpload.`, + }, + "storage_class": { + Type: schema.TypeString, + Optional: true, + Description: `The target Storage Class of objects affected by this Lifecycle Rule. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, ARCHIVE.`, + }, + }, + }, + Description: `The Lifecycle Rule's action configuration. A single block of this type is supported.`, + }, + "condition": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 1, + Set: resourceGCSBucketLifecycleRuleConditionHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "age": { + Type: schema.TypeInt, + Optional: true, + Description: `Minimum age of an object in days to satisfy this condition.`, + }, + "created_before": { + Type: schema.TypeString, + Optional: true, + Description: `Creation date of an object in RFC 3339 (e.g. 2017-06-13) to satisfy this condition.`, + }, + "custom_time_before": { + Type: schema.TypeString, + Optional: true, + Description: `Creation date of an object in RFC 3339 (e.g. 2017-06-13) to satisfy this condition.`, + }, + "days_since_custom_time": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of days elapsed since the user-specified timestamp set on an object.`, + }, + "days_since_noncurrent_time": { + Type: schema.TypeInt, + Optional: true, + Description: `Number of days elapsed since the noncurrent timestamp of an object. This + condition is relevant only for versioned objects.`, + }, + "noncurrent_time_before": { + Type: schema.TypeString, + Optional: true, + Description: `Creation date of an object in RFC 3339 (e.g. 
2017-06-13) to satisfy this condition.`, + }, + "with_state": { + Type: schema.TypeString, + Computed: true, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{"LIVE", "ARCHIVED", "ANY", ""}, false), + Description: `Match to live and/or archived objects. Unversioned buckets have only live objects. Supported values include: "LIVE", "ARCHIVED", "ANY".`, + }, + "matches_storage_class": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `Storage Class of objects to satisfy this condition. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, ARCHIVE, STANDARD, DURABLE_REDUCED_AVAILABILITY.`, + }, + "num_newer_versions": { + Type: schema.TypeInt, + Optional: true, + Description: `Relevant only for versioned objects. The number of newer versions of an object to satisfy this condition.`, + }, + "matches_prefix": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `One or more matching name prefixes to satisfy this condition.`, + }, + "matches_suffix": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Description: `One or more matching name suffixes to satisfy this condition.`, + }, + }, + }, + Description: `The Lifecycle Rule's condition configuration.`, + }, + }, + }, + Description: `The bucket's Lifecycle Rules configuration.`, + }, + + "versioning": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `While set to true, versioning is fully enabled for this bucket.`, + }, + }, + }, + Description: `The bucket's Versioning configuration.`, + }, + + "autoclass": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + Description: `While set to true, autoclass automatically transitions objects in your bucket to appropriate storage classes based on each object's access pattern.`, + }, + }, + }, + Description: `The bucket's autoclass configuration.`, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + _, n := d.GetChange(strings.TrimSuffix(k, ".#")) + if !strings.HasSuffix(k, ".#") { + return false + } + var l []interface{} + if new == "1" && old == "0" { + l = n.([]interface{}) + contents, ok := l[0].(map[string]interface{}) + if !ok { + return false + } + if contents["enabled"] == false { + return true + } + } + return false + }, + }, + "website": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "main_page_suffix": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: []string{"website.0.not_found_page", "website.0.main_page_suffix"}, + Description: `Behaves as the bucket's directory index where missing objects are treated as potential directories.`, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + return old != "" && new == "" + }, + }, + "not_found_page": { + Type: schema.TypeString, + Optional: true, + AtLeastOneOf: []string{"website.0.main_page_suffix", "website.0.not_found_page"}, + Description: `The custom object to return when a requested resource is not found.`, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + return old != "" && 
new == "" + }, + }, + }, + }, + Description: `Configuration if the bucket acts as a website.`, + }, + + "retention_policy": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "is_locked": { + Type: schema.TypeBool, + Optional: true, + Default: false, + Description: `If set to true, the bucket will be locked and permanently restrict edits to the bucket's retention policy. Caution: Locking a bucket is an irreversible action.`, + }, + "retention_period": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, math.MaxInt32), + Description: `The period of time, in seconds, that objects in the bucket must be retained and cannot be deleted, overwritten, or archived. The value must be less than 3,155,760,000 seconds.`, + }, + }, + }, + Description: `Configuration of the bucket's data retention policy for how long objects in the bucket should be retained.`, + }, + + "cors": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "origin": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `The list of Origins eligible to receive CORS response headers. Note: "*" is permitted in the list of origins, and means "any Origin".`, + }, + "method": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: "*" is permitted in the list of methods, and means "any method".`, + }, + "response_header": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.`, + }, + "max_age_seconds": { + Type: schema.TypeInt, + Optional: true, + Description: `The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.`, + }, + }, + }, + Description: `The bucket's Cross-Origin Resource Sharing (CORS) configuration.`, + }, + + "default_event_based_hold": { + Type: schema.TypeBool, + Optional: true, + Description: `Whether or not to automatically apply an eventBasedHold to new objects added to the bucket.`, + }, + + "logging": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_bucket": { + Type: schema.TypeString, + Required: true, + Description: `The bucket that will receive log objects.`, + }, + "log_object_prefix": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `The object prefix for log objects. 
If it's not provided, by default Google Cloud Storage sets this to this bucket's name.`, + }, + }, + }, + Description: `The bucket's Access & Storage Logs configuration.`, + }, + "uniform_bucket_level_access": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + Description: `Enables uniform bucket-level access on a bucket.`, + }, + "custom_placement_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "data_locations": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + MaxItems: 2, + MinItems: 2, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + Description: `The list of individual regions that comprise a dual-region bucket. See the docs for a list of acceptable regions. Note: If any of the data_locations changes, it will recreate the bucket.`, + }, + }, + }, + Description: `The bucket's custom location configuration, which specifies the individual regions that comprise a dual-region bucket. If the bucket is designated a single or multi-region, the parameters are empty.`, + }, + "public_access_prevention": { + Type: schema.TypeString, + Optional: true, + Computed: true, + Description: `Prevents public access to a bucket.`, + }, + }, + UseJSONNumber: true, + } +} + +func ResourceStorageBucketStateUpgradeV0(_ context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) { + return tpgresource.LabelsStateUpgrade(rawState, resourceDataplexGoogleLabelPrefix) +} diff --git a/google/services/storage/resource_storage_bucket_test.go b/google/services/storage/resource_storage_bucket_test.go index a39b14bbb60..c1f2264f8fb 100644 --- a/google/services/storage/resource_storage_bucket_test.go +++ b/google/services/storage/resource_storage_bucket_test.go @@ -979,7 +979,7 @@ func TestAccStorageBucket_labels(t *testing.T) { ResourceName: "google_storage_bucket.bucket", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"force_destroy"}, + ImportStateVerifyIgnore: []string{"force_destroy", "labels", "terraform_labels"}, }, // Down to only one label (test single label deletion) { @@ -989,7 +989,7 @@ func TestAccStorageBucket_labels(t *testing.T) { ResourceName: "google_storage_bucket.bucket", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"force_destroy"}, + ImportStateVerifyIgnore: []string{"force_destroy", "labels", "terraform_labels"}, }, // And make sure deleting all labels work { @@ -999,7 +999,7 @@ func TestAccStorageBucket_labels(t *testing.T) { ResourceName: "google_storage_bucket.bucket", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"force_destroy"}, + ImportStateVerifyIgnore: []string{"force_destroy", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/storage/resource_storage_default_object_access_control.go b/google/services/storage/resource_storage_default_object_access_control.go index 390bd24265f..8ed2128dd1f 100644 --- a/google/services/storage/resource_storage_default_object_access_control.go +++ b/google/services/storage/resource_storage_default_object_access_control.go @@ -381,7 +381,7 @@ func resourceStorageDefaultObjectAccessControlDelete(d *schema.ResourceData, met func resourceStorageDefaultObjectAccessControlImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P[^/]+)", + 
"^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storage/resource_storage_hmac_key.go b/google/services/storage/resource_storage_hmac_key.go index 4e5cd987063..0e475c478a8 100644 --- a/google/services/storage/resource_storage_hmac_key.go +++ b/google/services/storage/resource_storage_hmac_key.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,10 @@ func ResourceStorageHmacKey() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "service_account_email": { Type: schema.TypeString, @@ -480,9 +485,9 @@ func resourceStorageHmacKeyDelete(d *schema.ResourceData, meta interface{}) erro func resourceStorageHmacKeyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/hmacKeys/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/hmacKeys/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storage/resource_storage_object_access_control.go b/google/services/storage/resource_storage_object_access_control.go index 241e340cbab..2423583e6ed 100644 --- a/google/services/storage/resource_storage_object_access_control.go +++ b/google/services/storage/resource_storage_object_access_control.go @@ -384,7 +384,7 @@ func resourceStorageObjectAccessControlDelete(d *schema.ResourceData, meta inter func resourceStorageObjectAccessControlImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "(?P[^/]+)/(?P.+)/(?P[^/]+)", + "^(?P[^/]+)/(?P.+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storageinsights/resource_storage_insights_report_config.go b/google/services/storageinsights/resource_storage_insights_report_config.go index 82336661039..d6c309bf91b 100644 --- a/google/services/storageinsights/resource_storage_insights_report_config.go +++ b/google/services/storageinsights/resource_storage_insights_report_config.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceStorageInsightsReportConfig() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "csv_options": { Type: schema.TypeList, @@ -506,9 +511,9 @@ func resourceStorageInsightsReportConfigDelete(d *schema.ResourceData, meta inte func resourceStorageInsightsReportConfigImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/reportConfigs/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/reportConfigs/(?P[^/]+)$", + 
"^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storagetransfer/data_source_google_storage_transfer_project_service_account.go b/google/services/storagetransfer/data_source_google_storage_transfer_project_service_account.go index 2961665fb26..ae839bed30d 100644 --- a/google/services/storagetransfer/data_source_google_storage_transfer_project_service_account.go +++ b/google/services/storagetransfer/data_source_google_storage_transfer_project_service_account.go @@ -49,7 +49,7 @@ func dataSourceGoogleStorageTransferProjectServiceAccountRead(d *schema.Resource serviceAccount, err := config.NewStorageTransferClient(userAgent).GoogleServiceAccounts.Get(project).Do() if err != nil { - return transport_tpg.HandleNotFoundError(err, d, "Google Cloud Storage Transfer service account not found") + return transport_tpg.HandleDataSourceNotFoundError(err, d, "Google Cloud Storage Transfer service account not found", fmt.Sprintf("Project %q Google Cloud Storage Transfer account", project)) } d.SetId(serviceAccount.AccountEmail) diff --git a/google/services/storagetransfer/resource_storage_transfer_agent_pool.go b/google/services/storagetransfer/resource_storage_transfer_agent_pool.go index 69fe6c3e1f5..7a2442a7821 100644 --- a/google/services/storagetransfer/resource_storage_transfer_agent_pool.go +++ b/google/services/storagetransfer/resource_storage_transfer_agent_pool.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -69,6 +70,10 @@ func ResourceStorageTransferAgentPool() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -369,9 +374,9 @@ func resourceStorageTransferAgentPoolDelete(d *schema.ResourceData, meta interfa func resourceStorageTransferAgentPoolImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/agentPools/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/agentPools/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/storagetransfer/resource_storage_transfer_job.go b/google/services/storagetransfer/resource_storage_transfer_job.go index 1f5dc43acd9..26920da65ff 100644 --- a/google/services/storagetransfer/resource_storage_transfer_job.go +++ b/google/services/storagetransfer/resource_storage_transfer_job.go @@ -13,6 +13,7 @@ import ( transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/terraform-provider-google/google/verify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -63,6 +64,10 @@ func ResourceStorageTransferJob() *schema.Resource { State: resourceStorageTransferJobStateImporter, }, + CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, diff --git 
a/google/services/tags/resource_tags_tag_key.go b/google/services/tags/resource_tags_tag_key.go index 667394d9b92..87f6ccc881e 100644 --- a/google/services/tags/resource_tags_tag_key.go +++ b/google/services/tags/resource_tags_tag_key.go @@ -416,8 +416,8 @@ func resourceTagsTagKeyDelete(d *schema.ResourceData, meta interface{}) error { func resourceTagsTagKeyImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "tagKeys/(?P[^/]+)", - "(?P[^/]+)", + "^tagKeys/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/tags/resource_tags_tag_value.go b/google/services/tags/resource_tags_tag_value.go index f4e2d8ce6b1..4290dba3d28 100644 --- a/google/services/tags/resource_tags_tag_value.go +++ b/google/services/tags/resource_tags_tag_value.go @@ -380,8 +380,8 @@ func resourceTagsTagValueDelete(d *schema.ResourceData, meta interface{}) error func resourceTagsTagValueImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "tagValues/(?P[^/]+)", - "(?P[^/]+)", + "^tagValues/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/tpu/resource_tpu_node.go b/google/services/tpu/resource_tpu_node.go index ced68cde973..32ab4c2697c 100644 --- a/google/services/tpu/resource_tpu_node.go +++ b/google/services/tpu/resource_tpu_node.go @@ -102,6 +102,8 @@ func ResourceTPUNode() *schema.Resource { CustomizeDiff: customdiff.All( tpuNodeCustomizeDiff, + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, ), Schema: map[string]*schema.Schema{ @@ -145,11 +147,14 @@ is peered with another network that is using that CIDR block.`, Description: `The user-supplied description of the TPU. Maximum of 512 characters.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - ForceNew: true, - Description: `Resource labels to represent user provided metadata.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + ForceNew: true, + Description: `Resource labels to represent user provided metadata. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "network": { Type: schema.TypeString, @@ -199,6 +204,13 @@ TPU Node to is a Shared VPC network, the node must be created with this this fie ForceNew: true, Description: `The GCP location for the TPU. If it is not provided, the provider zone is used.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + ForceNew: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "network_endpoints": { Type: schema.TypeList, Computed: true, @@ -228,6 +240,13 @@ node. 
To share resources, including Google Cloud Storage data, with the Tensorflow job running in the Node, this account must have permissions to that data.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "project": { Type: schema.TypeString, Optional: true, @@ -295,10 +314,10 @@ func resourceTPUNodeCreate(d *schema.ResourceData, meta interface{}) error { } else if v, ok := d.GetOkExists("scheduling_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(schedulingConfigProp)) && (ok || !reflect.DeepEqual(v, schedulingConfigProp)) { obj["schedulingConfig"] = schedulingConfigProp } - labelsProp, err := expandTPUNodeLabels(d.Get("labels"), d, config) + labelsProp, err := expandTPUNodeEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -443,6 +462,12 @@ func resourceTPUNodeRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("labels", flattenTPUNodeLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Node: %s", err) } + if err := d.Set("terraform_labels", flattenTPUNodeTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Node: %s", err) + } + if err := d.Set("effective_labels", flattenTPUNodeEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Node: %s", err) + } return nil } @@ -568,10 +593,10 @@ func resourceTPUNodeDelete(d *schema.ResourceData, meta interface{}) error { func resourceTPUNodeImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/nodes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/nodes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -679,6 +704,36 @@ func flattenTPUNodeNetworkEndpointsPort(v interface{}, d *schema.ResourceData, c } func flattenTPUNodeLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenTPUNodeTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenTPUNodeEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -733,7 +788,7 @@ func expandTPUNodeSchedulingConfigPreemptible(v interface{}, d 
tpgresource.Terra return v, nil } -func expandTPUNodeLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandTPUNodeEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/tpu/resource_tpu_node_generated_test.go b/google/services/tpu/resource_tpu_node_generated_test.go index 94b5a12c080..683ffb26bce 100644 --- a/google/services/tpu/resource_tpu_node_generated_test.go +++ b/google/services/tpu/resource_tpu_node_generated_test.go @@ -49,7 +49,7 @@ func TestAccTPUNode_tpuNodeBasicExample(t *testing.T) { ResourceName: "google_tpu_node.tpu", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"zone"}, + ImportStateVerifyIgnore: []string{"zone", "labels", "terraform_labels"}, }, }, }) @@ -91,7 +91,7 @@ func TestAccTPUNode_tpuNodeFullExample(t *testing.T) { ResourceName: "google_tpu_node.tpu", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"zone"}, + ImportStateVerifyIgnore: []string{"zone", "labels", "terraform_labels"}, }, }, }) @@ -124,8 +124,8 @@ resource "google_tpu_node" "tpu" { } } -data "google_compute_network" "network" { - name = "default" +resource "google_compute_network" "network" { + name = "tf-test-tpu-node-network%{random_suffix}" } resource "google_compute_global_address" "service_range" { @@ -133,11 +133,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.network.id + network = google_compute_network.network.id } resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.network.id + network = google_compute_network.network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } diff --git a/google/services/vertexai/data_source_vertex_ai_index.go b/google/services/vertexai/data_source_vertex_ai_index.go index 2ce6cbd1472..7ba9ae28e0c 100644 --- a/google/services/vertexai/data_source_vertex_ai_index.go +++ b/google/services/vertexai/data_source_vertex_ai_index.go @@ -31,5 +31,17 @@ func dataSourceVertexAIIndexRead(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("Error constructing id: %s", err) } d.SetId(id) - return resourceVertexAIIndexRead(d, meta) + err = resourceVertexAIIndexRead(d, meta) + if err != nil { + return err + } + + if err := tpgresource.SetDataSourceLabels(d); err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/vertexai/resource_vertex_ai_dataset.go b/google/services/vertexai/resource_vertex_ai_dataset.go index 163554f3aaa..c85cffa2cfe 100644 --- a/google/services/vertexai/resource_vertex_ai_dataset.go +++ b/google/services/vertexai/resource_vertex_ai_dataset.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -43,6 +44,11 @@ func ResourceVertexAIDataset() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: 
map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -74,11 +80,14 @@ Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/ }, }, "labels": { - Type: schema.TypeMap, - Computed: true, - Optional: true, - Description: `A set of key/value label pairs to assign to this Workflow.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this Workflow. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "region": { Type: schema.TypeString, @@ -92,11 +101,24 @@ Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/ Computed: true, Description: `The timestamp of when the dataset was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, Description: `The resource name of the Dataset. This value is set by Google.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -127,12 +149,6 @@ func resourceVertexAIDatasetCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(displayNameProp)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandVertexAIDatasetLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } encryptionSpecProp, err := expandVertexAIDatasetEncryptionSpec(d.Get("encryption_spec"), d, config) if err != nil { return err @@ -145,6 +161,12 @@ func resourceVertexAIDatasetCreate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("metadata_schema_uri"); !tpgresource.IsEmptyValue(reflect.ValueOf(metadataSchemaUriProp)) && (ok || !reflect.DeepEqual(v, metadataSchemaUriProp)) { obj["metadataSchemaUri"] = metadataSchemaUriProp } + labelsProp, err := expandVertexAIDatasetEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{region}}/datasets") if err != nil { @@ -275,6 +297,12 @@ func resourceVertexAIDatasetRead(d *schema.ResourceData, meta interface{}) error if err := d.Set("metadata_schema_uri", flattenVertexAIDatasetMetadataSchemaUri(res["metadataSchemaUri"], d, config)); err != nil { return fmt.Errorf("Error 
reading Dataset: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIDatasetTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Dataset: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIDatasetEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Dataset: %s", err) + } return nil } @@ -301,10 +329,10 @@ func resourceVertexAIDatasetUpdate(d *schema.ResourceData, meta interface{}) err } else if v, ok := d.GetOkExists("display_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, displayNameProp)) { obj["displayName"] = displayNameProp } - labelsProp, err := expandVertexAIDatasetLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAIDatasetEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -320,7 +348,7 @@ func resourceVertexAIDatasetUpdate(d *schema.ResourceData, meta interface{}) err updateMask = append(updateMask, "displayName") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -424,7 +452,18 @@ func flattenVertexAIDatasetUpdateTime(v interface{}, d *schema.ResourceData, con } func flattenVertexAIDatasetLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIDatasetEncryptionSpec(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -448,19 +487,27 @@ func flattenVertexAIDatasetMetadataSchemaUri(v interface{}, d *schema.ResourceDa return v } -func expandVertexAIDatasetDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandVertexAIDatasetLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenVertexAIDatasetTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenVertexAIDatasetEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandVertexAIDatasetDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandVertexAIDatasetEncryptionSpec(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, 
error) { @@ -489,3 +536,14 @@ func expandVertexAIDatasetEncryptionSpecKmsKeyName(v interface{}, d tpgresource. func expandVertexAIDatasetMetadataSchemaUri(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandVertexAIDatasetEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/vertexai/resource_vertex_ai_dataset_generated_test.go b/google/services/vertexai/resource_vertex_ai_dataset_generated_test.go index 6a9520c4cc2..9b1037b7e97 100644 --- a/google/services/vertexai/resource_vertex_ai_dataset_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_dataset_generated_test.go @@ -55,6 +55,10 @@ resource "google_vertex_ai_dataset" "dataset" { display_name = "terraform%{random_suffix}" metadata_schema_uri = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" region = "us-central1" + + labels = { + env = "test" + } } `, context) } diff --git a/google/services/vertexai/resource_vertex_ai_endpoint.go b/google/services/vertexai/resource_vertex_ai_endpoint.go index 4af6a85da0e..7fec62f3b09 100644 --- a/google/services/vertexai/resource_vertex_ai_endpoint.go +++ b/google/services/vertexai/resource_vertex_ai_endpoint.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceVertexAIEndpoint() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -88,10 +94,13 @@ func ResourceVertexAIEndpoint() *schema.Resource { }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "network": { Type: schema.TypeString, @@ -274,6 +283,12 @@ func ResourceVertexAIEndpoint() *schema.Resource { }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -284,6 +299,13 @@ func ResourceVertexAIEndpoint() *schema.Resource { Computed: true, Description: `Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by CreateModelDeploymentMonitoringJob. Format: 'projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}'`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -320,12 +342,6 @@ func resourceVertexAIEndpointCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIEndpointLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } encryptionSpecProp, err := expandVertexAIEndpointEncryptionSpec(d.Get("encryption_spec"), d, config) if err != nil { return err @@ -338,6 +354,12 @@ func resourceVertexAIEndpointCreate(d *schema.ResourceData, meta interface{}) er } else if v, ok := d.GetOkExists("network"); !tpgresource.IsEmptyValue(reflect.ValueOf(networkProp)) && (ok || !reflect.DeepEqual(v, networkProp)) { obj["network"] = networkProp } + labelsProp, err := expandVertexAIEndpointEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{location}}/endpoints?endpointId={{name}}") if err != nil { @@ -470,6 +492,12 @@ func resourceVertexAIEndpointRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("model_deployment_monitoring_job", flattenVertexAIEndpointModelDeploymentMonitoringJob(res["modelDeploymentMonitoringJob"], d, config)); err != nil { return fmt.Errorf("Error reading Endpoint: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIEndpointTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Endpoint: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIEndpointEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Endpoint: %s", err) + } return nil } @@ -502,10 +530,10 @@ func resourceVertexAIEndpointUpdate(d *schema.ResourceData, meta interface{}) er } 
else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIEndpointLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAIEndpointEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -525,7 +553,7 @@ func resourceVertexAIEndpointUpdate(d *schema.ResourceData, meta interface{}) er updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -615,9 +643,9 @@ func resourceVertexAIEndpointDelete(d *schema.ResourceData, meta interface{}) er func resourceVertexAIEndpointImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/endpoints/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/endpoints/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -925,7 +953,18 @@ func flattenVertexAIEndpointDeployedModelsEnableContainerLogging(v interface{}, } func flattenVertexAIEndpointLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIEndpointCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -961,6 +1000,25 @@ func flattenVertexAIEndpointModelDeploymentMonitoringJob(v interface{}, d *schem return v } +func flattenVertexAIEndpointTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenVertexAIEndpointEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandVertexAIEndpointDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -969,17 +1027,6 @@ func expandVertexAIEndpointDescription(v interface{}, d tpgresource.TerraformRes return v, nil } -func expandVertexAIEndpointLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandVertexAIEndpointEncryptionSpec(v 
interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { l := v.([]interface{}) if len(l) == 0 || l[0] == nil { @@ -1006,3 +1053,14 @@ func expandVertexAIEndpointEncryptionSpecKmsKeyName(v interface{}, d tpgresource func expandVertexAIEndpointNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandVertexAIEndpointEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/vertexai/resource_vertex_ai_endpoint_test.go b/google/services/vertexai/resource_vertex_ai_endpoint_test.go index da6204e4abf..f4951c20aee 100644 --- a/google/services/vertexai/resource_vertex_ai_endpoint_test.go +++ b/google/services/vertexai/resource_vertex_ai_endpoint_test.go @@ -20,7 +20,7 @@ func TestAccVertexAIEndpoint_vertexAiEndpointNetwork(t *testing.T) { context := map[string]interface{}{ "endpoint_name": fmt.Sprint(acctest.RandInt(t) % 9999999999), "kms_key_name": acctest.BootstrapKMSKeyInLocation(t, "us-central1").CryptoKey.Name, - "network_name": acctest.BootstrapSharedTestNetwork(t, "vertex-ai-endpoint-update"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "vertex-ai-endpoint-update-1"), "random_suffix": acctest.RandString(t, 10), } @@ -36,7 +36,7 @@ func TestAccVertexAIEndpoint_vertexAiEndpointNetwork(t *testing.T) { ResourceName: "google_vertex_ai_endpoint.endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "location", "region"}, + ImportStateVerifyIgnore: []string{"etag", "location", "region", "labels", "terraform_labels"}, }, { Config: testAccVertexAIEndpoint_vertexAiEndpointNetworkUpdate(context), @@ -45,7 +45,7 @@ func TestAccVertexAIEndpoint_vertexAiEndpointNetwork(t *testing.T) { ResourceName: "google_vertex_ai_endpoint.endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "location", "region"}, + ImportStateVerifyIgnore: []string{"etag", "location", "region", "labels", "terraform_labels"}, }, }, }) @@ -66,23 +66,6 @@ resource "google_vertex_ai_endpoint" "endpoint" { encryption_spec { kms_key_name = "%{kms_key_name}" } - depends_on = [ - google_service_networking_connection.vertex_vpc_connection - ] -} - -resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.vertex_range.name] -} - -resource "google_compute_global_address" "vertex_range" { - name = "tf-test-address-name%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 24 - network = data.google_compute_network.vertex_network.id } data "google_compute_network" "vertex_network" { @@ -114,23 +97,6 @@ resource "google_vertex_ai_endpoint" "endpoint" { encryption_spec { kms_key_name = "%{kms_key_name}" } - depends_on = [ - google_service_networking_connection.vertex_vpc_connection - ] -} - -resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = 
[google_compute_global_address.vertex_range.name] -} - -resource "google_compute_global_address" "vertex_range" { - name = "tf-test-address-name%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 24 - network = data.google_compute_network.vertex_network.id } data "google_compute_network" "vertex_network" { diff --git a/google/services/vertexai/resource_vertex_ai_featurestore.go b/google/services/vertexai/resource_vertex_ai_featurestore.go index 05687e4f88a..589d86f2b3b 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceVertexAIFeaturestore() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "encryption_spec": { Type: schema.TypeList, @@ -64,10 +70,14 @@ func ResourceVertexAIFeaturestore() *schema.Resource { }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this Featurestore.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this Featurestore. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { Type: schema.TypeString, @@ -124,11 +134,24 @@ func ResourceVertexAIFeaturestore() *schema.Resource { Computed: true, Description: `The timestamp of when the featurestore was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, Description: `Used to perform consistent read-modify-write updates.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -159,12 +182,6 @@ func resourceVertexAIFeaturestoreCreate(d *schema.ResourceData, meta interface{} } obj := make(map[string]interface{}) - labelsProp, err := expandVertexAIFeaturestoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } onlineServingConfigProp, err := expandVertexAIFeaturestoreOnlineServingConfig(d.Get("online_serving_config"), d, config) if err != nil { return err @@ -177,6 +194,12 @@ func resourceVertexAIFeaturestoreCreate(d *schema.ResourceData, meta interface{} } else if v, ok := 
d.GetOkExists("encryption_spec"); !tpgresource.IsEmptyValue(reflect.ValueOf(encryptionSpecProp)) && (ok || !reflect.DeepEqual(v, encryptionSpecProp)) { obj["encryptionSpec"] = encryptionSpecProp } + labelsProp, err := expandVertexAIFeaturestoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{region}}/featurestores?featurestoreId={{name}}") if err != nil { @@ -303,6 +326,12 @@ func resourceVertexAIFeaturestoreRead(d *schema.ResourceData, meta interface{}) if err := d.Set("encryption_spec", flattenVertexAIFeaturestoreEncryptionSpec(res["encryptionSpec"], d, config)); err != nil { return fmt.Errorf("Error reading Featurestore: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIFeaturestoreTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Featurestore: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIFeaturestoreEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Featurestore: %s", err) + } return nil } @@ -323,12 +352,6 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} billingProject = project obj := make(map[string]interface{}) - labelsProp, err := expandVertexAIFeaturestoreLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } onlineServingConfigProp, err := expandVertexAIFeaturestoreOnlineServingConfig(d.Get("online_serving_config"), d, config) if err != nil { return err @@ -341,6 +364,12 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} } else if v, ok := d.GetOkExists("encryption_spec"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, encryptionSpecProp)) { obj["encryptionSpec"] = encryptionSpecProp } + labelsProp, err := expandVertexAIFeaturestoreEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{region}}/featurestores/{{name}}") if err != nil { @@ -350,10 +379,6 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Updating Featurestore %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("online_serving_config") { updateMask = append(updateMask, "onlineServingConfig") } @@ -361,6 +386,10 @@ func resourceVertexAIFeaturestoreUpdate(d *schema.ResourceData, meta interface{} if d.HasChange("encryption_spec") { updateMask = append(updateMask, "encryptionSpec") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": 
strings.Join(updateMask, ",")}) @@ -463,10 +492,10 @@ func resourceVertexAIFeaturestoreDelete(d *schema.ResourceData, meta interface{} func resourceVertexAIFeaturestoreImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/featurestores/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/featurestores/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -495,7 +524,18 @@ func flattenVertexAIFeaturestoreUpdateTime(v interface{}, d *schema.ResourceData } func flattenVertexAIFeaturestoreLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIFeaturestoreOnlineServingConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -596,15 +636,23 @@ func flattenVertexAIFeaturestoreEncryptionSpecKmsKeyName(v interface{}, d *schem return v } -func expandVertexAIFeaturestoreLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenVertexAIFeaturestoreTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenVertexAIFeaturestoreEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandVertexAIFeaturestoreOnlineServingConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -693,3 +741,14 @@ func expandVertexAIFeaturestoreEncryptionSpec(v interface{}, d tpgresource.Terra func expandVertexAIFeaturestoreEncryptionSpecKmsKeyName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } + +func expandVertexAIFeaturestoreEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} diff --git a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype.go b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype.go index 601508aaeea..6be4e439062 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" 
"github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceVertexAIFeaturestoreEntitytype() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "featurestore": { Type: schema.TypeString, @@ -61,10 +66,14 @@ func ResourceVertexAIFeaturestoreEntitytype() *schema.Resource { Description: `Optional. Description of the EntityType.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this EntityType.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this EntityType. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "monitoring_config": { Type: schema.TypeList, @@ -174,11 +183,24 @@ If both FeaturestoreMonitoringConfig.SnapshotAnalysis.monitoring_interval_days a Computed: true, Description: `The timestamp of when the featurestore was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, Description: `Used to perform consistent read-modify-write updates.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -209,18 +231,18 @@ func resourceVertexAIFeaturestoreEntitytypeCreate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIFeaturestoreEntitytypeLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } monitoringConfigProp, err := expandVertexAIFeaturestoreEntitytypeMonitoringConfig(d.Get("monitoring_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("monitoring_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(monitoringConfigProp)) && (ok || !reflect.DeepEqual(v, monitoringConfigProp)) { obj["monitoringConfig"] = monitoringConfigProp } + labelsProp, err := expandVertexAIFeaturestoreEntitytypeEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceVertexAIFeaturestoreEntitytypeEncoder(d, meta, obj) if err != nil { @@ -339,6 +361,12 @@ func 
resourceVertexAIFeaturestoreEntitytypeRead(d *schema.ResourceData, meta int if err := d.Set("monitoring_config", flattenVertexAIFeaturestoreEntitytypeMonitoringConfig(res["monitoringConfig"], d, config)); err != nil { return fmt.Errorf("Error reading FeaturestoreEntitytype: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIFeaturestoreEntitytypeTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FeaturestoreEntitytype: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIFeaturestoreEntitytypeEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FeaturestoreEntitytype: %s", err) + } return nil } @@ -359,18 +387,18 @@ func resourceVertexAIFeaturestoreEntitytypeUpdate(d *schema.ResourceData, meta i } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIFeaturestoreEntitytypeLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } monitoringConfigProp, err := expandVertexAIFeaturestoreEntitytypeMonitoringConfig(d.Get("monitoring_config"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("monitoring_config"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, monitoringConfigProp)) { obj["monitoringConfig"] = monitoringConfigProp } + labelsProp, err := expandVertexAIFeaturestoreEntitytypeEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceVertexAIFeaturestoreEntitytypeEncoder(d, meta, obj) if err != nil { @@ -389,13 +417,13 @@ func resourceVertexAIFeaturestoreEntitytypeUpdate(d *schema.ResourceData, meta i updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("monitoring_config") { updateMask = append(updateMask, "monitoringConfig") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -522,7 +550,18 @@ func flattenVertexAIFeaturestoreEntitytypeUpdateTime(v interface{}, d *schema.Re } func flattenVertexAIFeaturestoreEntitytypeLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIFeaturestoreEntitytypeMonitoringConfig(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -656,19 +695,27 @@ func flattenVertexAIFeaturestoreEntitytypeMonitoringConfigCategoricalThresholdCo return v } -func expandVertexAIFeaturestoreEntitytypeDescription(v interface{}, d tpgresource.TerraformResourceData, config 
*transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandVertexAIFeaturestoreEntitytypeLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenVertexAIFeaturestoreEntitytypeTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenVertexAIFeaturestoreEntitytypeEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + +func expandVertexAIFeaturestoreEntitytypeDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil } func expandVertexAIFeaturestoreEntitytypeMonitoringConfig(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -836,6 +883,17 @@ func expandVertexAIFeaturestoreEntitytypeMonitoringConfigCategoricalThresholdCon return v, nil } +func expandVertexAIFeaturestoreEntitytypeEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceVertexAIFeaturestoreEntitytypeEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { if v, ok := d.GetOk("featurestore"); ok { re := regexp.MustCompile("projects/(.+)/locations/(.+)/featurestores/(.+)$") diff --git a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go index 05e7f6c4905..af8ace194fb 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -48,6 +49,10 @@ func ResourceVertexAIFeaturestoreEntitytypeFeature() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + ), + Schema: map[string]*schema.Schema{ "entitytype": { Type: schema.TypeString, @@ -67,10 +72,14 @@ func ResourceVertexAIFeaturestoreEntitytypeFeature() *schema.Resource { Description: `Description of the feature.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to the feature.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to the feature. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
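The `tpgresource.SetLabelsDiff` CustomizeDiff wired in above is what populates `terraform_labels` and `effective_labels` at plan time. A rough sketch of the merge it is responsible for, under the assumption that resource-level labels simply overlay the provider's `default_labels`; the helper name and signature here are illustrative, not the provider's:

```go
package main

import "fmt"

// mergeLabels overlays resource-level labels on top of provider default
// labels; resource values win on key collisions. This models the assumed
// effective behaviour, not the provider's actual implementation.
func mergeLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range resourceLabels {
		merged[k] = v
	}
	return merged
}

func main() {
	providerDefaults := map[string]string{"team": "ml", "env": "staging"}
	resourceLabels := map[string]string{"env": "prod"}

	// At plan time a merged map like this would back "terraform_labels" and
	// seed "effective_labels"; any out-of-band labels are added on read.
	fmt.Println(mergeLabels(providerDefaults, resourceLabels)) // map[env:prod team:ml]
}
```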
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { Type: schema.TypeString, @@ -83,11 +92,24 @@ func ResourceVertexAIFeaturestoreEntitytypeFeature() *schema.Resource { Computed: true, Description: `The timestamp of when the entity type was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, Description: `Used to perform consistent read-modify-write updates.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -112,12 +134,6 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureCreate(d *schema.ResourceData, } obj := make(map[string]interface{}) - labelsProp, err := expandVertexAIFeaturestoreEntitytypeFeatureLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandVertexAIFeaturestoreEntitytypeFeatureDescription(d.Get("description"), d, config) if err != nil { return err @@ -130,6 +146,12 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureCreate(d *schema.ResourceData, } else if v, ok := d.GetOkExists("value_type"); !tpgresource.IsEmptyValue(reflect.ValueOf(valueTypeProp)) && (ok || !reflect.DeepEqual(v, valueTypeProp)) { obj["valueType"] = valueTypeProp } + labelsProp, err := expandVertexAIFeaturestoreEntitytypeFeatureEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceVertexAIFeaturestoreEntitytypeFeatureEncoder(d, meta, obj) if err != nil { @@ -248,6 +270,12 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureRead(d *schema.ResourceData, m if err := d.Set("value_type", flattenVertexAIFeaturestoreEntitytypeFeatureValueType(res["valueType"], d, config)); err != nil { return fmt.Errorf("Error reading FeaturestoreEntitytypeFeature: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIFeaturestoreEntitytypeFeatureTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FeaturestoreEntitytypeFeature: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIFeaturestoreEntitytypeFeatureEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading FeaturestoreEntitytypeFeature: %s", err) + } return nil } @@ -262,18 +290,18 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureUpdate(d *schema.ResourceData, billingProject := "" obj := make(map[string]interface{}) - labelsProp, err := expandVertexAIFeaturestoreEntitytypeFeatureLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := 
d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } descriptionProp, err := expandVertexAIFeaturestoreEntitytypeFeatureDescription(d.Get("description"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } + labelsProp, err := expandVertexAIFeaturestoreEntitytypeFeatureEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceVertexAIFeaturestoreEntitytypeFeatureEncoder(d, meta, obj) if err != nil { @@ -288,13 +316,13 @@ func resourceVertexAIFeaturestoreEntitytypeFeatureUpdate(d *schema.ResourceData, log.Printf("[DEBUG] Updating FeaturestoreEntitytypeFeature %q: %#v", d.Id(), obj) updateMask := []string{} - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("description") { updateMask = append(updateMask, "description") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -417,7 +445,18 @@ func flattenVertexAIFeaturestoreEntitytypeFeatureUpdateTime(v interface{}, d *sc } func flattenVertexAIFeaturestoreEntitytypeFeatureLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIFeaturestoreEntitytypeFeatureDescription(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -428,15 +467,23 @@ func flattenVertexAIFeaturestoreEntitytypeFeatureValueType(v interface{}, d *sch return v } -func expandVertexAIFeaturestoreEntitytypeFeatureLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func flattenVertexAIFeaturestoreEntitytypeFeatureTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { if v == nil { - return map[string]string{}, nil + return v } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } } - return m, nil + + return transformed +} + +func flattenVertexAIFeaturestoreEntitytypeFeatureEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v } func expandVertexAIFeaturestoreEntitytypeFeatureDescription(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { @@ -447,6 +494,17 @@ func expandVertexAIFeaturestoreEntitytypeFeatureValueType(v interface{}, d tpgre return v, nil } +func 
expandVertexAIFeaturestoreEntitytypeFeatureEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceVertexAIFeaturestoreEntitytypeFeatureEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { if v, ok := d.GetOk("entitytype"); ok { re := regexp.MustCompile("^projects/(.+)/locations/(.+)/featurestores/(.+)/entityTypes/(.+)$") diff --git a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature_generated_test.go b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature_generated_test.go index fccf816416b..f42f8890925 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_feature_generated_test.go @@ -49,7 +49,7 @@ func TestAccVertexAIFeaturestoreEntitytypeFeature_vertexAiFeaturestoreEntitytype ResourceName: "google_vertex_ai_featurestore_entitytype_feature.feature", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "etag", "entitytype"}, + ImportStateVerifyIgnore: []string{"name", "etag", "entitytype", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_generated_test.go b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_generated_test.go index e267402092f..b2ba3ebb352 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore_entitytype_generated_test.go @@ -53,7 +53,7 @@ func TestAccVertexAIFeaturestoreEntitytype_vertexAiFeaturestoreEntitytypeExample ResourceName: "google_vertex_ai_featurestore_entitytype.entity", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "etag", "featurestore"}, + ImportStateVerifyIgnore: []string{"name", "etag", "featurestore", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_featurestore_generated_test.go b/google/services/vertexai/resource_vertex_ai_featurestore_generated_test.go index df5c0d7dfc8..b87d58ae6aa 100644 --- a/google/services/vertexai/resource_vertex_ai_featurestore_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_featurestore_generated_test.go @@ -53,7 +53,7 @@ func TestAccVertexAIFeaturestore_vertexAiFeaturestoreExample(t *testing.T) { ResourceName: "google_vertex_ai_featurestore.featurestore", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "etag", "region", "force_destroy"}, + ImportStateVerifyIgnore: []string{"name", "etag", "region", "force_destroy", "labels", "terraform_labels"}, }, }, }) @@ -100,7 +100,7 @@ func TestAccVertexAIFeaturestore_vertexAiFeaturestoreScalingExample(t *testing.T ResourceName: "google_vertex_ai_featurestore.featurestore", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"name", "etag", "region", "force_destroy"}, + ImportStateVerifyIgnore: []string{"name", "etag", "region", "force_destroy", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_index.go 
b/google/services/vertexai/resource_vertex_ai_index.go index 5ab6caed388..fe4c445682a 100644 --- a/google/services/vertexai/resource_vertex_ai_index.go +++ b/google/services/vertexai/resource_vertex_ai_index.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceVertexAIIndex() *schema.Resource { Delete: schema.DefaultTimeout(180 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -68,10 +74,13 @@ func ResourceVertexAIIndex() *schema.Resource { Default: "BATCH_UPDATE", }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels with user-defined metadata to organize your Indexes.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `The labels with user-defined metadata to organize your Indexes. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "metadata": { Type: schema.TypeList, @@ -229,6 +238,12 @@ then existing content of the Index will be replaced by the data from the content }, }, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -263,6 +278,13 @@ then existing content of the Index will be replaced by the data from the content Computed: true, Description: `The resource name of the Index.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -305,18 +327,18 @@ func resourceVertexAIIndexCreate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(metadataProp)) && (ok || !reflect.DeepEqual(v, metadataProp)) { obj["metadata"] = metadataProp } - labelsProp, err := expandVertexAIIndexLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } indexUpdateMethodProp, err := expandVertexAIIndexIndexUpdateMethod(d.Get("index_update_method"), d, config) if err != nil { return err } else if v, ok := d.GetOkExists("index_update_method"); !tpgresource.IsEmptyValue(reflect.ValueOf(indexUpdateMethodProp)) && (ok || !reflect.DeepEqual(v, indexUpdateMethodProp)) { obj["indexUpdateMethod"] = indexUpdateMethodProp } + labelsProp, err := expandVertexAIIndexEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && 
(ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{region}}/indexes") if err != nil { @@ -459,6 +481,12 @@ func resourceVertexAIIndexRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("index_update_method", flattenVertexAIIndexIndexUpdateMethod(res["indexUpdateMethod"], d, config)); err != nil { return fmt.Errorf("Error reading Index: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIIndexTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Index: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIIndexEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Index: %s", err) + } return nil } @@ -497,10 +525,10 @@ func resourceVertexAIIndexUpdate(d *schema.ResourceData, meta interface{}) error } else if v, ok := d.GetOkExists("metadata"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, metadataProp)) { obj["metadata"] = metadataProp } - labelsProp, err := expandVertexAIIndexLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAIIndexEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -524,7 +552,7 @@ func resourceVertexAIIndexUpdate(d *schema.ResourceData, meta interface{}) error updateMask = append(updateMask, "metadata") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -644,10 +672,10 @@ func resourceVertexAIIndexDelete(d *schema.ResourceData, meta interface{}) error func resourceVertexAIIndexImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/indexes/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/indexes/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -877,7 +905,18 @@ func flattenVertexAIIndexDeployedIndexesDeployedIndexId(v interface{}, d *schema } func flattenVertexAIIndexLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIIndexCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -928,6 +967,25 @@ func flattenVertexAIIndexIndexUpdateMethod(v interface{}, d *schema.ResourceData return v } +func flattenVertexAIIndexTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := 
d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenVertexAIIndexEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandVertexAIIndexDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -1126,7 +1184,11 @@ func expandVertexAIIndexMetadataConfigAlgorithmConfigBruteForceConfig(v interfac return transformed, nil } -func expandVertexAIIndexLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandVertexAIIndexIndexUpdateMethod(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandVertexAIIndexEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -1136,7 +1198,3 @@ func expandVertexAIIndexLabels(v interface{}, d tpgresource.TerraformResourceDat } return m, nil } - -func expandVertexAIIndexIndexUpdateMethod(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/vertexai/resource_vertex_ai_index_endpoint.go b/google/services/vertexai/resource_vertex_ai_index_endpoint.go index fcbeb7a10fd..5781966669c 100644 --- a/google/services/vertexai/resource_vertex_ai_index_endpoint.go +++ b/google/services/vertexai/resource_vertex_ai_index_endpoint.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceVertexAIIndexEndpoint() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -59,10 +65,13 @@ func ResourceVertexAIIndexEndpoint() *schema.Resource { Description: `The description of the Index.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels with user-defined metadata to organize your Indexes.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `The labels with user-defined metadata to organize your Indexes. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "network": { Type: schema.TypeString, @@ -90,6 +99,12 @@ Where '{project}' is a project number, as in '12345', and '{network}' is network Computed: true, Description: `The timestamp of when the Index was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "etag": { Type: schema.TypeString, Computed: true, @@ -105,6 +120,13 @@ Where '{project}' is a project number, as in '12345', and '{network}' is network Computed: true, Description: `If publicEndpointEnabled is true, this field will be populated with the domain name to use for this index endpoint.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -141,12 +163,6 @@ func resourceVertexAIIndexEndpointCreate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIIndexEndpointLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } networkProp, err := expandVertexAIIndexEndpointNetwork(d.Get("network"), d, config) if err != nil { return err @@ -159,6 +175,12 @@ func resourceVertexAIIndexEndpointCreate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("public_endpoint_enabled"); !tpgresource.IsEmptyValue(reflect.ValueOf(publicEndpointEnabledProp)) && (ok || !reflect.DeepEqual(v, publicEndpointEnabledProp)) { obj["publicEndpointEnabled"] = publicEndpointEnabledProp } + labelsProp, err := expandVertexAIIndexEndpointEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } url, err := tpgresource.ReplaceVars(d, config, "{{VertexAIBasePath}}projects/{{project}}/locations/{{region}}/indexEndpoints") if err != nil { @@ -292,6 +314,12 @@ func resourceVertexAIIndexEndpointRead(d *schema.ResourceData, meta interface{}) if err := d.Set("public_endpoint_domain_name", flattenVertexAIIndexEndpointPublicEndpointDomainName(res["publicEndpointDomainName"], d, config)); err != nil { return fmt.Errorf("Error reading IndexEndpoint: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAIIndexEndpointTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading IndexEndpoint: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAIIndexEndpointEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading 
IndexEndpoint: %s", err) + } return nil } @@ -324,10 +352,10 @@ func resourceVertexAIIndexEndpointUpdate(d *schema.ResourceData, meta interface{ } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAIIndexEndpointLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAIIndexEndpointEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -347,7 +375,7 @@ func resourceVertexAIIndexEndpointUpdate(d *schema.ResourceData, meta interface{ updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -437,10 +465,10 @@ func resourceVertexAIIndexEndpointDelete(d *schema.ResourceData, meta interface{ func resourceVertexAIIndexEndpointImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/indexEndpoints/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/indexEndpoints/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } @@ -471,7 +499,18 @@ func flattenVertexAIIndexEndpointDescription(v interface{}, d *schema.ResourceDa } func flattenVertexAIIndexEndpointLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenVertexAIIndexEndpointCreateTime(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -490,6 +529,25 @@ func flattenVertexAIIndexEndpointPublicEndpointDomainName(v interface{}, d *sche return v } +func flattenVertexAIIndexEndpointTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenVertexAIIndexEndpointEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandVertexAIIndexEndpointDisplayName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -498,7 +556,15 @@ func expandVertexAIIndexEndpointDescription(v interface{}, d tpgresource.Terrafo return v, nil } -func expandVertexAIIndexEndpointLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) 
{ +func expandVertexAIIndexEndpointNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandVertexAIIndexEndpointPublicEndpointEnabled(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { + return v, nil +} + +func expandVertexAIIndexEndpointEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } @@ -508,11 +574,3 @@ func expandVertexAIIndexEndpointLabels(v interface{}, d tpgresource.TerraformRes } return m, nil } - -func expandVertexAIIndexEndpointNetwork(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} - -func expandVertexAIIndexEndpointPublicEndpointEnabled(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { - return v, nil -} diff --git a/google/services/vertexai/resource_vertex_ai_index_endpoint_generated_test.go b/google/services/vertexai/resource_vertex_ai_index_endpoint_generated_test.go index 15b68dab431..1b5c23033e4 100644 --- a/google/services/vertexai/resource_vertex_ai_index_endpoint_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_index_endpoint_generated_test.go @@ -34,7 +34,6 @@ func TestAccVertexAIIndexEndpoint_vertexAiIndexEndpointExample(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "vertex-ai-index-endpoint"), "random_suffix": acctest.RandString(t, 10), } @@ -50,7 +49,7 @@ func TestAccVertexAIIndexEndpoint_vertexAiIndexEndpointExample(t *testing.T) { ResourceName: "google_vertex_ai_index_endpoint.index_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "public_endpoint_enabled", "region"}, + ImportStateVerifyIgnore: []string{"etag", "public_endpoint_enabled", "region", "labels", "terraform_labels"}, }, }, }) @@ -65,14 +64,14 @@ resource "google_vertex_ai_index_endpoint" "index_endpoint" { labels = { label-one = "value-one" } - network = "projects/${data.google_project.project.number}/global/networks/${data.google_compute_network.vertex_network.name}" + network = "projects/${data.google_project.project.number}/global/networks/${google_compute_network.vertex_network.name}" depends_on = [ google_service_networking_connection.vertex_vpc_connection ] } resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.vertex_range.name] } @@ -82,11 +81,11 @@ resource "google_compute_global_address" "vertex_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 24 - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id } -data "google_compute_network" "vertex_network" { - name = "%{network_name}" +resource "google_compute_network" "vertex_network" { + name = "tf-test-network-name%{random_suffix}" } data "google_project" "project" {} @@ -113,7 +112,7 @@ func TestAccVertexAIIndexEndpoint_vertexAiIndexEndpointWithPublicEndpointExample ResourceName: "google_vertex_ai_index_endpoint.index_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: 
[]string{"etag", "public_endpoint_enabled", "region"}, + ImportStateVerifyIgnore: []string{"etag", "public_endpoint_enabled", "region", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_index_endpoint_test.go b/google/services/vertexai/resource_vertex_ai_index_endpoint_test.go index c39cc06f10f..7aa5942160a 100644 --- a/google/services/vertexai/resource_vertex_ai_index_endpoint_test.go +++ b/google/services/vertexai/resource_vertex_ai_index_endpoint_test.go @@ -14,7 +14,7 @@ func TestAccVertexAIIndexEndpoint_updated(t *testing.T) { t.Parallel() context := map[string]interface{}{ - "network_name": acctest.BootstrapSharedTestNetwork(t, "vertex-ai-index-endpoint-update"), + "network_name": acctest.BootstrapSharedServiceNetworkingConnection(t, "vertex-ai-index-endpoint-update-1"), "random_suffix": acctest.RandString(t, 10), } @@ -30,7 +30,7 @@ func TestAccVertexAIIndexEndpoint_updated(t *testing.T) { ResourceName: "google_vertex_ai_index_endpoint.index_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region"}, + ImportStateVerifyIgnore: []string{"etag", "region", "labels", "terraform_labels"}, }, { Config: testAccVertexAIIndexEndpoint_updated(context), @@ -39,7 +39,7 @@ func TestAccVertexAIIndexEndpoint_updated(t *testing.T) { ResourceName: "google_vertex_ai_index_endpoint.index_endpoint", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region"}, + ImportStateVerifyIgnore: []string{"etag", "region", "labels", "terraform_labels"}, }, }, }) @@ -55,15 +55,8 @@ resource "google_vertex_ai_index_endpoint" "index_endpoint" { label-one = "value-one" } network = "projects/${data.google_project.project.number}/global/networks/${data.google_compute_network.vertex_network.name}" - depends_on = [ - google_service_networking_connection.vertex_vpc_connection - ] -} -resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.vertex_range.name] } + resource "google_compute_global_address" "vertex_range" { name = "tf-test-address-name%{random_suffix}" purpose = "VPC_PEERING" @@ -89,22 +82,8 @@ resource "google_vertex_ai_index_endpoint" "index_endpoint" { label-two = "value-two" } network = "projects/${data.google_project.project.number}/global/networks/${data.google_compute_network.vertex_network.name}" - depends_on = [ - google_service_networking_connection.vertex_vpc_connection - ] -} -resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id - service = "servicenetworking.googleapis.com" - reserved_peering_ranges = [google_compute_global_address.vertex_range.name] -} -resource "google_compute_global_address" "vertex_range" { - name = "tf-test-address-name%{random_suffix}" - purpose = "VPC_PEERING" - address_type = "INTERNAL" - prefix_length = 24 - network = data.google_compute_network.vertex_network.id } + data "google_compute_network" "vertex_network" { name = "%{network_name}" } diff --git a/google/services/vertexai/resource_vertex_ai_index_generated_test.go b/google/services/vertexai/resource_vertex_ai_index_generated_test.go index 91c5cb56e28..df64d86b100 100644 --- a/google/services/vertexai/resource_vertex_ai_index_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_index_generated_test.go @@ -51,7 
+51,7 @@ func TestAccVertexAIIndex_vertexAiIndexExample(t *testing.T) { ResourceName: "google_vertex_ai_index.index", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite"}, + ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite", "labels", "terraform_labels"}, }, }, }) @@ -123,7 +123,7 @@ func TestAccVertexAIIndex_vertexAiIndexStreamingExample(t *testing.T) { ResourceName: "google_vertex_ai_index.index", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite"}, + ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_index_test.go b/google/services/vertexai/resource_vertex_ai_index_test.go index 4fc60d0a9bf..7cdcfe5bd13 100644 --- a/google/services/vertexai/resource_vertex_ai_index_test.go +++ b/google/services/vertexai/resource_vertex_ai_index_test.go @@ -36,7 +36,7 @@ func TestAccVertexAIIndex_updated(t *testing.T) { ResourceName: "google_vertex_ai_index.index", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite"}, + ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite", "labels", "terraform_labels"}, }, { Config: testAccVertexAIIndex_updated(context), @@ -45,7 +45,7 @@ func TestAccVertexAIIndex_updated(t *testing.T) { ResourceName: "google_vertex_ai_index.index", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite"}, + ImportStateVerifyIgnore: []string{"etag", "region", "metadata.0.contents_delta_uri", "metadata.0.is_complete_overwrite", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_tensorboard.go b/google/services/vertexai/resource_vertex_ai_tensorboard.go index dba3bc96094..fb58ff237d5 100644 --- a/google/services/vertexai/resource_vertex_ai_tensorboard.go +++ b/google/services/vertexai/resource_vertex_ai_tensorboard.go @@ -24,6 +24,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-google/google/tpgresource" @@ -47,6 +48,11 @@ func ResourceVertexAITensorboard() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "display_name": { Type: schema.TypeString, @@ -77,10 +83,14 @@ Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/ }, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `The labels with user-defined metadata to organize your Tensorboards.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `The labels with user-defined metadata to organize your Tensorboards. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "region": { Type: schema.TypeString, @@ -99,6 +109,12 @@ Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/ Computed: true, Description: `The timestamp of when the Tensorboard was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "name": { Type: schema.TypeString, Computed: true, @@ -109,6 +125,13 @@ Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/ Computed: true, Description: `The number of Runs stored in this Tensorboard.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -151,10 +174,10 @@ func resourceVertexAITensorboardCreate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("encryption_spec"); !tpgresource.IsEmptyValue(reflect.ValueOf(encryptionSpecProp)) && (ok || !reflect.DeepEqual(v, encryptionSpecProp)) { obj["encryptionSpec"] = encryptionSpecProp } - labelsProp, err := expandVertexAITensorboardLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAITensorboardEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { obj["labels"] = labelsProp } @@ -293,6 +316,12 @@ func resourceVertexAITensorboardRead(d *schema.ResourceData, meta interface{}) e if err := d.Set("labels", flattenVertexAITensorboardLabels(res["labels"], d, config)); err != nil { return fmt.Errorf("Error reading Tensorboard: %s", err) } + if err := d.Set("terraform_labels", flattenVertexAITensorboardTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Tensorboard: %s", err) + } + if err := d.Set("effective_labels", flattenVertexAITensorboardEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Tensorboard: %s", err) + } return nil } @@ -325,10 +354,10 @@ func resourceVertexAITensorboardUpdate(d *schema.ResourceData, meta interface{}) } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandVertexAITensorboardLabels(d.Get("labels"), d, config) + labelsProp, err := expandVertexAITensorboardEffectiveLabels(d.Get("effective_labels"), d, config) if err != nil { return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, 
labelsProp)) { obj["labels"] = labelsProp } @@ -348,7 +377,7 @@ func resourceVertexAITensorboardUpdate(d *schema.ResourceData, meta interface{}) updateMask = append(updateMask, "description") } - if d.HasChange("labels") { + if d.HasChange("effective_labels") { updateMask = append(updateMask, "labels") } // updateMask is a URL parameter but not present in the schema, so ReplaceVars @@ -514,6 +543,36 @@ func flattenVertexAITensorboardUpdateTime(v interface{}, d *schema.ResourceData, } func flattenVertexAITensorboardLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenVertexAITensorboardTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenVertexAITensorboardEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { return v } @@ -548,7 +607,7 @@ func expandVertexAITensorboardEncryptionSpecKmsKeyName(v interface{}, d tpgresou return v, nil } -func expandVertexAITensorboardLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { +func expandVertexAITensorboardEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { if v == nil { return map[string]string{}, nil } diff --git a/google/services/vertexai/resource_vertex_ai_tensorboard_generated_test.go b/google/services/vertexai/resource_vertex_ai_tensorboard_generated_test.go index 7075ee64a71..7ad39a9097c 100644 --- a/google/services/vertexai/resource_vertex_ai_tensorboard_generated_test.go +++ b/google/services/vertexai/resource_vertex_ai_tensorboard_generated_test.go @@ -49,7 +49,7 @@ func TestAccVertexAITensorboard_vertexAiTensorboardExample(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, }, }) @@ -89,7 +89,7 @@ func TestAccVertexAITensorboard_vertexAiTensorboardFullExample(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vertexai/resource_vertex_ai_tensorboard_test.go b/google/services/vertexai/resource_vertex_ai_tensorboard_test.go index 4ad33e80f8a..5648d9074b4 100644 --- a/google/services/vertexai/resource_vertex_ai_tensorboard_test.go +++ b/google/services/vertexai/resource_vertex_ai_tensorboard_test.go @@ -27,7 +27,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", 
"labels", "terraform_labels"}, }, { Config: testAccVertexAITensorboard_Update(random_suffix+"new", random_suffix, random_suffix, random_suffix), @@ -36,7 +36,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, { Config: testAccVertexAITensorboard_Update(random_suffix+"new", random_suffix+"new", random_suffix, random_suffix), @@ -45,7 +45,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, { Config: testAccVertexAITensorboard_Update(random_suffix+"new", random_suffix+"new", random_suffix+"new", random_suffix), @@ -54,7 +54,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, { Config: testAccVertexAITensorboard_Update(random_suffix+"new", random_suffix+"new", random_suffix+"new", random_suffix+"new"), @@ -63,7 +63,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, { Config: testAccVertexAITensorboard_Update(random_suffix, random_suffix, random_suffix, random_suffix), @@ -72,7 +72,7 @@ func TestAccVertexAITensorboard_Update(t *testing.T) { ResourceName: "google_vertex_ai_tensorboard.tensorboard", ImportState: true, ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"region", "project"}, + ImportStateVerifyIgnore: []string{"region", "project", "labels", "terraform_labels"}, }, }, }) diff --git a/google/services/vpcaccess/data_source_vpc_access_connector.go b/google/services/vpcaccess/data_source_vpc_access_connector.go index 4898cf21cd0..bdb5658e1ac 100644 --- a/google/services/vpcaccess/data_source_vpc_access_connector.go +++ b/google/services/vpcaccess/data_source_vpc_access_connector.go @@ -32,5 +32,13 @@ func dataSourceVPCAccessConnectorRead(d *schema.ResourceData, meta interface{}) d.SetId(id) - return resourceVPCAccessConnectorRead(d, meta) + err = resourceVPCAccessConnectorRead(d, meta) + if err != nil { + return err + } + + if d.Id() == "" { + return fmt.Errorf("%s not found", id) + } + return nil } diff --git a/google/services/vpcaccess/resource_vpc_access_connector.go b/google/services/vpcaccess/resource_vpc_access_connector.go index 1aa513bf1e6..08e8b1959f1 100644 --- a/google/services/vpcaccess/resource_vpc_access_connector.go +++ b/google/services/vpcaccess/resource_vpc_access_connector.go @@ -23,6 +23,7 @@ import ( "reflect" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -45,6 +46,10 @@ func ResourceVPCAccessConnector() *schema.Resource { Delete: schema.DefaultTimeout(20 * time.Minute), }, + 
CustomizeDiff: customdiff.All( + tpgresource.DefaultProviderProject, + ), + Schema: map[string]*schema.Schema{ "name": { Type: schema.TypeString, @@ -461,10 +466,10 @@ func resourceVPCAccessConnectorDelete(d *schema.ResourceData, meta interface{}) func resourceVPCAccessConnectorImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { config := meta.(*transport_tpg.Config) if err := tpgresource.ParseImportId([]string{ - "projects/(?P[^/]+)/locations/(?P[^/]+)/connectors/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)/(?P[^/]+)", - "(?P[^/]+)", + "^projects/(?P[^/]+)/locations/(?P[^/]+)/connectors/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)/(?P[^/]+)$", + "^(?P[^/]+)$", }, d, config); err != nil { return nil, err } diff --git a/google/services/workflows/resource_workflows_workflow.go b/google/services/workflows/resource_workflows_workflow.go index f6da1e0273c..6e826655744 100644 --- a/google/services/workflows/resource_workflows_workflow.go +++ b/google/services/workflows/resource_workflows_workflow.go @@ -25,6 +25,7 @@ import ( "strings" "time" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -54,6 +55,10 @@ func ResourceWorkflowsWorkflow() *schema.Resource { Version: 0, }, }, + CustomizeDiff: customdiff.All( + tpgresource.SetLabelsDiff, + tpgresource.DefaultProviderProject, + ), Schema: map[string]*schema.Schema{ "crypto_key_name": { @@ -70,10 +75,14 @@ Format: projects/{project}/locations/{location}/keyRings/{keyRing}/cryptoKeys/{c Description: `Description of the workflow provided by the user. Must be at most 1000 unicode characters long.`, }, "labels": { - Type: schema.TypeMap, - Optional: true, - Description: `A set of key/value label pairs to assign to this Workflow.`, - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeMap, + Optional: true, + Description: `A set of key/value label pairs to assign to this Workflow. + + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+Please refer to the field 'effective_labels' for all of the labels present on the resource.`, + Elem: &schema.Schema{Type: schema.TypeString}, }, "name": { Type: schema.TypeString, @@ -111,6 +120,12 @@ Modifying this field for an existing workflow results in a new workflow revision Computed: true, Description: `The timestamp of when the workflow was created in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits.`, }, + "effective_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "revision_id": { Type: schema.TypeString, Computed: true, @@ -121,6 +136,13 @@ Modifying this field for an existing workflow results in a new workflow revision Computed: true, Description: `State of the workflow deployment.`, }, + "terraform_labels": { + Type: schema.TypeMap, + Computed: true, + Description: `The combination of labels configured directly on the resource + and default labels configured on the provider.`, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "update_time": { Type: schema.TypeString, Computed: true, @@ -164,12 +186,6 @@ func resourceWorkflowsWorkflowCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(descriptionProp)) && (ok || !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandWorkflowsWorkflowLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } serviceAccountProp, err := expandWorkflowsWorkflowServiceAccount(d.Get("service_account"), d, config) if err != nil { return err @@ -188,6 +204,12 @@ func resourceWorkflowsWorkflowCreate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("crypto_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(cryptoKeyNameProp)) && (ok || !reflect.DeepEqual(v, cryptoKeyNameProp)) { obj["cryptoKeyName"] = cryptoKeyNameProp } + labelsProp, err := expandWorkflowsWorkflowEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(labelsProp)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceWorkflowsWorkflowEncoder(d, meta, obj) if err != nil { @@ -332,6 +354,12 @@ func resourceWorkflowsWorkflowRead(d *schema.ResourceData, meta interface{}) err if err := d.Set("crypto_key_name", flattenWorkflowsWorkflowCryptoKeyName(res["cryptoKeyName"], d, config)); err != nil { return fmt.Errorf("Error reading Workflow: %s", err) } + if err := d.Set("terraform_labels", flattenWorkflowsWorkflowTerraformLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Workflow: %s", err) + } + if err := d.Set("effective_labels", flattenWorkflowsWorkflowEffectiveLabels(res["labels"], d, config)); err != nil { + return fmt.Errorf("Error reading Workflow: %s", err) + } return nil } @@ -358,12 +386,6 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("description"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok 
|| !reflect.DeepEqual(v, descriptionProp)) { obj["description"] = descriptionProp } - labelsProp, err := expandWorkflowsWorkflowLabels(d.Get("labels"), d, config) - if err != nil { - return err - } else if v, ok := d.GetOkExists("labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { - obj["labels"] = labelsProp - } serviceAccountProp, err := expandWorkflowsWorkflowServiceAccount(d.Get("service_account"), d, config) if err != nil { return err @@ -382,6 +404,12 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e } else if v, ok := d.GetOkExists("crypto_key_name"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, cryptoKeyNameProp)) { obj["cryptoKeyName"] = cryptoKeyNameProp } + labelsProp, err := expandWorkflowsWorkflowEffectiveLabels(d.Get("effective_labels"), d, config) + if err != nil { + return err + } else if v, ok := d.GetOkExists("effective_labels"); !tpgresource.IsEmptyValue(reflect.ValueOf(v)) && (ok || !reflect.DeepEqual(v, labelsProp)) { + obj["labels"] = labelsProp + } obj, err = resourceWorkflowsWorkflowEncoder(d, meta, obj) if err != nil { @@ -400,10 +428,6 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e updateMask = append(updateMask, "description") } - if d.HasChange("labels") { - updateMask = append(updateMask, "labels") - } - if d.HasChange("service_account") { updateMask = append(updateMask, "serviceAccount") } @@ -415,6 +439,10 @@ func resourceWorkflowsWorkflowUpdate(d *schema.ResourceData, meta interface{}) e if d.HasChange("crypto_key_name") { updateMask = append(updateMask, "cryptoKeyName") } + + if d.HasChange("effective_labels") { + updateMask = append(updateMask, "labels") + } // updateMask is a URL parameter but not present in the schema, so ReplaceVars // won't set it url, err = transport_tpg.AddQueryParams(url, map[string]string{"updateMask": strings.Join(updateMask, ",")}) @@ -531,7 +559,18 @@ func flattenWorkflowsWorkflowState(v interface{}, d *schema.ResourceData, config } func flattenWorkflowsWorkflowLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { - return v + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed } func flattenWorkflowsWorkflowServiceAccount(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { @@ -550,6 +589,25 @@ func flattenWorkflowsWorkflowCryptoKeyName(v interface{}, d *schema.ResourceData return v } +func flattenWorkflowsWorkflowTerraformLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + if v == nil { + return v + } + + transformed := make(map[string]interface{}) + if l, ok := d.GetOkExists("terraform_labels"); ok { + for k := range l.(map[string]interface{}) { + transformed[k] = v.(map[string]interface{})[k] + } + } + + return transformed +} + +func flattenWorkflowsWorkflowEffectiveLabels(v interface{}, d *schema.ResourceData, config *transport_tpg.Config) interface{} { + return v +} + func expandWorkflowsWorkflowName(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -558,17 +616,6 @@ func expandWorkflowsWorkflowDescription(v interface{}, d tpgresource.TerraformRe return v, nil } -func expandWorkflowsWorkflowLabels(v 
interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { - if v == nil { - return map[string]string{}, nil - } - m := make(map[string]string) - for k, val := range v.(map[string]interface{}) { - m[k] = val.(string) - } - return m, nil -} - func expandWorkflowsWorkflowServiceAccount(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (interface{}, error) { return v, nil } @@ -581,6 +628,17 @@ func expandWorkflowsWorkflowCryptoKeyName(v interface{}, d tpgresource.Terraform return v, nil } +func expandWorkflowsWorkflowEffectiveLabels(v interface{}, d tpgresource.TerraformResourceData, config *transport_tpg.Config) (map[string]string, error) { + if v == nil { + return map[string]string{}, nil + } + m := make(map[string]string) + for k, val := range v.(map[string]interface{}) { + m[k] = val.(string) + } + return m, nil +} + func resourceWorkflowsWorkflowEncoder(d *schema.ResourceData, meta interface{}, obj map[string]interface{}) (map[string]interface{}, error) { var ResName string if v, ok := d.GetOk("name"); ok { diff --git a/google/services/workflows/resource_workflows_workflow_generated_test.go b/google/services/workflows/resource_workflows_workflow_generated_test.go index a852e1f4a8e..68c3dc1e38c 100644 --- a/google/services/workflows/resource_workflows_workflow_generated_test.go +++ b/google/services/workflows/resource_workflows_workflow_generated_test.go @@ -61,6 +61,9 @@ resource "google_workflows_workflow" "example" { region = "us-central1" description = "Magic" service_account = google_service_account.test_account.id + labels = { + env = "test" + } source_contents = <<-EOF # This is a sample workflow. You can replace it with your source code. # diff --git a/google/sweeper/gcp_sweeper_test.go b/google/sweeper/gcp_sweeper_test.go index dc50a90d4dd..4484dc93fbf 100644 --- a/google/sweeper/gcp_sweeper_test.go +++ b/google/sweeper/gcp_sweeper_test.go @@ -33,7 +33,6 @@ import ( _ "github.com/hashicorp/terraform-provider-google/google/services/cloudfunctions2" _ "github.com/hashicorp/terraform-provider-google/google/services/cloudidentity" _ "github.com/hashicorp/terraform-provider-google/google/services/cloudids" - _ "github.com/hashicorp/terraform-provider-google/google/services/cloudiot" _ "github.com/hashicorp/terraform-provider-google/google/services/cloudrun" _ "github.com/hashicorp/terraform-provider-google/google/services/cloudrunv2" _ "github.com/hashicorp/terraform-provider-google/google/services/cloudscheduler" @@ -62,7 +61,6 @@ import ( _ "github.com/hashicorp/terraform-provider-google/google/services/essentialcontacts" _ "github.com/hashicorp/terraform-provider-google/google/services/filestore" _ "github.com/hashicorp/terraform-provider-google/google/services/firestore" - _ "github.com/hashicorp/terraform-provider-google/google/services/gameservices" _ "github.com/hashicorp/terraform-provider-google/google/services/gkebackup" _ "github.com/hashicorp/terraform-provider-google/google/services/gkehub" _ "github.com/hashicorp/terraform-provider-google/google/services/gkehub2" diff --git a/google/tpgdclresource/tpgtools_utils.go b/google/tpgdclresource/tpgtools_utils.go index 331044df025..e225da3f3b5 100644 --- a/google/tpgdclresource/tpgtools_utils.go +++ b/google/tpgdclresource/tpgtools_utils.go @@ -3,6 +3,7 @@ package tpgdclresource import ( + "context" "fmt" "log" @@ -26,3 +27,24 @@ func HandleNotFoundDCLError(err error, d *schema.ResourceData, resourceName stri return errwrap.Wrapf( 
fmt.Sprintf("Error when reading or editing %s: {{err}}", resourceName), err) } + +func ResourceContainerAwsNodePoolCustomizeDiffFunc(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + count := diff.Get("update_settings.#").(int) + if count < 1 { + return nil + } + + oMaxSurge, nMaxSurge := diff.GetChange("update_settings.0.surge_settings.0.max_surge") + oMaxUnavailable, nMaxUnavailable := diff.GetChange("update_settings.0.surge_settings.0.max_unavailable") + + // Server default of maxSurge = 1 and maxUnavailable = 0 is not returned + // Clear the diff if trying to resolve these specific values + if oMaxSurge == 0 && nMaxSurge == 1 && oMaxUnavailable == 0 && nMaxUnavailable == 0 { + err := diff.Clear("update_settings") + if err != nil { + return err + } + } + + return nil +} diff --git a/google/tpgresource/annotations.go b/google/tpgresource/annotations.go new file mode 100644 index 00000000000..ec286a7470c --- /dev/null +++ b/google/tpgresource/annotations.go @@ -0,0 +1,97 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package tpgresource + +import ( + "context" + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func SetAnnotationsDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + raw := d.Get("annotations") + if raw == nil { + return nil + } + + if d.Get("effective_annotations") == nil { + return fmt.Errorf("`effective_annotations` field is not present in the resource schema.") + } + + o, n := d.GetChange("annotations") + effectiveAnnotations := d.Get("effective_annotations").(map[string]interface{}) + + for k, v := range n.(map[string]interface{}) { + effectiveAnnotations[k] = v.(string) + } + + for k := range o.(map[string]interface{}) { + if _, ok := n.(map[string]interface{})[k]; !ok { + delete(effectiveAnnotations, k) + } + } + + if err := d.SetNew("effective_annotations", effectiveAnnotations); err != nil { + return fmt.Errorf("error setting new effective_annotations diff: %w", err) + } + + return nil +} + +func SetMetadataAnnotationsDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + l := d.Get("metadata").([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil + } + + raw := d.Get("metadata.0.annotations") + if raw == nil { + return nil + } + + if d.Get("metadata.0.effective_annotations") == nil { + return fmt.Errorf("`metadata.0.effective_annotations` field is not present in the resource schema.") + } + + o, n := d.GetChange("metadata.0.annotations") + effectiveAnnotations := d.Get("metadata.0.effective_annotations").(map[string]interface{}) + + for k, v := range n.(map[string]interface{}) { + effectiveAnnotations[k] = v.(string) + } + + for k := range o.(map[string]interface{}) { + if _, ok := n.(map[string]interface{})[k]; !ok { + delete(effectiveAnnotations, k) + } + } + + original := l[0].(map[string]interface{}) + original["effective_annotations"] = effectiveAnnotations + + if err := d.SetNew("metadata", []interface{}{original}); err != nil { + return fmt.Errorf("error setting new metadata diff: %w", err) + } + + return nil +} + +// Sets the "annotations" field with the value of the field "effective_annotations" for data sources. +// When reading data source, as the annotations field is unavailable in the configuration of the data source, +// the "annotations" field will be empty. With this funciton, the labels "annotations" will have all of annotations in the resource. 
+func SetDataSourceAnnotations(d *schema.ResourceData) error { + effectiveAnnotations := d.Get("effective_annotations") + if effectiveAnnotations == nil { + return nil + } + + if d.Get("annotations") == nil { + return fmt.Errorf("`annotations` field is not present in the resource schema.") + } + if err := d.Set("annotations", effectiveAnnotations); err != nil { + return fmt.Errorf("Error setting annotations in data source: %s", err) + } + + return nil +} diff --git a/google/tpgresource/field_helpers.go b/google/tpgresource/field_helpers.go index fa32fa7feb3..b3b0cace175 100644 --- a/google/tpgresource/field_helpers.go +++ b/google/tpgresource/field_helpers.go @@ -389,7 +389,8 @@ func GetRegionFromSchema(regionSchemaField, zoneSchemaField string, d TerraformR return GetResourceNameFromSelfLink(v.(string)), nil } if v, ok := d.GetOk(zoneSchemaField); ok && zoneSchemaField != "" { - return GetRegionFromZone(v.(string)), nil + zone := GetResourceNameFromSelfLink(v.(string)) + return GetRegionFromZone(zone), nil } if config.Region != "" { return config.Region, nil diff --git a/google/tpgresource/labels.go b/google/tpgresource/labels.go new file mode 100644 index 00000000000..f4de95baa12 --- /dev/null +++ b/google/tpgresource/labels.go @@ -0,0 +1,200 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 +package tpgresource + +import ( + "context" + "fmt" + "log" + "strings" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" +) + +// SetLabels is called in the READ method of the resources to set +// the field "labels" and "terraform_labels" in the state based on the labels field in the configuration. +// So the field "labels" and "terraform_labels" in the state will only have the user defined labels. +// param "labels" is all of labels returned from API read reqeust. +// param "lineage" is the terraform lineage of the field and could be "labels" or "terraform_labels". +func SetLabels(labels map[string]string, d *schema.ResourceData, lineage string) error { + transformed := make(map[string]interface{}) + + if v, ok := d.GetOk(lineage); ok { + if labels != nil { + for k := range v.(map[string]interface{}) { + transformed[k] = labels[k] + } + } + } + + return d.Set(lineage, transformed) +} + +// Sets the "labels" field and "terraform_labels" with the value of the field "effective_labels" for data sources. +// When reading data source, as the labels field is unavailable in the configuration of the data source, +// the "labels" field will be empty. With this funciton, the labels "field" will have all of labels in the resource. 
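SetLabels, shown above, is what keeps the `labels` and `terraform_labels` state non-authoritative: only keys present in the configuration (or provider defaults) are written back, while values come from the API response. A small standalone sketch of that filtering, with hypothetical sample data:

```go
package main

import "fmt"

// filterToConfiguredKeys mirrors the filtering SetLabels performs for the
// "labels" and "terraform_labels" lineages: keep only the keys the
// configuration declares, but take the values the API returned.
// The function and sample labels are illustrative, not provider code.
func filterToConfiguredKeys(apiLabels, configured map[string]string) map[string]string {
	out := make(map[string]string, len(configured))
	for k := range configured {
		out[k] = apiLabels[k]
	}
	return out
}

func main() {
	apiLabels := map[string]string{
		"env":        "prod",
		"goog-owned": "true", // added by another client or service
	}
	configured := map[string]string{"env": "prod"}

	fmt.Println(filterToConfiguredKeys(apiLabels, configured)) // map[env:prod]
}
```

The net effect is that server-added labels never show up as drift in `labels`; they remain visible only through `effective_labels`.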
+func SetDataSourceLabels(d *schema.ResourceData) error { + effectiveLabels := d.Get("effective_labels") + if effectiveLabels == nil { + return nil + } + + if d.Get("labels") == nil { + return fmt.Errorf("`labels` field is not present in the resource schema.") + } + if err := d.Set("labels", effectiveLabels); err != nil { + return fmt.Errorf("Error setting labels in data source: %s", err) + } + + if d.Get("terraform_labels") == nil { + return fmt.Errorf("`terraform_labels` field is not present in the resource schema.") + } + if err := d.Set("terraform_labels", effectiveLabels); err != nil { + return fmt.Errorf("Error setting terraform_labels in data source: %s", err) + } + + return nil +} + +func SetLabelsDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + raw := d.Get("labels") + if raw == nil { + return nil + } + + if d.Get("terraform_labels") == nil { + return fmt.Errorf("`terraform_labels` field is not present in the resource schema.") + } + + if d.Get("effective_labels") == nil { + return fmt.Errorf("`effective_labels` field is not present in the resource schema.") + } + + config := meta.(*transport_tpg.Config) + + // Merge provider default labels with the user defined labels in the resource to get terraform managed labels + terraformLabels := make(map[string]string) + for k, v := range config.DefaultLabels { + terraformLabels[k] = v + } + + labels := raw.(map[string]interface{}) + for k, v := range labels { + terraformLabels[k] = v.(string) + } + + if err := d.SetNew("terraform_labels", terraformLabels); err != nil { + return fmt.Errorf("error setting new terraform_labels diff: %w", err) + } + + o, n := d.GetChange("terraform_labels") + effectiveLabels := d.Get("effective_labels").(map[string]interface{}) + + for k, v := range n.(map[string]interface{}) { + effectiveLabels[k] = v.(string) + } + + for k := range o.(map[string]interface{}) { + if _, ok := n.(map[string]interface{})[k]; !ok { + delete(effectiveLabels, k) + } + } + + if err := d.SetNew("effective_labels", effectiveLabels); err != nil { + return fmt.Errorf("error setting new effective_labels diff: %w", err) + } + + return nil +} + +func SetMetadataLabelsDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + l := d.Get("metadata").([]interface{}) + if len(l) == 0 || l[0] == nil { + return nil + } + + raw := d.Get("metadata.0.labels") + if raw == nil { + return nil + } + + if d.Get("metadata.0.terraform_labels") == nil { + return fmt.Errorf("`metadata.0.terraform_labels` field is not present in the resource schema.") + } + + if d.Get("metadata.0.effective_labels") == nil { + return fmt.Errorf("`metadata.0.effective_labels` field is not present in the resource schema.") + } + + config := meta.(*transport_tpg.Config) + + // Merge provider default labels with the user defined labels in the resource to get terraform managed labels + terraformLabels := make(map[string]string) + for k, v := range config.DefaultLabels { + terraformLabels[k] = v + } + + labels := raw.(map[string]interface{}) + for k, v := range labels { + terraformLabels[k] = v.(string) + } + + original := l[0].(map[string]interface{}) + + original["terraform_labels"] = terraformLabels + if err := d.SetNew("metadata", []interface{}{original}); err != nil { + return fmt.Errorf("error setting new metadata diff: %w", err) + } + + o, n := d.GetChange("metadata.0.terraform_labels") + effectiveLabels := d.Get("metadata.0.effective_labels").(map[string]interface{}) + + for k, v := range n.(map[string]interface{}) { + 
effectiveLabels[k] = v.(string) + } + + for k := range o.(map[string]interface{}) { + if _, ok := n.(map[string]interface{})[k]; !ok { + delete(effectiveLabels, k) + } + } + + original["effective_labels"] = effectiveLabels + if err := d.SetNew("metadata", []interface{}{original}); err != nil { + return fmt.Errorf("error setting new metadata diff: %w", err) + } + + return nil +} + +// Upgrade the field "labels" in the state to exclude the labels with the labels prefix +// and the field "effective_labels" to have all of labels, including the labels with the labels prefix +func LabelsStateUpgrade(rawState map[string]interface{}, labesPrefix string) (map[string]interface{}, error) { + log.Printf("[DEBUG] Attributes before migration: %#v", rawState) + log.Printf("[DEBUG] Attributes before migration labels: %#v", rawState["labels"]) + log.Printf("[DEBUG] Attributes before migration effective_labels: %#v", rawState["effective_labels"]) + + if rawState["labels"] != nil { + rawLabels := rawState["labels"].(map[string]interface{}) + labels := make(map[string]interface{}) + effectiveLabels := make(map[string]interface{}) + + for k, v := range rawLabels { + effectiveLabels[k] = v + + if !strings.HasPrefix(k, labesPrefix) { + labels[k] = v + } + } + + rawState["labels"] = labels + rawState["effective_labels"] = effectiveLabels + } + + log.Printf("[DEBUG] Attributes after migration: %#v", rawState) + log.Printf("[DEBUG] Attributes after migration labels: %#v", rawState["labels"]) + log.Printf("[DEBUG] Attributes after migration effective_labels: %#v", rawState["effective_labels"]) + + return rawState, nil +} diff --git a/google/tpgresource/regional_utils.go b/google/tpgresource/regional_utils.go index 8328380f0b0..8962cff0ded 100644 --- a/google/tpgresource/regional_utils.go +++ b/google/tpgresource/regional_utils.go @@ -19,17 +19,25 @@ func IsZone(location string) bool { // - location argument in the resource config // - region argument in the resource config // - zone argument in the resource config +// - region argument in the provider config // - zone argument set in the provider config func GetLocation(d TerraformResourceData, config *transport_tpg.Config) (string, error) { if v, ok := d.GetOk("location"); ok { - return v.(string), nil + return GetResourceNameFromSelfLink(v.(string)), nil } else if v, isRegionalCluster := d.GetOk("region"); isRegionalCluster { - return v.(string), nil + return GetResourceNameFromSelfLink(v.(string)), nil } else { - // If region is not explicitly set, use "zone" (or fall back to the provider-level zone). - // For now, to avoid confusion, we require region to be set in the config to create a regional - // cluster rather than falling back to the provider-level region. 
- return GetZone(d, config) + if v, ok := d.GetOk("zone"); ok { + return GetResourceNameFromSelfLink(v.(string)), nil + } else { + if config.Region != "" { + return GetResourceNameFromSelfLink(config.Region), nil + } else if config.Zone != "" { + return GetResourceNameFromSelfLink(config.Zone), nil + } else { + return "", fmt.Errorf("Unable to determine location: region/zone not configured in resource/provider config") + } + } } } diff --git a/google/tpgresource/utils.go b/google/tpgresource/utils.go index c06f0801165..7957091f4e3 100644 --- a/google/tpgresource/utils.go +++ b/google/tpgresource/utils.go @@ -3,6 +3,7 @@ package tpgresource import ( + "context" "crypto/md5" "encoding/base64" "errors" @@ -20,6 +21,7 @@ import ( transport_tpg "github.com/hashicorp/terraform-provider-google/google/transport" "github.com/hashicorp/errwrap" + "github.com/hashicorp/go-cty/cty" fwDiags "github.com/hashicorp/terraform-plugin-framework/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -98,12 +100,49 @@ func GetProjectFromDiff(d *schema.ResourceDiff, config *transport_tpg.Config) (s if ok { return res.(string), nil } + if d.GetRawConfig().GetAttr("project") == cty.UnknownVal(cty.String) { + return res.(string), nil + } if config.Project != "" { return config.Project, nil } return "", fmt.Errorf("%s: required field is not set", "project") } +// getRegionFromDiff reads the "region" field from the given diff and falls +// back to the provider's value if not given. If the provider's value is not +// given, an error is returned. +func GetRegionFromDiff(d *schema.ResourceDiff, config *transport_tpg.Config) (string, error) { + res, ok := d.GetOk("region") + if ok { + return res.(string), nil + } + if d.GetRawConfig().GetAttr("region") == cty.UnknownVal(cty.String) { + return res.(string), nil + } + if config.Region != "" { + return config.Region, nil + } + return "", fmt.Errorf("%s: required field is not set", "region") +} + +// getZoneFromDiff reads the "zone" field from the given diff and falls +// back to the provider's value if not given. If the provider's value is not +// given, an error is returned. +func GetZoneFromDiff(d *schema.ResourceDiff, config *transport_tpg.Config) (string, error) { + res, ok := d.GetOk("zone") + if ok { + return res.(string), nil + } + if d.GetRawConfig().GetAttr("zone") == cty.UnknownVal(cty.String) { + return res.(string), nil + } + if config.Zone != "" { + return config.Zone, nil + } + return "", fmt.Errorf("%s: required field is not set", "zone") +} + func GetRouterLockName(region string, router string) string { return fmt.Sprintf("router/%s/%s", region, router) } @@ -167,6 +206,11 @@ func ExpandLabels(d TerraformResourceData) map[string]string { return ExpandStringMap(d, "labels") } +// ExpandEffectiveLabels pulls the value of "effective_labels" out of a TerraformResourceData as a map[string]string. +func ExpandEffectiveLabels(d TerraformResourceData) map[string]string { + return ExpandStringMap(d, "effective_labels") +} + // ExpandEnvironmentVariables pulls the value of "environment_variables" out of a schema.ResourceData as a map[string]string. 
func ExpandEnvironmentVariables(d *schema.ResourceData) map[string]string { return ExpandStringMap(d, "environment_variables") @@ -701,3 +745,57 @@ func GetContentMd5Hash(content []byte) string { } return base64.StdEncoding.EncodeToString(h.Sum(nil)) } + +func DefaultProviderProject(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + + config := meta.(*transport_tpg.Config) + + //project + if project := diff.Get("project"); project != nil { + project, err := GetProjectFromDiff(diff, config) + if err != nil { + return fmt.Errorf("Failed to retrieve project, pid: %s, err: %s", project, err) + } + err = diff.SetNew("project", project) + if err != nil { + return err + } + } + return nil +} + +func DefaultProviderRegion(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + + config := meta.(*transport_tpg.Config) + //region + if region := diff.Get("region"); region != nil { + region, err := GetRegionFromDiff(diff, config) + if err != nil { + return fmt.Errorf("Failed to retrieve region, pid: %s, err: %s", region, err) + } + err = diff.SetNew("region", region) + if err != nil { + return err + } + } + + return nil +} + +func DefaultProviderZone(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { + + config := meta.(*transport_tpg.Config) + // zone + if zone := diff.Get("zone"); zone != nil { + zone, err := GetZoneFromDiff(diff, config) + if err != nil { + return fmt.Errorf("Failed to retrieve zone, pid: %s, err: %s", zone, err) + } + err = diff.SetNew("zone", zone) + if err != nil { + return err + } + } + + return nil +} diff --git a/google/tpgresource/utils_test.go b/google/tpgresource/utils_test.go index 139c6b9f2b3..31b7694260a 100644 --- a/google/tpgresource/utils_test.go +++ b/google/tpgresource/utils_test.go @@ -241,11 +241,11 @@ func TestGetLocation(t *testing.T) { }, ExpectedLocation: "resource-location", }, - "does not shorten the location value when it is set as a self link in the resource config": { + "shortens the location value when it is set as a self link in the resource config": { ResourceConfig: map[string]interface{}{ "location": "https://www.googleapis.com/compute/v1/projects/my-project/locations/resource-location", }, - ExpectedLocation: "https://www.googleapis.com/compute/v1/projects/my-project/locations/resource-location", // No shortening takes place + ExpectedLocation: "resource-location", }, "returns the region value set in the resource config when location is not in the schema": { ResourceConfig: map[string]interface{}{ @@ -254,11 +254,11 @@ func TestGetLocation(t *testing.T) { }, ExpectedLocation: "resource-region", }, - "does not shorten the region value when it is set as a self link in the resource config": { + "shortens the region value when it is set as a self link in the resource config": { ResourceConfig: map[string]interface{}{ "region": "https://www.googleapis.com/compute/v1/projects/my-project/region/resource-region", }, - ExpectedLocation: "https://www.googleapis.com/compute/v1/projects/my-project/region/resource-region", // No shortening takes place + ExpectedLocation: "resource-region", }, "returns the zone value set in the resource config when neither location nor region in the schema": { ResourceConfig: map[string]interface{}{ @@ -280,13 +280,23 @@ func TestGetLocation(t *testing.T) { }, ExpectedLocation: "provider-zone-a", }, - "does not shorten the zone value when it is set as a self link in the provider config": { - // This behaviour makes sense because provider config values don't originate 
from APIs - // Users should always configure the provider with the short names of regions/zones + "returns the region value from the provider config when none of location/region/zone are set in the resource config": { + ProviderConfig: map[string]string{ + "region": "provider-region", + }, + ExpectedLocation: "provider-region", + }, + "shortens the region value when it is set as a self link in the provider config": { + ProviderConfig: map[string]string{ + "region": "https://www.googleapis.com/compute/v1/projects/my-project/region/provider-region", + }, + ExpectedLocation: "provider-region", + }, + "shortens the zone value when it is set as a self link in the provider config": { ProviderConfig: map[string]string{ "zone": "https://www.googleapis.com/compute/v1/projects/my-project/zones/provider-zone-a", }, - ExpectedLocation: "https://www.googleapis.com/compute/v1/projects/my-project/zones/provider-zone-a", + ExpectedLocation: "provider-zone-a", }, // Handling of empty strings "returns the region value set in the resource config when location is an empty string": { @@ -315,13 +325,18 @@ func TestGetLocation(t *testing.T) { }, ExpectedLocation: "provider-zone-a", }, - // Error states - "returns an error when only a region value is set in the the provider config and none of location/region/zone are set in the resource config": { + "returns the region value when only a region value is set in the the provider config and none of location/region/zone are set in the resource config": { + ResourceConfig: map[string]interface{}{ + "location": "", + "region": "", + "zone": "", + }, ProviderConfig: map[string]string{ "region": "provider-region", }, - ExpectError: true, + ExpectedLocation: "provider-region", }, + // Error states "returns an error when none of location/region/zone are set on the resource, and neither region or zone is set on the provider": { ExpectError: true, }, @@ -500,11 +515,11 @@ func TestGetRegion(t *testing.T) { }, ExpectedRegion: "resource-zone", // is truncated }, - "does not shorten region values when derived from a zone self link set in the resource config": { + "shortens region values when derived from a zone self link set in the resource config": { ResourceConfig: map[string]interface{}{ "zone": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a", }, - ExpectedRegion: "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1", // Value is not shortenedfrom URI to name + ExpectedRegion: "us-central1", }, "returns the value of the region field in provider config when region/zone is unset in resource config": { ProviderConfig: map[string]string{ diff --git a/google/transport/config.go b/google/transport/config.go index c132c7c314e..f553c19b64f 100644 --- a/google/transport/config.go +++ b/google/transport/config.go @@ -171,6 +171,7 @@ type Config struct { UserProjectOverride bool RequestReason string RequestTimeout time.Duration + DefaultLabels map[string]string // PollInterval is passed to resource.StateChangeConf in common_operation.go // It controls the interval at which we poll for successful operations PollInterval time.Duration @@ -208,7 +209,6 @@ type Config struct { Cloudfunctions2BasePath string CloudIdentityBasePath string CloudIdsBasePath string - CloudIotBasePath string CloudRunBasePath string CloudRunV2BasePath string CloudSchedulerBasePath string @@ -237,7 +237,6 @@ type Config struct { EssentialContactsBasePath string FilestoreBasePath string FirestoreBasePath string - GameServicesBasePath string GKEBackupBasePath 
string GKEHubBasePath string GKEHub2BasePath string @@ -328,7 +327,6 @@ const CloudFunctionsBasePathKey = "CloudFunctions" const Cloudfunctions2BasePathKey = "Cloudfunctions2" const CloudIdentityBasePathKey = "CloudIdentity" const CloudIdsBasePathKey = "CloudIds" -const CloudIotBasePathKey = "CloudIot" const CloudRunBasePathKey = "CloudRun" const CloudRunV2BasePathKey = "CloudRunV2" const CloudSchedulerBasePathKey = "CloudScheduler" @@ -357,7 +355,6 @@ const EdgenetworkBasePathKey = "Edgenetwork" const EssentialContactsBasePathKey = "EssentialContacts" const FilestoreBasePathKey = "Filestore" const FirestoreBasePathKey = "Firestore" -const GameServicesBasePathKey = "GameServices" const GKEBackupBasePathKey = "GKEBackup" const GKEHubBasePathKey = "GKEHub" const GKEHub2BasePathKey = "GKEHub2" @@ -442,7 +439,6 @@ var DefaultBasePaths = map[string]string{ Cloudfunctions2BasePathKey: "https://cloudfunctions.googleapis.com/v2/", CloudIdentityBasePathKey: "https://cloudidentity.googleapis.com/v1/", CloudIdsBasePathKey: "https://ids.googleapis.com/v1/", - CloudIotBasePathKey: "https://cloudiot.googleapis.com/v1/", CloudRunBasePathKey: "https://{{location}}-run.googleapis.com/", CloudRunV2BasePathKey: "https://run.googleapis.com/v2/", CloudSchedulerBasePathKey: "https://cloudscheduler.googleapis.com/v1/", @@ -471,7 +467,6 @@ var DefaultBasePaths = map[string]string{ EssentialContactsBasePathKey: "https://essentialcontacts.googleapis.com/v1/", FilestoreBasePathKey: "https://file.googleapis.com/v1/", FirestoreBasePathKey: "https://firestore.googleapis.com/v1/", - GameServicesBasePathKey: "https://gameservices.googleapis.com/v1/", GKEBackupBasePathKey: "https://gkebackup.googleapis.com/v1/", GKEHubBasePathKey: "https://gkehub.googleapis.com/v1/", GKEHub2BasePathKey: "https://gkehub.googleapis.com/v1/", @@ -723,11 +718,6 @@ func HandleSDKDefaults(d *schema.ResourceData) error { "GOOGLE_CLOUD_IDS_CUSTOM_ENDPOINT", }, DefaultBasePaths[CloudIdsBasePathKey])) } - if d.Get("cloud_iot_custom_endpoint") == "" { - d.Set("cloud_iot_custom_endpoint", MultiEnvDefault([]string{ - "GOOGLE_CLOUD_IOT_CUSTOM_ENDPOINT", - }, DefaultBasePaths[CloudIotBasePathKey])) - } if d.Get("cloud_run_custom_endpoint") == "" { d.Set("cloud_run_custom_endpoint", MultiEnvDefault([]string{ "GOOGLE_CLOUD_RUN_CUSTOM_ENDPOINT", @@ -868,11 +858,6 @@ func HandleSDKDefaults(d *schema.ResourceData) error { "GOOGLE_FIRESTORE_CUSTOM_ENDPOINT", }, DefaultBasePaths[FirestoreBasePathKey])) } - if d.Get("game_services_custom_endpoint") == "" { - d.Set("game_services_custom_endpoint", MultiEnvDefault([]string{ - "GOOGLE_GAME_SERVICES_CUSTOM_ENDPOINT", - }, DefaultBasePaths[GameServicesBasePathKey])) - } if d.Get("gke_backup_custom_endpoint") == "" { d.Set("gke_backup_custom_endpoint", MultiEnvDefault([]string{ "GOOGLE_GKE_BACKUP_CUSTOM_ENDPOINT", @@ -1954,7 +1939,6 @@ func ConfigureBasePaths(c *Config) { c.Cloudfunctions2BasePath = DefaultBasePaths[Cloudfunctions2BasePathKey] c.CloudIdentityBasePath = DefaultBasePaths[CloudIdentityBasePathKey] c.CloudIdsBasePath = DefaultBasePaths[CloudIdsBasePathKey] - c.CloudIotBasePath = DefaultBasePaths[CloudIotBasePathKey] c.CloudRunBasePath = DefaultBasePaths[CloudRunBasePathKey] c.CloudRunV2BasePath = DefaultBasePaths[CloudRunV2BasePathKey] c.CloudSchedulerBasePath = DefaultBasePaths[CloudSchedulerBasePathKey] @@ -1983,7 +1967,6 @@ func ConfigureBasePaths(c *Config) { c.EssentialContactsBasePath = DefaultBasePaths[EssentialContactsBasePathKey] c.FilestoreBasePath = DefaultBasePaths[FilestoreBasePathKey] 
c.FirestoreBasePath = DefaultBasePaths[FirestoreBasePathKey] - c.GameServicesBasePath = DefaultBasePaths[GameServicesBasePathKey] c.GKEBackupBasePath = DefaultBasePaths[GKEBackupBasePathKey] c.GKEHubBasePath = DefaultBasePaths[GKEHubBasePathKey] c.GKEHub2BasePath = DefaultBasePaths[GKEHub2BasePathKey] diff --git a/google/transport/transport.go b/google/transport/transport.go index 300a756abfc..d1f16928831 100644 --- a/google/transport/transport.go +++ b/google/transport/transport.go @@ -138,6 +138,15 @@ func HandleNotFoundError(err error, d *schema.ResourceData, resource string) err fmt.Sprintf("Error when reading or editing %s: {{err}}", resource), err) } +func HandleDataSourceNotFoundError(err error, d *schema.ResourceData, resource, url string) error { + if IsGoogleApiErrorWithCode(err, 404) { + return fmt.Errorf("%s not found", url) + } + + return errwrap.Wrapf( + fmt.Sprintf("Error when reading or editing %s: {{err}}", resource), err) +} + func IsGoogleApiErrorWithCode(err error, errCode int) bool { gerr, ok := errwrap.GetType(err, &googleapi.Error{}).(*googleapi.Error) return ok && gerr != nil && gerr.Code == errCode diff --git a/website/docs/d/cloudfunctions_function.html.markdown b/website/docs/d/cloudfunctions_function.html.markdown index e3ca05447d0..1b4e2f7dd5b 100644 --- a/website/docs/d/cloudfunctions_function.html.markdown +++ b/website/docs/d/cloudfunctions_function.html.markdown @@ -49,7 +49,7 @@ exported: * `event_trigger` - A source that fires events in response to a condition in another service. Structure is [documented below](#nested_event_trigger). * `https_trigger_url` - If function is triggered by HTTP, trigger URL is set here. * `ingress_settings` - Controls what traffic can reach the function. -* `labels` - A map of labels applied to this function. +* `labels` - All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `service_account_email` - The service account email to be assumed by the cloud function. * `vpc_connector` - The VPC Network Connector that this cloud function can connect to. * `vpc_connector_egress_settings` - The egress settings for the connector, controlling what traffic is diverted through it. diff --git a/website/docs/d/cloudiot_registry_iam_policy.html.markdown b/website/docs/d/cloudiot_registry_iam_policy.html.markdown deleted file mode 100644 index bcf075e10df..00000000000 --- a/website/docs/d/cloudiot_registry_iam_policy.html.markdown +++ /dev/null @@ -1,57 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. 
-# -# ---------------------------------------------------------------------------- -subcategory: "Cloud IoT Core" -description: |- - A datasource to retrieve the IAM policy state for Cloud IoT Core DeviceRegistry ---- - - -# `google_cloudiot_registry_iam_policy` -Retrieves the current IAM policy data for deviceregistry - - - -## example - -```hcl -data "google_cloudiot_registry_iam_policy" "policy" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name -} -``` - -## Argument Reference - -The following arguments are supported: - -* `name` - (Required) Used to find the parent resource to bind the IAM policy to -* `region` - (Optional) The region in which the created registry should reside. -If it is not provided, the provider region is used. - Used to find the parent resource to bind the IAM policy to. If not specified, - the value will be parsed from the identifier of the parent resource. If no region is provided in the parent identifier and no - region is specified, it is taken from the provider configuration. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the project will be parsed from the identifier of the parent resource. If no project is provided in the parent identifier and no project is specified, the provider project is used. - -## Attributes Reference - -The attributes are exported: - -* `etag` - (Computed) The etag of the IAM policy. - -* `policy_data` - (Required only by `google_cloudiot_registry_iam_policy`) The policy data generated by - a `google_iam_policy` data source. diff --git a/website/docs/d/compute_disk.html.markdown b/website/docs/d/compute_disk.html.markdown index 65596b090bf..33ad9dc772a 100644 --- a/website/docs/d/compute_disk.html.markdown +++ b/website/docs/d/compute_disk.html.markdown @@ -82,7 +82,7 @@ In addition to the arguments listed above, the following computed attributes are * `description` - The optional description of this resource. -* `labels` - A map of labels applied to this disk. +* `labels` - All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `size` - Size of the persistent disk, specified in GB. diff --git a/website/docs/d/compute_image.html.markdown b/website/docs/d/compute_image.html.markdown index e9cdc9616ad..883e99fc0bf 100644 --- a/website/docs/d/compute_image.html.markdown +++ b/website/docs/d/compute_image.html.markdown @@ -71,7 +71,7 @@ exported: * `source_disk_id` - The ID value of the disk used to create this image. * `creation_timestamp` - The creation timestamp in RFC3339 text format. * `description` - An optional description of this image. -* `labels` - A map of labels applied to this image. +* `labels` - All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `label_fingerprint` - A fingerprint for the labels being applied to this image. * `licenses` - A list of applicable license URI. * `status` - The status of the image. Possible values are **FAILED**, **PENDING**, or **READY**. 
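The data source docs above all pick up the same behaviour: `labels` now reports everything present on the resource in GCP rather than only what Terraform configured. A minimal sketch of that copy step, using a simplified map-of-maps state purely for illustration (the provider's actual helper is `SetDataSourceLabels` in `google/tpgresource/labels.go` earlier in this diff):

```go
// Sketch only: a data source's "labels" and "terraform_labels" fields are
// populated from "effective_labels", so the data source reports labels set by
// Terraform, other clients, and services alike.
package main

import "fmt"

func setDataSourceLabels(state map[string]map[string]string) {
	effective := state["effective_labels"]
	// Assumed simplification: both user-facing fields become a copy of effective_labels.
	state["labels"] = copyMap(effective)
	state["terraform_labels"] = copyMap(effective)
}

func copyMap(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[k] = v
	}
	return out
}

func main() {
	state := map[string]map[string]string{
		"effective_labels": {"env": "prod", "goog-managed": "true"},
	}
	setDataSourceLabels(state)
	fmt.Println(state["labels"]) // map[env:prod goog-managed:true]
}
```

Because both user-facing fields are derived from `effective_labels`, a data source read never drops server-applied labels.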
diff --git a/website/docs/d/compute_instance.html.markdown b/website/docs/d/compute_instance.html.markdown index d478a3eca5a..1d3e8dc0620 100644 --- a/website/docs/d/compute_instance.html.markdown +++ b/website/docs/d/compute_instance.html.markdown @@ -57,7 +57,7 @@ The following arguments are supported: * `guest_accelerator` - List of the type and count of accelerator cards attached to the instance. Structure is [documented below](#nested_guest_accelerator). -* `labels` - A set of key/value label pairs assigned to the instance. +* `labels` - All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `metadata` - Metadata key/value pairs made available within the instance. diff --git a/website/docs/d/compute_instance_template.html.markdown b/website/docs/d/compute_instance_template.html.markdown index e771c3be87c..3142f0af893 100644 --- a/website/docs/d/compute_instance_template.html.markdown +++ b/website/docs/d/compute_instance_template.html.markdown @@ -78,8 +78,7 @@ The following arguments are supported: * `instance_description` - A brief description to use for instances created from this template. -* `labels` - A set of key/value label pairs to assign to instances - created from this template, +* `labels` - All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `metadata` - Metadata key/value pairs to make available from within instances created from this template. diff --git a/website/docs/d/game_services_game_server_deployment_rollout.html.markdown b/website/docs/d/game_services_game_server_deployment_rollout.html.markdown deleted file mode 100644 index 263c218a36d..00000000000 --- a/website/docs/d/game_services_game_server_deployment_rollout.html.markdown +++ /dev/null @@ -1,68 +0,0 @@ ---- -subcategory: "Game Servers" -description: |- - Get the rollout state. ---- - -# google\_game\_services\_game\_server\_deployment\_rollout - -Use this data source to get the rollout state. - -https://cloud.google.com/game-servers/docs/reference/rest/v1beta/GameServerDeploymentRollout - -## Example Usage - - -```hcl -data "google_game_services_game_server_deployment_rollout" "qa" { - deployment_id = "tf-test-deployment-s8sn12jt2c" -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `deployment_id` - (Required) - The deployment to get the rollout state from. Only 1 rollout must be associated with each deployment. - - -## Attributes Reference - -In addition to the arguments listed above, the following attributes are exported: - -* `default_game_server_config` - - This field points to the game server config that is - applied by default to all realms and clusters. For example, - `projects/my-project/locations/global/gameServerDeployments/my-game/configs/my-config`. - - -* `game_server_config_overrides` - - The game_server_config_overrides contains the per game server config - overrides. The overrides are processed in the order they are listed. As - soon as a match is found for a cluster, the rest of the list is not - processed. Structure is [documented below](#nested_game_server_config_overrides). - -* `project` - The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -The `game_server_config_overrides` block contains: - -* `realms_selector` - - Selection by realms. Structure is [documented below](#nested_realms_selector). 
- -* `config_version` - - Version of the configuration. - -The `realms_selector` block contains: - -* `realms` - - List of realms to match against. - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout` - -* `name` - - The resource id of the game server deployment - eg: `projects/my-project/locations/global/gameServerDeployments/my-deployment/rollout`. diff --git a/website/docs/r/active_directory_domain.html.markdown b/website/docs/r/active_directory_domain.html.markdown index 706e2275fde..9207fe3eeab 100644 --- a/website/docs/r/active_directory_domain.html.markdown +++ b/website/docs/r/active_directory_domain.html.markdown @@ -66,6 +66,8 @@ The following arguments are supported: * `labels` - (Optional) Resource labels that can contain user-provided metadata + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `authorized_networks` - (Optional) @@ -94,6 +96,13 @@ In addition to the arguments listed above, the following computed attributes are The fully-qualified domain name of the exposed domain used by clients to connect to the service. Similar to what would be chosen for an Active Directory set up on an internal network. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/active_directory_peering.html.markdown b/website/docs/r/active_directory_peering.html.markdown index e8874dd3bb8..9b87b6f05de 100644 --- a/website/docs/r/active_directory_peering.html.markdown +++ b/website/docs/r/active_directory_peering.html.markdown @@ -101,6 +101,8 @@ The following arguments are supported: * `labels` - (Optional) Resource labels that can contain user-provided metadata + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `status` - (Optional) @@ -123,6 +125,13 @@ In addition to the arguments listed above, the following computed attributes are * `name` - Unique name of the peering in this scope including projects and location using the form: projects/{projectId}/locations/global/peerings/{peeringId}. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
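The `terraform_labels` attribute documented above is the merge of provider-level default labels and the labels in the resource block. A short sketch of that merge, assuming resource-level values win on key conflicts, which is the order used by `SetLabelsDiff` earlier in this diff:

```go
// Sketch of the merge behind the new terraform_labels attribute: provider
// default labels first, then the resource's own labels on top.
package main

import "fmt"

func mergeTerraformLabels(providerDefaults, resourceLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(providerDefaults)+len(resourceLabels))
	for k, v := range providerDefaults {
		merged[k] = v
	}
	for k, v := range resourceLabels {
		merged[k] = v // resource-level values override provider defaults
	}
	return merged
}

func main() {
	defaults := map[string]string{"team": "platform", "env": "dev"}
	resource := map[string]string{"env": "prod"}
	fmt.Println(mergeTerraformLabels(defaults, resource)) // map[env:prod team:platform]
}
```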
+ ## Timeouts diff --git a/website/docs/r/alloydb_backup.html.markdown b/website/docs/r/alloydb_backup.html.markdown index 2ffb0ba8e21..21c682fb52e 100644 --- a/website/docs/r/alloydb_backup.html.markdown +++ b/website/docs/r/alloydb_backup.html.markdown @@ -48,7 +48,7 @@ resource "google_alloydb_backup" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "alloydb-cluster" location = "us-central1" - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_alloydb_instance" "default" { @@ -64,16 +64,16 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } -data "google_compute_network" "default" { +resource "google_compute_network" "default" { name = "alloydb-network" } ``` @@ -102,7 +102,7 @@ resource "google_alloydb_backup" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "alloydb-cluster" location = "us-central1" - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_alloydb_instance" "default" { @@ -118,16 +118,16 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } -data "google_compute_network" "default" { +resource "google_compute_network" "default" { name = "alloydb-network" } ``` diff --git a/website/docs/r/alloydb_cluster.html.markdown b/website/docs/r/alloydb_cluster.html.markdown index aa718458b13..bf3a450eaf7 100644 --- a/website/docs/r/alloydb_cluster.html.markdown +++ b/website/docs/r/alloydb_cluster.html.markdown @@ -208,6 +208,8 @@ The following arguments are supported: * `labels` - (Optional) User-defined labels for the alloydb cluster. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `encryption_config` - (Optional) @@ -467,6 +469,13 @@ In addition to the arguments listed above, the following computed attributes are Cluster created via DMS migration. Structure is [documented below](#nested_migration_source). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ The `encryption_info` block contains: diff --git a/website/docs/r/alloydb_instance.html.markdown b/website/docs/r/alloydb_instance.html.markdown index 99808fb7c07..e5bb5c3ccdd 100644 --- a/website/docs/r/alloydb_instance.html.markdown +++ b/website/docs/r/alloydb_instance.html.markdown @@ -52,7 +52,7 @@ resource "google_alloydb_instance" "default" { resource "google_alloydb_cluster" "default" { cluster_id = "alloydb-cluster" location = "us-central1" - network = data.google_compute_network.default.id + network = google_compute_network.default.id initial_user { password = "alloydb-cluster" @@ -61,7 +61,7 @@ resource "google_alloydb_cluster" "default" { data "google_project" "project" {} -data "google_compute_network" "default" { +resource "google_compute_network" "default" { name = "alloydb-network" } @@ -70,11 +70,11 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } @@ -106,6 +106,8 @@ The following arguments are supported: * `labels` - (Optional) User-defined labels for the alloydb instance. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `annotations` - (Optional) @@ -206,6 +208,16 @@ In addition to the arguments listed above, the following computed attributes are * `ip_address` - The IP address for the Instance. This is the connection endpoint for an end-user application. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/api_gateway_api.html.markdown b/website/docs/r/api_gateway_api.html.markdown index 6a905acc40a..058e6249833 100644 --- a/website/docs/r/api_gateway_api.html.markdown +++ b/website/docs/r/api_gateway_api.html.markdown @@ -71,6 +71,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -87,6 +90,13 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Creation timestamp in RFC3339 text format. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/api_gateway_api_config.html.markdown b/website/docs/r/api_gateway_api_config.html.markdown index 1a538d3025e..41e86a20cbe 100644 --- a/website/docs/r/api_gateway_api_config.html.markdown +++ b/website/docs/r/api_gateway_api_config.html.markdown @@ -134,6 +134,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `gateway_config` - (Optional) Immutable. Gateway specific configuration. @@ -256,6 +259,13 @@ In addition to the arguments listed above, the following computed attributes are * `service_config_id` - The ID of the associated Service Config (https://cloud.google.com/service-infrastructure/docs/glossary#config). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/api_gateway_gateway.html.markdown b/website/docs/r/api_gateway_gateway.html.markdown index a856b9a7ef3..f2c79febbb2 100644 --- a/website/docs/r/api_gateway_gateway.html.markdown +++ b/website/docs/r/api_gateway_gateway.html.markdown @@ -93,6 +93,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `region` - (Optional) The region of the gateway for the API. @@ -113,6 +116,13 @@ In addition to the arguments listed above, the following computed attributes are * `default_hostname` - The default API Gateway host name of the form {gatewayId}-{hash}.{region_code}.gateway.dev. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/artifact_registry_repository.html.markdown b/website/docs/r/artifact_registry_repository.html.markdown index 4471557ac72..32df96f0e5f 100644 --- a/website/docs/r/artifact_registry_repository.html.markdown +++ b/website/docs/r/artifact_registry_repository.html.markdown @@ -281,6 +281,9 @@ The following arguments are supported: and may only contain lowercase letters, numeric characters, underscores, and dashes. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `kms_key_name` - (Optional) The Cloud KMS resource name of the customer managed encryption key that’s @@ -558,6 +561,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The time when the repository was last updated. 
+* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/assured_workloads_workload.html.markdown b/website/docs/r/assured_workloads_workload.html.markdown index 62c584b7709..7cbaf1cf702 100644 --- a/website/docs/r/assured_workloads_workload.html.markdown +++ b/website/docs/r/assured_workloads_workload.html.markdown @@ -37,10 +37,6 @@ resource "google_assured_workloads_workload" "primary" { rotation_period = "10368000s" } - labels = { - label-one = "value-one" - } - provisioned_resources_parent = "folders/519620126891" resource_settings { @@ -55,6 +51,10 @@ resource "google_assured_workloads_workload" "primary" { resource_id = "ring" resource_type = "KEYRING" } + + labels = { + label-one = "value-one" + } } @@ -95,6 +95,8 @@ The following arguments are supported: * `labels` - (Optional) Optional. Labels applied to the workload. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `provisioned_resources_parent` - (Optional) @@ -135,12 +137,18 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. Immutable. The Workload creation timestamp. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `name` - Output only. The resource name of the workload. * `resources` - Output only. The resources associated with this workload. These resources will be created when creating the workload. If any of the projects already exist, the workload creation will fail. Always read only. +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + ## Timeouts This resource provides the following diff --git a/website/docs/r/beyondcorp_app_connection.html.markdown b/website/docs/r/beyondcorp_app_connection.html.markdown index 6b3c225d7a2..b2d00336d97 100644 --- a/website/docs/r/beyondcorp_app_connection.html.markdown +++ b/website/docs/r/beyondcorp_app_connection.html.markdown @@ -151,6 +151,9 @@ The following arguments are supported: (Optional) Resource labels to represent user provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `type` - (Optional) The type of network connectivity used by the AppConnection. Refer to @@ -196,6 +199,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/{{region}}/appConnections/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
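Resources whose old state mixed user labels with system-applied ones go through `LabelsStateUpgrade` (added earlier in this diff): every key is preserved in `effective_labels`, while keys carrying the system prefix are dropped from the user-facing `labels` map. A standalone sketch, with an illustrative `goog-` prefix:

```go
// Sketch of the labels state upgrade: all keys are kept in effective_labels,
// prefixed system keys are removed from the user-facing labels map.
package main

import (
	"fmt"
	"strings"
)

func upgradeLabelsState(old map[string]string, systemPrefix string) (labels, effective map[string]string) {
	labels = make(map[string]string)
	effective = make(map[string]string)
	for k, v := range old {
		effective[k] = v
		if !strings.HasPrefix(k, systemPrefix) {
			labels[k] = v
		}
	}
	return labels, effective
}

func main() {
	old := map[string]string{"env": "prod", "goog-dataplex-asset": "a"}
	labels, effective := upgradeLabelsState(old, "goog-")
	fmt.Println(labels)    // map[env:prod]
	fmt.Println(effective) // map[env:prod goog-dataplex-asset:a]
}
```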
+ ## Timeouts diff --git a/website/docs/r/beyondcorp_app_connector.html.markdown b/website/docs/r/beyondcorp_app_connector.html.markdown index 99667a30a12..5fbf0193c8b 100644 --- a/website/docs/r/beyondcorp_app_connector.html.markdown +++ b/website/docs/r/beyondcorp_app_connector.html.markdown @@ -129,6 +129,9 @@ The following arguments are supported: (Optional) Resource labels to represent user provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -142,6 +145,13 @@ In addition to the arguments listed above, the following computed attributes are * `state` - Represents the different states of a AppConnector. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/beyondcorp_app_gateway.html.markdown b/website/docs/r/beyondcorp_app_gateway.html.markdown index 3e4975bf23e..b5adc40e1f3 100644 --- a/website/docs/r/beyondcorp_app_gateway.html.markdown +++ b/website/docs/r/beyondcorp_app_gateway.html.markdown @@ -105,6 +105,9 @@ The following arguments are supported: (Optional) Resource labels to represent user provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -125,6 +128,13 @@ In addition to the arguments listed above, the following computed attributes are A list of connections allocated for the Gateway. Structure is [documented below](#nested_allocated_connections). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `allocated_connections` block contains: diff --git a/website/docs/r/bigquery_dataset.html.markdown b/website/docs/r/bigquery_dataset.html.markdown index 9693edc7dfc..32a1aaa985b 100644 --- a/website/docs/r/bigquery_dataset.html.markdown +++ b/website/docs/r/bigquery_dataset.html.markdown @@ -271,7 +271,10 @@ The following arguments are supported: * `labels` - (Optional) The labels associated with this dataset. You can use these to - organize and group your datasets + organize and group your datasets. + + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `location` - (Optional) @@ -465,6 +468,13 @@ In addition to the arguments listed above, the following computed attributes are * `last_modified_time` - The date when this dataset or any of its tables was last modified, in milliseconds since the epoch. 
+ +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/bigquery_job.html.markdown b/website/docs/r/bigquery_job.html.markdown index 1b5396fd475..768b8a66029 100644 --- a/website/docs/r/bigquery_job.html.markdown +++ b/website/docs/r/bigquery_job.html.markdown @@ -1040,6 +1040,9 @@ The following arguments are supported: (Optional) The labels associated with this job. You can use these to organize and group your jobs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `query` - (Optional) Configures a query job. @@ -1081,6 +1084,15 @@ In addition to the arguments listed above, the following computed attributes are (Output) The type of the job. +* `terraform_labels` - + (Output) + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + (Output) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `status` - The status of this job. Examine this value when polling an asynchronous job to see if the job is complete. Structure is [documented below](#nested_status). diff --git a/website/docs/r/bigquery_routine.html.markdown b/website/docs/r/bigquery_routine.html.markdown index 86da7b97b3f..e596e40f874 100644 --- a/website/docs/r/bigquery_routine.html.markdown +++ b/website/docs/r/bigquery_routine.html.markdown @@ -125,6 +125,11 @@ The following arguments are supported: (Required) The ID of the the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters. +* `routine_type` - + (Required) + The type of routine. + Possible values are: `SCALAR_FUNCTION`, `PROCEDURE`, `TABLE_VALUED_FUNCTION`. + * `definition_body` - (Required) The body of the routine. For functions, this is the expression in the AS clause. @@ -134,11 +139,6 @@ The following arguments are supported: - - - -* `routine_type` - - (Optional) - The type of routine. - Possible values are: `SCALAR_FUNCTION`, `PROCEDURE`, `TABLE_VALUED_FUNCTION`. - * `language` - (Optional) The language of the routine. diff --git a/website/docs/r/bigquery_table.html.markdown b/website/docs/r/bigquery_table.html.markdown index 2fb0cd7fb3a..929a8e8f2c7 100644 --- a/website/docs/r/bigquery_table.html.markdown +++ b/website/docs/r/bigquery_table.html.markdown @@ -114,6 +114,15 @@ The following arguments are supported: * `labels` - (Optional) A mapping of labels to assign to the resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `schema` - (Optional) A JSON schema for the table. 
~>**NOTE:** Because this field expects a JSON string, any changes to the diff --git a/website/docs/r/bigtable_instance.html.markdown b/website/docs/r/bigtable_instance.html.markdown index a6f1d14cb34..b0176072208 100644 --- a/website/docs/r/bigtable_instance.html.markdown +++ b/website/docs/r/bigtable_instance.html.markdown @@ -102,6 +102,14 @@ in Terraform state, a `terraform destroy` or `terraform apply` that would delete * `labels` - (Optional) A set of key/value label pairs to assign to the resource. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. ----- diff --git a/website/docs/r/certificate_manager_certificate.html.markdown b/website/docs/r/certificate_manager_certificate.html.markdown index 518872c14e0..a33d1401d3c 100644 --- a/website/docs/r/certificate_manager_certificate.html.markdown +++ b/website/docs/r/certificate_manager_certificate.html.markdown @@ -40,6 +40,9 @@ resource "google_certificate_manager_certificate" "default" { name = "dns-cert" description = "The default cert" scope = "EDGE_CACHE" + labels = { + env = "test" + } managed { domains = [ google_certificate_manager_dns_authorization.instance.domain, @@ -209,6 +212,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the Certificate resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `scope` - (Optional) @@ -340,6 +345,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/certificates/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/certificate_manager_certificate_issuance_config.html.markdown b/website/docs/r/certificate_manager_certificate_issuance_config.html.markdown index 73ad8dfdec8..d342563ac3e 100644 --- a/website/docs/r/certificate_manager_certificate_issuance_config.html.markdown +++ b/website/docs/r/certificate_manager_certificate_issuance_config.html.markdown @@ -160,6 +160,9 @@ The following arguments are supported: 'Set of label tags associated with the CertificateIssuanceConfig resource. An object containing a list of "key": value pairs. Example: { "name": "wrench", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `location` - (Optional) The Certificate Manager location. If not specified, "global" is used. 
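Several location helpers in this diff (`GetLocation`, `GetRegionFromSchema`) now shorten self links before using them, including when a region is derived from a zone. A sketch of that shortening with illustrative helper names; the provider's real code uses `GetResourceNameFromSelfLink` and `GetRegionFromZone`:

```go
// Sketch: take the last path segment of a self link, then trim the zone
// suffix (e.g. "-a") to derive the region. Helper names are illustrative.
package main

import (
	"fmt"
	"strings"
)

func nameFromSelfLink(v string) string {
	parts := strings.Split(v, "/")
	return parts[len(parts)-1]
}

func regionFromZone(zone string) string {
	if i := strings.LastIndex(zone, "-"); i > 0 {
		return zone[:i]
	}
	return zone
}

func main() {
	link := "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a"
	zone := nameFromSelfLink(link)
	fmt.Println(zone)                 // us-central1-a
	fmt.Println(regionFromZone(zone)) // us-central1
}
```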
@@ -184,6 +187,13 @@ In addition to the arguments listed above, the following computed attributes are accurate to nanoseconds with up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/certificate_manager_certificate_map.html.markdown b/website/docs/r/certificate_manager_certificate_map.html.markdown index 6eddf8570cf..1fc87894f1d 100644 --- a/website/docs/r/certificate_manager_certificate_map.html.markdown +++ b/website/docs/r/certificate_manager_certificate_map.html.markdown @@ -66,6 +66,9 @@ The following arguments are supported: (Optional) Set of labels associated with a Certificate Map resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -90,6 +93,13 @@ In addition to the arguments listed above, the following computed attributes are A list of target proxies that use this Certificate Map Structure is [documented below](#nested_gclb_targets). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `gclb_targets` block contains: diff --git a/website/docs/r/certificate_manager_certificate_map_entry.html.markdown b/website/docs/r/certificate_manager_certificate_map_entry.html.markdown index 71632a447eb..636191035f3 100644 --- a/website/docs/r/certificate_manager_certificate_map_entry.html.markdown +++ b/website/docs/r/certificate_manager_certificate_map_entry.html.markdown @@ -120,6 +120,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `hostname` - (Optional) A Hostname (FQDN, e.g. example.com) or a wildcard hostname expression (*.example.com) @@ -153,6 +156,13 @@ In addition to the arguments listed above, the following computed attributes are * `state` - A serving state of this Certificate Map Entry. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
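The updated `GetLocation` tests earlier in this diff pin down the new fallback order: resource `location`, then `region`, then `zone`, then the provider-level region, then the provider-level zone, with an error only when none are set. A simplified sketch of that precedence:

```go
// Sketch of the location fallback order after this change; self links are
// shortened at every step.
package main

import (
	"errors"
	"fmt"
	"strings"
)

func shorten(v string) string {
	parts := strings.Split(v, "/")
	return parts[len(parts)-1]
}

func resolveLocation(location, region, zone, providerRegion, providerZone string) (string, error) {
	for _, candidate := range []string{location, region, zone, providerRegion, providerZone} {
		if candidate != "" {
			return shorten(candidate), nil
		}
	}
	return "", errors.New("unable to determine location: region/zone not configured in resource/provider config")
}

func main() {
	loc, _ := resolveLocation("", "", "", "provider-region", "provider-zone-a")
	fmt.Println(loc) // provider-region
}
```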
+ ## Timeouts diff --git a/website/docs/r/certificate_manager_dns_authorization.html.markdown b/website/docs/r/certificate_manager_dns_authorization.html.markdown index 6e209fe4f33..10a971c0b50 100644 --- a/website/docs/r/certificate_manager_dns_authorization.html.markdown +++ b/website/docs/r/certificate_manager_dns_authorization.html.markdown @@ -79,6 +79,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the DNS Authorization resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -96,6 +98,13 @@ In addition to the arguments listed above, the following computed attributes are certificate. Structure is [documented below](#nested_dns_resource_record). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `dns_resource_record` block contains: diff --git a/website/docs/r/certificate_manager_trust_config.html.markdown b/website/docs/r/certificate_manager_trust_config.html.markdown index 5e4306c2c82..d684061912c 100644 --- a/website/docs/r/certificate_manager_trust_config.html.markdown +++ b/website/docs/r/certificate_manager_trust_config.html.markdown @@ -81,6 +81,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the trust config. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -142,6 +144,13 @@ In addition to the arguments listed above, the following computed attributes are A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/cloud_run_domain_mapping.html.markdown b/website/docs/r/cloud_run_domain_mapping.html.markdown index 9c575000ff3..d8be9d2c373 100644 --- a/website/docs/r/cloud_run_domain_mapping.html.markdown +++ b/website/docs/r/cloud_run_domain_mapping.html.markdown @@ -78,11 +78,6 @@ The following arguments are supported: The spec for this DomainMapping. Structure is [documented below](#nested_spec). -* `metadata` - - (Required) - Metadata associated with this DomainMapping. - Structure is [documented below](#nested_metadata). - * `location` - (Required) The location of the cloud run instance. eg us-central1 @@ -108,6 +103,18 @@ The following arguments are supported: Default value is `AUTOMATIC`. Possible values are: `NONE`, `AUTOMATIC`. +- - - + + +* `metadata` - + (Optional) + Metadata associated with this DomainMapping. + Structure is [documented below](#nested_metadata). 
+ +* `project` - (Optional) The ID of the project in which the resource belongs. + If it is not provided, the provider project is used. + + The `metadata` block supports: * `labels` - @@ -116,6 +123,8 @@ The following arguments are supported: (scope and select) objects. May match selectors of replication controllers and routes. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `generation` - (Output) @@ -155,12 +164,18 @@ The following arguments are supported: If terraform plan shows a diff where a server-side annotation is added, you can add it to your config or apply the lifecycle.ignore_changes rule to the metadata.0.annotations field. -- - - - +* `terraform_labels` - + (Output) + The combination of labels configured directly on the resource + and default labels configured on the provider. -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. +* `effective_labels` - + (Output) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. +* `effective_annotations` - + (Output) + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. ## Attributes Reference diff --git a/website/docs/r/cloud_run_service.html.markdown b/website/docs/r/cloud_run_service.html.markdown index 47cc97e18b7..cc16e788568 100644 --- a/website/docs/r/cloud_run_service.html.markdown +++ b/website/docs/r/cloud_run_service.html.markdown @@ -920,6 +920,8 @@ this field is set to false, the revision name will still autogenerate.) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and routes. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `generation` - (Output) @@ -968,6 +970,19 @@ this field is set to false, the revision name will still autogenerate.) - `run.googleapis.com/launch-stage` sets the [launch stage](https://cloud.google.com/run/docs/troubleshooting#launch-stage-validation) when a preview feature is used. For example, `"run.googleapis.com/launch-stage": "BETA"` +* `terraform_labels` - + (Output) + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + (Output) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + (Output) + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. 
+ ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: diff --git a/website/docs/r/cloud_run_v2_job.html.markdown b/website/docs/r/cloud_run_v2_job.html.markdown index 95e4a5e77c3..53b4535b5a5 100644 --- a/website/docs/r/cloud_run_v2_job.html.markdown +++ b/website/docs/r/cloud_run_v2_job.html.markdown @@ -472,22 +472,6 @@ The following arguments are supported: (Optional) Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. -* `liveness_probe` - - (Optional, Deprecated) - Periodic probe of container liveness. Container will be restarted if the probe fails. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - This field is not supported in Cloud Run Job currently. - Structure is [documented below](#nested_liveness_probe). - - ~> **Warning:** `liveness_probe` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API. - -* `startup_probe` - - (Optional, Deprecated) - Startup probe of application within the container. All other probes are disabled if a startup probe is provided, until it succeeds. Container will not be added to service endpoints if the probe fails. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - This field is not supported in Cloud Run Job currently. - Structure is [documented below](#nested_startup_probe). - - ~> **Warning:** `startup_probe` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API. - The `env` block supports: @@ -549,120 +533,6 @@ The following arguments are supported: (Required) Path within the container at which the volume should be mounted. Must not contain ':'. For Cloud SQL volumes, it can be left empty, or must otherwise be /cloudsql. All instances defined in the Volume will be available as /cloudsql/[instance]. For more information on Cloud SQL volumes, visit https://cloud.google.com/sql/docs/mysql/connect-run -The `liveness_probe` block supports: - -* `initial_delay_seconds` - - (Optional) - Number of seconds after the container has started before the probe is initiated. Defaults to 0 seconds. Minimum value is 0. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - -* `timeout_seconds` - - (Optional) - Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Maximum value is 3600. Must be smaller than periodSeconds. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - -* `period_seconds` - - (Optional) - How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. Must be greater or equal than timeoutSeconds - -* `failure_threshold` - - (Optional) - Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. - -* `http_get` - - (Optional) - HTTPGet specifies the http request to perform. Exactly one of HTTPGet or TCPSocket must be specified. - Structure is [documented below](#nested_http_get). - -* `tcp_socket` - - (Optional) - TCPSocket specifies an action involving a TCP port. Exactly one of HTTPGet or TCPSocket must be specified. 
- Structure is [documented below](#nested_tcp_socket). - - -The `http_get` block supports: - -* `path` - - (Optional) - Path to access on the HTTP server. Defaults to '/'. - -* `http_headers` - - (Optional) - Custom headers to set in the request. HTTP allows repeated headers. - Structure is [documented below](#nested_http_headers). - - -The `http_headers` block supports: - -* `name` - - (Required) - The header field name - -* `value` - - (Optional) - The header field value - -The `tcp_socket` block supports: - -* `port` - - (Optional) - Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080. - -The `startup_probe` block supports: - -* `initial_delay_seconds` - - (Optional) - Number of seconds after the container has started before the probe is initiated. Defaults to 0 seconds. Minimum value is 0. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - -* `timeout_seconds` - - (Optional) - Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. Maximum value is 3600. Must be smaller than periodSeconds. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes - -* `period_seconds` - - (Optional) - How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. Maximum value for liveness probe is 3600. Maximum value for startup probe is 240. Must be greater or equal than timeoutSeconds - -* `failure_threshold` - - (Optional) - Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. - -* `http_get` - - (Optional) - HTTPGet specifies the http request to perform. Exactly one of HTTPGet or TCPSocket must be specified. - Structure is [documented below](#nested_http_get). - -* `tcp_socket` - - (Optional) - TCPSocket specifies an action involving a TCP port. Exactly one of HTTPGet or TCPSocket must be specified. - Structure is [documented below](#nested_tcp_socket). - - -The `http_get` block supports: - -* `path` - - (Optional) - Path to access on the HTTP server. Defaults to '/'. - -* `http_headers` - - (Optional) - Custom headers to set in the request. HTTP allows repeated headers. - Structure is [documented below](#nested_http_headers). - - -The `http_headers` block supports: - -* `name` - - (Required) - The header field name - -* `value` - - (Optional) - The header field value - -The `tcp_socket` block supports: - -* `port` - - (Optional) - Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080. - The `volumes` block supports: * `name` - @@ -777,6 +647,8 @@ The following arguments are supported: environment, state, etc. For more information, visit https://cloud.google.com/resource-manager/docs/creating-managing-labels or https://cloud.google.com/run/docs/configuring/labels. Cloud Run API v2 does not support labels with `run.googleapis.com`, `cloud.googleapis.com`, `serving.knative.dev`, or `autoscaling.knative.dev` namespaces, and they will be rejected. All system labels in v1 now have a corresponding field in v2 Job. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. 
* `annotations` - (Optional) @@ -880,6 +752,16 @@ In addition to the arguments listed above, the following computed attributes are * `etag` - A system-generated fingerprint for this version of the resource. May be used to detect modification conflict during updates. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `terminal_condition` block contains: diff --git a/website/docs/r/cloud_run_v2_service.html.markdown b/website/docs/r/cloud_run_v2_service.html.markdown index 1379a5ffd11..8ab1c3029e5 100644 --- a/website/docs/r/cloud_run_v2_service.html.markdown +++ b/website/docs/r/cloud_run_v2_service.html.markdown @@ -629,13 +629,6 @@ The following arguments are supported: HTTPGet specifies the http request to perform. Structure is [documented below](#nested_http_get). -* `tcp_socket` - - (Optional, Deprecated) - TCPSocket specifies an action involving a TCP port. This field is not supported in liveness probe currently. - Structure is [documented below](#nested_tcp_socket). - - ~> **Warning:** `tcp_socket` is deprecated and will be removed in a future major release. This field is not supported by the Cloud Run API. - * `grpc` - (Optional) GRPC specifies an action involving a GRPC port. @@ -669,12 +662,6 @@ The following arguments are supported: (Optional) The header field value -The `tcp_socket` block supports: - -* `port` - - (Optional) - Port number to access on the container. Must be in the range 1 to 65535. If not specified, defaults to 8080. - The `grpc` block supports: * `port` - @@ -852,6 +839,8 @@ The following arguments are supported: environment, state, etc. For more information, visit https://cloud.google.com/resource-manager/docs/creating-managing-labels or https://cloud.google.com/run/docs/configuring/labels. Cloud Run API v2 does not support labels with `run.googleapis.com`, `cloud.googleapis.com`, `serving.knative.dev`, or `autoscaling.knative.dev` namespaces, and they will be rejected. All system labels in v1 now have a corresponding field in v2 Service. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `annotations` - (Optional) @@ -995,6 +984,16 @@ In addition to the arguments listed above, the following computed attributes are * `etag` - A system-generated fingerprint for this version of the resource. May be used to detect modification conflict during updates. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. 
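With the deprecated probe fields removed, a minimal `google_cloud_run_v2_job` can be written as in the sketch below (the job name, location, and image are placeholder values); the `liveness_probe` and `startup_probe` blocks are simply omitted, since they are not supported for Cloud Run jobs:

```hcl
resource "google_cloud_run_v2_job" "example" {
  name     = "example-job"
  location = "us-central1"

  template {
    template {
      containers {
        # No liveness_probe or startup_probe blocks here: the deprecated
        # fields have been removed from google_cloud_run_v2_job.
        image = "us-docker.pkg.dev/cloudrun/container/job"
      }
    }
  }
}
```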
+ The `terminal_condition` block contains: diff --git a/website/docs/r/cloudbuild_bitbucket_server_config.html.markdown b/website/docs/r/cloudbuild_bitbucket_server_config.html.markdown index 45b1e9b685f..94cfc1fed80 100644 --- a/website/docs/r/cloudbuild_bitbucket_server_config.html.markdown +++ b/website/docs/r/cloudbuild_bitbucket_server_config.html.markdown @@ -91,8 +91,8 @@ resource "google_project_service" "servicenetworking" { service = "servicenetworking.googleapis.com" disable_on_destroy = false } - -data "google_compute_network" "vpc_network" { + +resource "google_compute_network" "vpc_network" { name = "vpc-network" depends_on = [google_project_service.servicenetworking] } @@ -102,11 +102,11 @@ resource "google_compute_global_address" "private_ip_alloc" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.vpc_network.id + network = google_compute_network.vpc_network.id } resource "google_service_networking_connection" "default" { - network = data.google_compute_network.vpc_network.id + network = google_compute_network.vpc_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] depends_on = [google_project_service.servicenetworking] @@ -123,7 +123,7 @@ resource "google_cloudbuild_bitbucket_server_config" "bbs-config-with-peered-net } username = "test" api_key = "" - peered_network = replace(data.google_compute_network.vpc_network.id, data.google_project.project.name, data.google_project.project.number) + peered_network = replace(google_compute_network.vpc_network.id, data.google_project.project.name, data.google_project.project.number) ssl_ca = "-----BEGIN CERTIFICATE-----\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n-----END CERTIFICATE-----\n" depends_on = [google_service_networking_connection.default] } diff --git a/website/docs/r/cloudbuildv2_connection.html.markdown b/website/docs/r/cloudbuildv2_connection.html.markdown index 71830a48e65..f4b59b61808 100644 --- a/website/docs/r/cloudbuildv2_connection.html.markdown +++ b/website/docs/r/cloudbuildv2_connection.html.markdown @@ -168,6 +168,8 @@ The `read_authorizer_credential` block supports: * `annotations` - (Optional) Allows clients to store small amounts of arbitrary data. + +**Note**: This field is non-authoritative, and will only manage the annotations present in your configuration. Please refer to the field `effective_annotations` for all of the annotations present on the resource. * `disabled` - (Optional) @@ -294,6 +296,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. Server assigned timestamp for when the connection was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + * `etag` - This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. diff --git a/website/docs/r/cloudbuildv2_repository.html.markdown b/website/docs/r/cloudbuildv2_repository.html.markdown index d43f573bde9..b08504b7d6c 100644 --- a/website/docs/r/cloudbuildv2_repository.html.markdown +++ b/website/docs/r/cloudbuildv2_repository.html.markdown @@ -168,6 +168,8 @@ The following arguments are supported: * `annotations` - (Optional) Allows clients to store small amounts of arbitrary data.
+ +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `location` - (Optional) @@ -188,6 +190,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. Server assigned timestamp for when the connection was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + * `etag` - This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. diff --git a/website/docs/r/clouddeploy_delivery_pipeline.html.markdown b/website/docs/r/clouddeploy_delivery_pipeline.html.markdown index f9a5d7f037f..51c92b5446f 100644 --- a/website/docs/r/clouddeploy_delivery_pipeline.html.markdown +++ b/website/docs/r/clouddeploy_delivery_pipeline.html.markdown @@ -26,24 +26,10 @@ The Cloud Deploy `DeliveryPipeline` resource Creates a basic Cloud Deploy delivery pipeline ```hcl resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "us-west1" - name = "pipeline" - - annotations = { - my_first_annotation = "example-annotation-1" - - my_second_annotation = "example-annotation-2" - } - + location = "us-west1" + name = "pipeline" description = "basic description" - - labels = { - my_first_label = "example-label-1" - - my_second_label = "example-label-2" - } - - project = "my-project-name" + project = "my-project-name" serial_pipeline { stages { @@ -64,16 +50,6 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } - provider = google-beta -} - -``` -## Example Usage - canary_service_networking_delivery_pipeline -Creates a basic Cloud Deploy delivery pipeline -```hcl -resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "us-west1" - name = "pipeline" annotations = { my_first_annotation = "example-annotation-1" @@ -81,15 +57,23 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { my_second_annotation = "example-annotation-2" } - description = "basic description" - labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } + provider = google-beta +} - project = "my-project-name" +``` +## Example Usage - canary_service_networking_delivery_pipeline +Creates a basic Cloud Deploy delivery pipeline +```hcl +resource "google_clouddeploy_delivery_pipeline" "primary" { + location = "us-west1" + name = "pipeline" + description = "basic description" + project = "my-project-name" serial_pipeline { stages { @@ -110,16 +94,6 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } - provider = google-beta -} - -``` -## Example Usage - canaryrun_delivery_pipeline -Creates a basic Cloud Deploy delivery pipeline -```hcl -resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "us-west1" - name = "pipeline" annotations = { my_first_annotation = "example-annotation-1" @@ -127,15 +101,23 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { my_second_annotation = "example-annotation-2" } - description = "basic description" - labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } + provider = google-beta +} - project = "my-project-name" +``` +## Example Usage - 
canaryrun_delivery_pipeline +Creates a basic Cloud Deploy delivery pipeline +```hcl +resource "google_clouddeploy_delivery_pipeline" "primary" { + location = "us-west1" + name = "pipeline" + description = "basic description" + project = "my-project-name" serial_pipeline { stages { @@ -156,16 +138,6 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } - provider = google-beta -} - -``` -## Example Usage - delivery_pipeline -Creates a basic Cloud Deploy delivery pipeline -```hcl -resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "us-west1" - name = "pipeline" annotations = { my_first_annotation = "example-annotation-1" @@ -173,15 +145,23 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { my_second_annotation = "example-annotation-2" } - description = "basic description" - labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } + provider = google-beta +} - project = "my-project-name" +``` +## Example Usage - delivery_pipeline +Creates a basic Cloud Deploy delivery pipeline +```hcl +resource "google_clouddeploy_delivery_pipeline" "primary" { + location = "us-west1" + name = "pipeline" + description = "basic description" + project = "my-project-name" serial_pipeline { stages { @@ -202,16 +182,6 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } -} - - -``` -## Example Usage - verify_delivery_pipeline -tests creating and updating a delivery pipeline with deployment verification strategy -```hcl -resource "google_clouddeploy_delivery_pipeline" "primary" { - location = "us-west1" - name = "pipeline" annotations = { my_first_annotation = "example-annotation-1" @@ -219,15 +189,23 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { my_second_annotation = "example-annotation-2" } - description = "basic description" - labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } +} - project = "my-project-name" + +``` +## Example Usage - verify_delivery_pipeline +tests creating and updating a delivery pipeline with deployment verification strategy +```hcl +resource "google_clouddeploy_delivery_pipeline" "primary" { + location = "us-west1" + name = "pipeline" + description = "basic description" + project = "my-project-name" serial_pipeline { stages { @@ -248,7 +226,19 @@ resource "google_clouddeploy_delivery_pipeline" "primary" { target_id = "example-target-two" } } - provider = google-beta + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + + my_second_label = "example-label-2" + } + provider = google-beta } ``` @@ -298,6 +288,8 @@ The `phase_configs` block supports: * `annotations` - (Optional) User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -306,6 +298,8 @@ The `phase_configs` block supports: * `labels` - (Optional) Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. 
Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -523,9 +517,18 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. Time at which the pipeline was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `etag` - This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. Unique identifier of the `DeliveryPipeline`. diff --git a/website/docs/r/clouddeploy_target.html.markdown b/website/docs/r/clouddeploy_target.html.markdown index 680870d52cd..d0ccd07645e 100644 --- a/website/docs/r/clouddeploy_target.html.markdown +++ b/website/docs/r/clouddeploy_target.html.markdown @@ -26,15 +26,8 @@ The Cloud Deploy `Target` resource tests creating and updating a multi-target ```hcl resource "google_clouddeploy_target" "primary" { - location = "us-west1" - name = "target" - - annotations = { - my_first_annotation = "example-annotation-1" - - my_second_annotation = "example-annotation-2" - } - + location = "us-west1" + name = "target" deploy_parameters = {} description = "multi-target description" @@ -43,28 +36,12 @@ resource "google_clouddeploy_target" "primary" { execution_timeout = "3600s" } - labels = { - my_first_label = "example-label-1" - - my_second_label = "example-label-2" - } - multi_target { target_ids = ["1", "2"] } project = "my-project-name" require_approval = false - provider = google-beta -} - -``` -## Example Usage - run_target -tests creating and updating a cloud run target -```hcl -resource "google_clouddeploy_target" "primary" { - location = "us-west1" - name = "target" annotations = { my_first_annotation = "example-annotation-1" @@ -72,6 +49,21 @@ resource "google_clouddeploy_target" "primary" { my_second_annotation = "example-annotation-2" } + labels = { + my_first_label = "example-label-1" + + my_second_label = "example-label-2" + } + provider = google-beta +} + +``` +## Example Usage - run_target +tests creating and updating a cloud run target +```hcl +resource "google_clouddeploy_target" "primary" { + location = "us-west1" + name = "target" deploy_parameters = {} description = "basic description" @@ -80,19 +72,25 @@ resource "google_clouddeploy_target" "primary" { execution_timeout = "3600s" } - labels = { - my_first_label = "example-label-1" - - my_second_label = "example-label-2" - } - project = "my-project-name" require_approval = false 
run { location = "projects/my-project-name/locations/us-west1" } - provider = google-beta + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + + labels = { + my_first_label = "example-label-1" + + my_second_label = "example-label-2" + } + provider = google-beta } ``` @@ -103,12 +101,6 @@ resource "google_clouddeploy_target" "primary" { location = "us-west1" name = "target" - annotations = { - my_first_annotation = "example-annotation-1" - - my_second_annotation = "example-annotation-2" - } - deploy_parameters = { deployParameterKey = "deployParameterValue" } @@ -119,14 +111,20 @@ resource "google_clouddeploy_target" "primary" { cluster = "projects/my-project-name/locations/us-west1/clusters/example-cluster-name" } + project = "my-project-name" + require_approval = false + + annotations = { + my_first_annotation = "example-annotation-1" + + my_second_annotation = "example-annotation-2" + } + labels = { my_first_label = "example-label-1" my_second_label = "example-label-2" } - - project = "my-project-name" - require_approval = false } @@ -151,6 +149,8 @@ The following arguments are supported: * `annotations` - (Optional) Optional. User annotations. These attributes can only be set and used by the user, and not by Google Cloud Deploy. See https://google.aip.dev/128#annotations for more details such as format and size limitations. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `anthos_cluster` - (Optional) @@ -175,6 +175,8 @@ The following arguments are supported: * `labels` - (Optional) Optional. Labels are attributes that can be set and used by both the user and by Google Cloud Deploy. Labels must meet the following constraints: * Keys and values can contain only lowercase letters, numeric characters, underscores, and dashes. * All characters must use UTF-8 encoding, and international characters are allowed. * Keys must start with a lowercase letter or international character. * Each resource is limited to a maximum of 64 labels. Both keys and values are additionally constrained to be <= 128 bytes. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `multi_target` - (Optional) @@ -253,12 +255,21 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. Time at which the `Target` was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `etag` - Optional. This checksum is computed by the server based on the value of other fields, and may be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. * `target_id` - Output only. Resource id of the `Target`. +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. Unique identifier of the `Target`. 
diff --git a/website/docs/r/cloudfunctions2_function.html.markdown b/website/docs/r/cloudfunctions2_function.html.markdown index 1eb81e9c9e7..601d67083a5 100644 --- a/website/docs/r/cloudfunctions2_function.html.markdown +++ b/website/docs/r/cloudfunctions2_function.html.markdown @@ -769,6 +769,10 @@ The following arguments are supported: A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*`. +* `location` - + (Required) + The location of this cloud function. + - - - @@ -798,15 +802,14 @@ The following arguments are supported: (Optional) A set of key/value label pairs associated with this Cloud Function. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `kms_key_name` - (Optional) Resource name of a KMS crypto key (managed by the user) used to encrypt/decrypt function resources. It must match the pattern projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}. -* `location` - - (Optional) - The location of this cloud function. - * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -1116,6 +1119,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The last update timestamp of a Cloud Function. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/cloudfunctions_function.html.markdown b/website/docs/r/cloudfunctions_function.html.markdown index 40d8a29bd42..c73d6561aab 100644 --- a/website/docs/r/cloudfunctions_function.html.markdown +++ b/website/docs/r/cloudfunctions_function.html.markdown @@ -134,6 +134,15 @@ Eg. `"nodejs16"`, `"python39"`, `"dotnet3"`, `"go116"`, `"java11"`, `"ruby30"`, * `labels` - (Optional) A set of key/value label pairs to assign to the function. Label keys must follow the requirements at https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements. +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. +Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `service_account_email` - (Optional) If provided, the self-provided service account to run the function with. * `environment_variables` - (Optional) A set of key/value environment variable pairs to assign to the function. 
diff --git a/website/docs/r/cloudiot_device.html.markdown b/website/docs/r/cloudiot_device.html.markdown deleted file mode 100644 index 7629c56dc56..00000000000 --- a/website/docs/r/cloudiot_device.html.markdown +++ /dev/null @@ -1,263 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Cloud IoT Core" -description: |- - A Google Cloud IoT Core device. ---- - -# google\_cloudiot\_device -~> **Warning:** `google_cloudiot_device` is deprecated in the API. This resource will be removed in the next major release of the provider. - -A Google Cloud IoT Core device. - - -To get more information about Device, see: - -* [API documentation](https://cloud.google.com/iot/docs/reference/cloudiot/rest/) -* How-to Guides - * [Official Documentation](https://cloud.google.com/iot/docs/) - -## Example Usage - Cloudiot Device Basic - - -```hcl -resource "google_cloudiot_registry" "registry" { - name = "cloudiot-device-registry" -} - -resource "google_cloudiot_device" "test-device" { - name = "cloudiot-device" - registry = google_cloudiot_registry.registry.id -} -``` -## Example Usage - Cloudiot Device Full - - -```hcl -resource "google_cloudiot_registry" "registry" { - name = "cloudiot-device-registry" -} - -resource "google_cloudiot_device" "test-device" { - name = "cloudiot-device" - registry = google_cloudiot_registry.registry.id - - credentials { - public_key { - format = "RSA_PEM" - key = file("test-fixtures/rsa_public.pem") - } - } - - blocked = false - - log_level = "INFO" - - metadata = { - test_key_1 = "test_value_1" - } - - gateway_config { - gateway_type = "NON_GATEWAY" - } -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `name` - - (Required) - A unique name for the resource. - -* `registry` - - (Required) - The name of the device registry where this device should be created. - - -- - - - - -* `credentials` - - (Optional) - The credentials used to authenticate this device. - Structure is [documented below](#nested_credentials). - -* `blocked` - - (Optional) - If a device is blocked, connections or requests from this device will fail. - -* `log_level` - - (Optional) - The logging verbosity for device activity. - Possible values are: `NONE`, `ERROR`, `INFO`, `DEBUG`. - -* `metadata` - - (Optional) - The metadata key-value pairs assigned to the device. - -* `gateway_config` - - (Optional) - Gateway-related configuration and state. - Structure is [documented below](#nested_gateway_config). - - -The `credentials` block supports: - -* `expiration_time` - - (Optional) - The time at which this credential becomes invalid. - -* `public_key` - - (Required) - A public key used to verify the signature of JSON Web Tokens (JWTs). - Structure is [documented below](#nested_public_key). - - -The `public_key` block supports: - -* `format` - - (Required) - The format of the key. - Possible values are: `RSA_PEM`, `RSA_X509_PEM`, `ES256_PEM`, `ES256_X509_PEM`. - -* `key` - - (Required) - The key data. 
- -The `gateway_config` block supports: - -* `gateway_type` - - (Optional) - Indicates whether the device is a gateway. - Default value is `NON_GATEWAY`. - Possible values are: `GATEWAY`, `NON_GATEWAY`. - -* `gateway_auth_method` - - (Optional) - Indicates whether the device is a gateway. - Possible values are: `ASSOCIATION_ONLY`, `DEVICE_AUTH_TOKEN_ONLY`, `ASSOCIATION_AND_DEVICE_AUTH_TOKEN`. - -* `last_accessed_gateway_id` - - (Output) - The ID of the gateway the device accessed most recently. - -* `last_accessed_gateway_time` - - (Output) - The most recent time at which the device accessed the gateway specified in last_accessed_gateway. - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `{{registry}}/devices/{{name}}` - -* `num_id` - - A server-defined unique numeric ID for the device. - This is a more compact way to identify devices, and it is globally unique. - -* `last_heartbeat_time` - - The last time an MQTT PINGREQ was received. - -* `last_event_time` - - The last time a telemetry event was received. - -* `last_state_time` - - The last time a state event was received. - -* `last_config_ack_time` - - The last time a cloud-to-device config version acknowledgment was received from the device. - -* `last_config_send_time` - - The last time a cloud-to-device config version was sent to the device. - -* `last_error_time` - - The time the most recent error occurred, such as a failure to publish to Cloud Pub/Sub. - -* `last_error_status` - - The error message of the most recent error, such as a failure to publish to Cloud Pub/Sub. - Structure is [documented below](#nested_last_error_status). - -* `config` - - The most recent device configuration, which is eventually sent from Cloud IoT Core to the device. - Structure is [documented below](#nested_config). - -* `state` - - The state most recently received from the device. - Structure is [documented below](#nested_state). - - -The `last_error_status` block contains: - -* `number` - - (Optional) - The status code, which should be an enum value of google.rpc.Code. - -* `message` - - (Optional) - A developer-facing error message, which should be in English. - -* `details` - - (Optional) - A list of messages that carry the error details. - -The `config` block contains: - -* `version` - - (Output) - The version of this update. - -* `cloud_update_time` - - (Output) - The time at which this configuration version was updated in Cloud IoT Core. - -* `device_ack_time` - - (Output) - The time at which Cloud IoT Core received the acknowledgment from the device, - indicating that the device has received this configuration version. - -* `binary_data` - - (Optional) - The device configuration data. - -The `state` block contains: - -* `update_time` - - (Optional) - The time at which this state version was updated in Cloud IoT Core. - -* `binary_data` - - (Optional) - The device state data. - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. 
- -## Import - - -Device can be imported using any of these accepted formats: - -``` -$ terraform import google_cloudiot_device.default {{registry}}/devices/{{name}} -``` diff --git a/website/docs/r/cloudiot_registry.html.markdown b/website/docs/r/cloudiot_registry.html.markdown deleted file mode 100644 index dcbe0306ae5..00000000000 --- a/website/docs/r/cloudiot_registry.html.markdown +++ /dev/null @@ -1,225 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Cloud IoT Core" -description: |- - A Google Cloud IoT Core device registry. ---- - -# google\_cloudiot\_registry -~> **Warning:** `google_cloudiot_registry` is deprecated in the API. This resource will be removed in the next major release of the provider. - -A Google Cloud IoT Core device registry. - - -To get more information about DeviceRegistry, see: - -* [API documentation](https://cloud.google.com/iot/docs/reference/cloudiot/rest/) -* How-to Guides - * [Official Documentation](https://cloud.google.com/iot/docs/) - -## Example Usage - Cloudiot Device Registry Basic - - -```hcl -resource "google_cloudiot_registry" "test-registry" { - name = "cloudiot-registry" -} -``` -## Example Usage - Cloudiot Device Registry Single Event Notification Configs - - -```hcl -resource "google_pubsub_topic" "default-telemetry" { - name = "default-telemetry" -} - -resource "google_cloudiot_registry" "test-registry" { - name = "cloudiot-registry" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - subfolder_matches = "" - } - -} -``` -## Example Usage - Cloudiot Device Registry Full - - -```hcl -resource "google_pubsub_topic" "default-devicestatus" { - name = "default-devicestatus" -} - -resource "google_pubsub_topic" "default-telemetry" { - name = "default-telemetry" -} - -resource "google_pubsub_topic" "additional-telemetry" { - name = "additional-telemetry" -} - -resource "google_cloudiot_registry" "test-registry" { - name = "cloudiot-registry" - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.additional-telemetry.id - subfolder_matches = "test/path" - } - - event_notification_configs { - pubsub_topic_name = google_pubsub_topic.default-telemetry.id - subfolder_matches = "" - } - - state_notification_config = { - pubsub_topic_name = google_pubsub_topic.default-devicestatus.id - } - - mqtt_config = { - mqtt_enabled_state = "MQTT_ENABLED" - } - - http_config = { - http_enabled_state = "HTTP_ENABLED" - } - - log_level = "INFO" - - credentials { - public_key_certificate = { - format = "X509_CERTIFICATE_PEM" - certificate = file("test-fixtures/rsa_cert.pem") - } - } -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `name` - - (Required) - A unique name for the resource, required by device registry. - - -- - - - - -* `event_notification_configs` - - (Optional) - List of configurations for event notifications, such as PubSub topics - to publish device events to. - Structure is [documented below](#nested_event_notification_configs). 
- -* `log_level` - - (Optional) - The default logging verbosity for activity from devices in this - registry. Specifies which events should be written to logs. For - example, if the LogLevel is ERROR, only events that terminate in - errors will be logged. LogLevel is inclusive; enabling INFO logging - will also enable ERROR logging. - Default value is `NONE`. - Possible values are: `NONE`, `ERROR`, `INFO`, `DEBUG`. - -* `region` - - (Optional) - The region in which the created registry should reside. - If it is not provided, the provider region is used. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - -* `state_notification_config` - A PubSub topic to publish device state updates. - The structure is documented below. - -* `mqtt_config` - Activate or deactivate MQTT. - The structure is documented below. - -* `http_config` - Activate or deactivate HTTP. - The structure is documented below. - -* `credentials` - List of public key certificates to authenticate devices. - The structure is documented below. - -The `state_notification_config` block supports: - -* `pubsub_topic_name` - PubSub topic name to publish device state updates. - -The `mqtt_config` block supports: - -* `mqtt_enabled_state` - The field allows `MQTT_ENABLED` or `MQTT_DISABLED`. - -The `http_config` block supports: - -* `http_enabled_state` - The field allows `HTTP_ENABLED` or `HTTP_DISABLED`. - -The `credentials` block supports: - -* `public_key_certificate` - A public key certificate format and data. - -The `public_key_certificate` block supports: - -* `format` - The field allows only `X509_CERTIFICATE_PEM`. - -* `certificate` - The certificate data. - -The `event_notification_configs` block supports: - -* `subfolder_matches` - - (Optional) - If the subfolder name matches this string exactly, this - configuration will be used. The string must not include the - leading '/' character. If empty, all strings are matched. Empty - value can only be used for the last `event_notification_configs` - item. - -* `pubsub_topic_name` - - (Required) - PubSub topic name to publish device events. - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{region}}/registries/{{name}}` - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. - -## Import - - -DeviceRegistry can be imported using any of these accepted formats: - -``` -$ terraform import google_cloudiot_registry.default {{project}}/locations/{{region}}/registries/{{name}} -$ terraform import google_cloudiot_registry.default {{project}}/{{region}}/{{name}} -$ terraform import google_cloudiot_registry.default {{region}}/{{name}} -$ terraform import google_cloudiot_registry.default {{name}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). 
diff --git a/website/docs/r/cloudiot_registry_iam.html.markdown b/website/docs/r/cloudiot_registry_iam.html.markdown deleted file mode 100644 index 01d6d7269ca..00000000000 --- a/website/docs/r/cloudiot_registry_iam.html.markdown +++ /dev/null @@ -1,158 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Cloud IoT Core" -description: |- - Collection of resources to manage IAM policy for Cloud IoT Core DeviceRegistry ---- - -# IAM policy for Cloud IoT Core DeviceRegistry -Three different resources help you manage your IAM policy for Cloud IoT Core DeviceRegistry. Each of these resources serves a different use case: - -* `google_cloudiot_registry_iam_policy`: Authoritative. Sets the IAM policy for the deviceregistry and replaces any existing policy already attached. -* `google_cloudiot_registry_iam_binding`: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the deviceregistry are preserved. -* `google_cloudiot_registry_iam_member`: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the deviceregistry are preserved. - -A data source can be used to retrieve policy data in advent you do not need creation - -* `google_cloudiot_registry_iam_policy`: Retrieves the IAM policy for the deviceregistry - -~> **Note:** `google_cloudiot_registry_iam_policy` **cannot** be used in conjunction with `google_cloudiot_registry_iam_binding` and `google_cloudiot_registry_iam_member` or they will fight over what your policy should be. - -~> **Note:** `google_cloudiot_registry_iam_binding` resources **can be** used in conjunction with `google_cloudiot_registry_iam_member` resources **only if** they do not grant privilege to the same role. 
- - - - -## google\_cloudiot\_registry\_iam\_policy - -```hcl -data "google_iam_policy" "admin" { - binding { - role = "roles/viewer" - members = [ - "user:jane@example.com", - ] - } -} - -resource "google_cloudiot_registry_iam_policy" "policy" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - policy_data = data.google_iam_policy.admin.policy_data -} -``` - -## google\_cloudiot\_registry\_iam\_binding - -```hcl -resource "google_cloudiot_registry_iam_binding" "binding" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - role = "roles/viewer" - members = [ - "user:jane@example.com", - ] -} -``` - -## google\_cloudiot\_registry\_iam\_member - -```hcl -resource "google_cloudiot_registry_iam_member" "member" { - project = google_cloudiot_registry.test-registry.project - region = google_cloudiot_registry.test-registry.region - name = google_cloudiot_registry.test-registry.name - role = "roles/viewer" - member = "user:jane@example.com" -} -``` - - -## Argument Reference - -The following arguments are supported: - -* `name` - (Required) Used to find the parent resource to bind the IAM policy to -* `region` - (Optional) The region in which the created registry should reside. -If it is not provided, the provider region is used. - Used to find the parent resource to bind the IAM policy to. If not specified, - the value will be parsed from the identifier of the parent resource. If no region is provided in the parent identifier and no - region is specified, it is taken from the provider configuration. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the project will be parsed from the identifier of the parent resource. If no project is provided in the parent identifier and no project is specified, the provider project is used. - -* `member/members` - (Required) Identities that will be granted the privilege in `role`. - Each entry can have one of the following values: - * **allUsers**: A special identifier that represents anyone who is on the internet; with or without a Google account. - * **allAuthenticatedUsers**: A special identifier that represents anyone who is authenticated with a Google account or a service account. - * **user:{emailid}**: An email address that represents a specific Google account. For example, alice@gmail.com or joe@example.com. - * **serviceAccount:{emailid}**: An email address that represents a service account. For example, my-other-app@appspot.gserviceaccount.com. - * **group:{emailid}**: An email address that represents a Google group. For example, admins@example.com. - * **domain:{domain}**: A G Suite domain (primary, instead of alias) name that represents all the users of that domain. For example, google.com or example.com. - * **projectOwner:projectid**: Owners of the given project. For example, "projectOwner:my-example-project" - * **projectEditor:projectid**: Editors of the given project. For example, "projectEditor:my-example-project" - * **projectViewer:projectid**: Viewers of the given project. For example, "projectViewer:my-example-project" - -* `role` - (Required) The role that should be applied. Only one - `google_cloudiot_registry_iam_binding` can be used per role. 
Note that custom roles must be of the format - `[projects|organizations]/{parent-name}/roles/{role-name}`. - -* `policy_data` - (Required only by `google_cloudiot_registry_iam_policy`) The policy data generated by - a `google_iam_policy` data source. - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are -exported: - -* `etag` - (Computed) The etag of the IAM policy. - -## Import - -For all import syntaxes, the "resource in question" can take any of the following forms: - -* projects/{{project}}/locations/{{location}}/registries/{{name}} -* {{project}}/{{location}}/{{name}} -* {{location}}/{{name}} -* {{name}} - -Any variables not passed in the import command will be taken from the provider configuration. - -Cloud IoT Core deviceregistry IAM resources can be imported using the resource identifiers, role, and member. - -IAM member imports use space-delimited identifiers: the resource in question, the role, and the member identity, e.g. -``` -$ terraform import google_cloudiot_registry_iam_member.editor "projects/{{project}}/locations/{{location}}/registries/{{device_registry}} roles/viewer user:jane@example.com" -``` - -IAM binding imports use space-delimited identifiers: the resource in question and the role, e.g. -``` -$ terraform import google_cloudiot_registry_iam_binding.editor "projects/{{project}}/locations/{{location}}/registries/{{device_registry}} roles/viewer" -``` - -IAM policy imports use the identifier of the resource in question, e.g. -``` -$ terraform import google_cloudiot_registry_iam_policy.editor projects/{{project}}/locations/{{location}}/registries/{{device_registry}} -``` - --> **Custom Roles**: If you're importing a IAM resource with a custom role, make sure to use the - full name of the custom role, e.g. `[projects/my-project|organizations/my-org]/roles/my-custom-role`. - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/composer_environment.html.markdown b/website/docs/r/composer_environment.html.markdown index 438b688d9f7..94c3fb60920 100644 --- a/website/docs/r/composer_environment.html.markdown +++ b/website/docs/r/composer_environment.html.markdown @@ -262,6 +262,15 @@ The following arguments are supported: No more than 64 labels can be associated with a given environment. Both keys and values must be <= 128 bytes in size. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `region` - (Optional) The location or Compute Engine region for the environment. diff --git a/website/docs/r/compute_address.html.markdown b/website/docs/r/compute_address.html.markdown index 37d5bf29a68..2bbca9a6a0b 100644 --- a/website/docs/r/compute_address.html.markdown +++ b/website/docs/r/compute_address.html.markdown @@ -228,6 +228,9 @@ The following arguments are supported: (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Labels to apply to this address. 
A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `network` - (Optional) The URL of the network in which to reserve the address. This field @@ -275,6 +278,15 @@ In addition to the arguments listed above, the following computed attributes are ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) The fingerprint used for optimistic locking of this resource. Used internally during updates. + +* `terraform_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_disk.html.markdown b/website/docs/r/compute_disk.html.markdown index db249c24179..ec241b9d5bd 100644 --- a/website/docs/r/compute_disk.html.markdown +++ b/website/docs/r/compute_disk.html.markdown @@ -159,6 +159,9 @@ The following arguments are supported: (Optional) Labels to apply to this disk. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `size` - (Optional) Size of the persistent disk, specified in GB. You can specify this @@ -429,6 +432,13 @@ In addition to the arguments listed above, the following computed attributes are be used to determine whether the image was taken from the current or a previous instance of a given disk name. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `source_image_id` - The ID value of the image used to create this disk. This value identifies the exact image that was used to create this persistent diff --git a/website/docs/r/compute_external_vpn_gateway.html.markdown b/website/docs/r/compute_external_vpn_gateway.html.markdown index 6726e6d3251..1ecc294ba3e 100644 --- a/website/docs/r/compute_external_vpn_gateway.html.markdown +++ b/website/docs/r/compute_external_vpn_gateway.html.markdown @@ -164,6 +164,8 @@ The following arguments are supported: * `labels` - (Optional) Labels for the external VPN gateway resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `redundancy_type` - (Optional) @@ -205,6 +207,13 @@ In addition to the arguments listed above, the following computed attributes are * `label_fingerprint` - The fingerprint used for optimistic locking of this resource. Used internally during updates. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_forwarding_rule.html.markdown b/website/docs/r/compute_forwarding_rule.html.markdown index 5c0d41be004..cfc1992bc0a 100644 --- a/website/docs/r/compute_forwarding_rule.html.markdown +++ b/website/docs/r/compute_forwarding_rule.html.markdown @@ -1493,6 +1493,9 @@ The following arguments are supported: (Optional) Labels to apply to this forwarding rule. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `all_ports` - (Optional) This field can only be used: @@ -1603,6 +1606,13 @@ In addition to the arguments listed above, the following computed attributes are * `base_forwarding_rule` - [Output Only] The URL for the corresponding base Forwarding Rule. By base Forwarding Rule, we mean the Forwarding Rule that has the same IP address, protocol, and port settings with the current Forwarding Rule, but without sourceIPRanges specified. Always empty if the current Forwarding Rule does not have sourceIPRanges specified. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_global_address.html.markdown b/website/docs/r/compute_global_address.html.markdown index 13d6f666b4b..26b1b08152f 100644 --- a/website/docs/r/compute_global_address.html.markdown +++ b/website/docs/r/compute_global_address.html.markdown @@ -100,6 +100,9 @@ The following arguments are supported: (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Labels to apply to this address. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `ip_version` - (Optional) The IP Version that will be used by this address. The default value is `IPV4`. @@ -150,6 +153,15 @@ In addition to the arguments listed above, the following computed attributes are ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) The fingerprint used for optimistic locking of this resource. Used internally during updates. + +* `terraform_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. 
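The `labels` changes above (`google_compute_address`, `google_compute_disk`, `google_compute_forwarding_rule`, `google_compute_global_address`, and the other Compute resources in this changeset) all follow the same pattern: the `labels` argument is now non-authoritative, `terraform_labels` reports the merge of configuration labels with provider default labels, and `effective_labels` reports everything present on the resource. A minimal sketch of how the three interact, using `google_compute_disk` (whose `labels` field is GA); the project ID, zone, and label values are placeholders, and the provider-level `default_labels` block is assumed to be available in this provider version:

```hcl
provider "google" {
  project = "my-project-id" # placeholder

  # Provider-level default labels are merged into every resource that
  # supports labels.
  default_labels = {
    team = "platform"
  }
}

resource "google_compute_disk" "example" {
  name = "example-disk"
  zone = "us-central1-a"
  size = 10

  # Non-authoritative: only the `env` key is managed here. Labels added by
  # other clients are left untouched and surface in `effective_labels`.
  labels = {
    env = "dev"
  }
}

output "disk_terraform_labels" {
  # Expected to contain both `env` (from the resource) and `team` (from the
  # provider's default_labels).
  value = google_compute_disk.example.terraform_labels
}

output "disk_effective_labels" {
  # Additionally includes any labels applied outside of Terraform.
  value = google_compute_disk.example.effective_labels
}
```

The same reading applies to the Beta-only `labels` fields (for example on `google_compute_address` and `google_compute_global_address`), which require the `google-beta` provider.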
diff --git a/website/docs/r/compute_global_forwarding_rule.html.markdown b/website/docs/r/compute_global_forwarding_rule.html.markdown index 5fb5d8e9261..0246fa17c18 100644 --- a/website/docs/r/compute_global_forwarding_rule.html.markdown +++ b/website/docs/r/compute_global_forwarding_rule.html.markdown @@ -1279,6 +1279,9 @@ The following arguments are supported: (Optional) Labels to apply to this forwarding rule. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `load_balancing_scheme` - (Optional) Specifies the forwarding rule type. @@ -1410,6 +1413,13 @@ In addition to the arguments listed above, the following computed attributes are * `base_forwarding_rule` - [Output Only] The URL for the corresponding base Forwarding Rule. By base Forwarding Rule, we mean the Forwarding Rule that has the same IP address, protocol, and port settings with the current Forwarding Rule, but without sourceIPRanges specified. Always empty if the current Forwarding Rule does not have sourceIPRanges specified. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_image.html.markdown b/website/docs/r/compute_image.html.markdown index f22556eede3..f63334aabf4 100644 --- a/website/docs/r/compute_image.html.markdown +++ b/website/docs/r/compute_image.html.markdown @@ -163,6 +163,8 @@ The following arguments are supported: * `labels` - (Optional) Labels to apply to this Image. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `licenses` - (Optional) @@ -260,6 +262,13 @@ In addition to the arguments listed above, the following computed attributes are * `label_fingerprint` - The fingerprint used for optimistic locking of this resource. Used internally during updates. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_instance.html.markdown b/website/docs/r/compute_instance.html.markdown index 00e16e34114..7ad77392768 100644 --- a/website/docs/r/compute_instance.html.markdown +++ b/website/docs/r/compute_instance.html.markdown @@ -117,6 +117,14 @@ The following arguments are supported: For more details about this behavior, see [this section](https://www.terraform.io/docs/configuration/attr-as-blocks.html#defining-a-fixed-object-collection-value). * `labels` - (Optional) A map of key/value label pairs to assign to the instance. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. 
+ +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `metadata` - (Optional) Metadata key/value pairs to make available from within the instance. Ssh keys attached in the Cloud Console will be removed. diff --git a/website/docs/r/compute_instance_group_named_port.html.markdown b/website/docs/r/compute_instance_group_named_port.html.markdown index 124ad04738c..b1597c2945a 100644 --- a/website/docs/r/compute_instance_group_named_port.html.markdown +++ b/website/docs/r/compute_instance_group_named_port.html.markdown @@ -81,6 +81,7 @@ resource "google_container_cluster" "my_cluster" { cluster_ipv4_cidr_block = "/19" services_ipv4_cidr_block = "/22" } + deletion_protection = "true" } ``` diff --git a/website/docs/r/compute_instance_template.html.markdown b/website/docs/r/compute_instance_template.html.markdown index c04c64137d4..a1c450d39cc 100644 --- a/website/docs/r/compute_instance_template.html.markdown +++ b/website/docs/r/compute_instance_template.html.markdown @@ -308,6 +308,15 @@ The following arguments are supported: * `labels` - (Optional) A set of key/value label pairs to assign to instances created from this template. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `metadata` - (Optional) Metadata key/value pairs to make available from within instances created from this template. diff --git a/website/docs/r/compute_network_peering_routes_config.html.markdown b/website/docs/r/compute_network_peering_routes_config.html.markdown index f305db2af28..f675a7fb198 100644 --- a/website/docs/r/compute_network_peering_routes_config.html.markdown +++ b/website/docs/r/compute_network_peering_routes_config.html.markdown @@ -134,6 +134,7 @@ resource "google_container_cluster" "private_cluster" { cluster_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.container_subnetwork.secondary_ip_range[1].range_name } + deletion_protection = "true" } ``` diff --git a/website/docs/r/compute_node_group.html.markdown b/website/docs/r/compute_node_group.html.markdown index 1170c1226e2..5becf6a067e 100644 --- a/website/docs/r/compute_node_group.html.markdown +++ b/website/docs/r/compute_node_group.html.markdown @@ -28,11 +28,6 @@ To get more information about NodeGroup, see: * How-to Guides * [Sole-Tenant Nodes](https://cloud.google.com/compute/docs/nodes/) -~> **Warning:** Due to limitations of the API, Terraform cannot update the -number of nodes in a node group and changes to node group size either -through Terraform config or through external changes will cause -Terraform to delete and recreate the node group. -
Open in Cloud Shell @@ -53,7 +48,7 @@ resource "google_compute_node_group" "nodes" { zone = "us-central1-f" description = "example google_compute_node_group for Terraform Google Provider" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id } ``` @@ -110,7 +105,7 @@ resource "google_compute_node_group" "nodes" { zone = "us-central1-f" description = "example google_compute_node_group for Terraform Google Provider" - size = 1 + initial_size = 1 node_template = google_compute_node_template.soletenant-tmpl.id share_settings { @@ -144,13 +139,9 @@ The following arguments are supported: (Optional) Name of the resource. -* `size` - - (Optional) - The total number of nodes in the node group. One of `initial_size` or `size` must be specified. - * `initial_size` - (Optional) - The initial number of nodes in the node group. One of `initial_size` or `size` must be specified. + The initial number of nodes in the node group. One of `initial_size` or `autoscaling_policy` must be configured on resource creation. * `maintenance_policy` - (Optional) @@ -165,6 +156,7 @@ The following arguments are supported: (Optional) If you use sole-tenant nodes for your workloads, you can use the node group autoscaler to automatically manage the sizes of your node groups. + One of `initial_size` or `autoscaling_policy` must be configured on resource creation. Structure is [documented below](#nested_autoscaling_policy). * `share_settings` - @@ -237,6 +229,9 @@ In addition to the arguments listed above, the following computed attributes are * `creation_timestamp` - Creation timestamp in RFC3339 text format. + +* `size` - + The total number of nodes in the node group. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_region_disk.html.markdown b/website/docs/r/compute_region_disk.html.markdown index 2ae64adbf6d..e24043dd78c 100644 --- a/website/docs/r/compute_region_disk.html.markdown +++ b/website/docs/r/compute_region_disk.html.markdown @@ -176,6 +176,9 @@ The following arguments are supported: (Optional) Labels to apply to this disk. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `size` - (Optional) Size of the persistent disk, specified in GB. You can specify this @@ -343,6 +346,13 @@ In addition to the arguments listed above, the following computed attributes are be used to determine whether the image was taken from the current or a previous instance of a given disk name. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `source_snapshot_id` - The unique ID of the snapshot used to create this disk. 
This value identifies the exact snapshot that was used to create this persistent diff --git a/website/docs/r/compute_region_instance_template.html.markdown b/website/docs/r/compute_region_instance_template.html.markdown index 4e1c3a29838..5364b319711 100644 --- a/website/docs/r/compute_region_instance_template.html.markdown +++ b/website/docs/r/compute_region_instance_template.html.markdown @@ -320,6 +320,15 @@ The following arguments are supported: * `labels` - (Optional) A set of key/value label pairs to assign to instances created from this template. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `metadata` - (Optional) Metadata key/value pairs to make available from within instances created from this template. diff --git a/website/docs/r/compute_router_nat.html.markdown b/website/docs/r/compute_router_nat.html.markdown index ca19a076a31..ff73e607741 100644 --- a/website/docs/r/compute_router_nat.html.markdown +++ b/website/docs/r/compute_router_nat.html.markdown @@ -358,8 +358,8 @@ The following arguments are supported: * `enable_endpoint_independent_mapping` - (Optional) - Specifies if endpoint independent mapping is enabled. This is enabled by default. For more information - see the [official documentation](https://cloud.google.com/nat/docs/overview#specs-rfcs). + Enable endpoint independent mapping. + For more information see the [official documentation](https://cloud.google.com/nat/docs/overview#specs-rfcs). * `type` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) diff --git a/website/docs/r/compute_service_attachment.html.markdown b/website/docs/r/compute_service_attachment.html.markdown index 844bb02a126..aef81732768 100644 --- a/website/docs/r/compute_service_attachment.html.markdown +++ b/website/docs/r/compute_service_attachment.html.markdown @@ -359,7 +359,6 @@ The following arguments are supported: This flag determines whether a consumer accept/reject list change can reconcile the statuses of existing ACCEPTED or REJECTED PSC endpoints. If false, connection policy update will only affect existing PENDING PSC endpoints. Existing ACCEPTED/REJECTED endpoints will remain untouched regardless how the connection policy is modified . If true, update will affect both PENDING and ACCEPTED/REJECTED PSC endpoints. For example, an ACCEPTED PSC endpoint will be moved to REJECTED if its project is added to the reject list. - For newly created service attachment, this boolean defaults to true. * `region` - (Optional) diff --git a/website/docs/r/compute_snapshot.html.markdown b/website/docs/r/compute_snapshot.html.markdown index fed6e3ff94c..ea940a347c4 100644 --- a/website/docs/r/compute_snapshot.html.markdown +++ b/website/docs/r/compute_snapshot.html.markdown @@ -152,6 +152,8 @@ The following arguments are supported: * `labels` - (Optional) Labels to apply to this Snapshot. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. 
* `zone` - (Optional) @@ -246,6 +248,13 @@ In addition to the arguments listed above, the following computed attributes are * `label_fingerprint` - The fingerprint used for optimistic locking of this resource. Used internally during updates. + +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/compute_vpn_tunnel.html.markdown b/website/docs/r/compute_vpn_tunnel.html.markdown index 3adcfa52c95..398b7a01da6 100644 --- a/website/docs/r/compute_vpn_tunnel.html.markdown +++ b/website/docs/r/compute_vpn_tunnel.html.markdown @@ -278,6 +278,8 @@ The following arguments are supported: * `labels` - (Optional, [Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) Labels to apply to this VpnTunnel. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `region` - (Optional) @@ -309,6 +311,15 @@ In addition to the arguments listed above, the following computed attributes are * `detailed_status` - Detailed status message for the VPN tunnel. + +* `terraform_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + ([Beta](https://terraform.io/docs/providers/google/guides/provider_versions.html)) + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `self_link` - The URI of the created resource. diff --git a/website/docs/r/container_attached_cluster.html.markdown b/website/docs/r/container_attached_cluster.html.markdown index 76b143aff51..7a62c9cfb17 100644 --- a/website/docs/r/container_attached_cluster.html.markdown +++ b/website/docs/r/container_attached_cluster.html.markdown @@ -338,6 +338,9 @@ In addition to the arguments listed above, the following computed attributes are A set of errors found in the cluster. Structure is [documented below](#nested_errors). +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `workload_identity_config` block contains: diff --git a/website/docs/r/container_aws_cluster.html.markdown b/website/docs/r/container_aws_cluster.html.markdown index 010474d2f06..84951558915 100644 --- a/website/docs/r/container_aws_cluster.html.markdown +++ b/website/docs/r/container_aws_cluster.html.markdown @@ -462,6 +462,12 @@ The `networking` block supports: * `annotations` - (Optional) Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between. 
+ +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. + +* `binary_authorization` - + (Optional) + Configuration options for the Binary Authorization feature. * `description` - (Optional) @@ -477,6 +483,12 @@ The `networking` block supports: +The `binary_authorization` block supports: + +* `evaluation_mode` - + (Optional) + Mode of operation for Binary Authorization policy evaluation. Possible values: DISABLED, PROJECT_SINGLETON_POLICY_ENFORCE + The `instance_placement` block supports: * `tenancy` - @@ -564,6 +576,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time at which this cluster was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + * `endpoint` - Output only. The endpoint of the cluster's API server. diff --git a/website/docs/r/container_aws_node_pool.html.markdown b/website/docs/r/container_aws_node_pool.html.markdown index 262ad6686d5..0ceba8ab2f7 100644 --- a/website/docs/r/container_aws_node_pool.html.markdown +++ b/website/docs/r/container_aws_node_pool.html.markdown @@ -629,6 +629,8 @@ The `max_pods_constraint` block supports: * `annotations` - (Optional) Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Key can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `management` - (Optional) @@ -638,6 +640,10 @@ The `max_pods_constraint` block supports: (Optional) The project for the resource +* `update_settings` - + (Optional) + (Beta only) Optional. Update settings control the speed and disruption of the node pool update. + The `autoscaling_metrics_collection` block supports: @@ -720,6 +726,22 @@ The `management` block supports: (Optional) Optional. Whether or not the nodes will be automatically repaired. +The `update_settings` block supports: + +* `surge_settings` - + (Optional) + Optional. Settings for surge update. + +The `surge_settings` block supports: + +* `max_surge` - + (Optional) + Optional. The maximum number of nodes that can be created beyond the current size of the node pool during the update process. + +* `max_unavailable` - + (Optional) + Optional. The maximum number of nodes that can be simultaneously unavailable during the update process. A node is considered unavailable if its status is not Ready. + ## Attributes Reference In addition to the arguments listed above, the following computed attributes are exported: @@ -729,6 +751,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time at which this node pool was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. 
+ * `etag` - Allows clients to perform consistent read-modify-writes through optimistic concurrency control. May be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. diff --git a/website/docs/r/container_azure_cluster.html.markdown b/website/docs/r/container_azure_cluster.html.markdown index b05bed6ebd7..8e5a464d347 100644 --- a/website/docs/r/container_azure_cluster.html.markdown +++ b/website/docs/r/container_azure_cluster.html.markdown @@ -269,6 +269,8 @@ The `networking` block supports: * `annotations` - (Optional) Optional. Annotations on the cluster. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `azure_services_authentication` - (Optional) @@ -361,6 +363,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time at which this cluster was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + * `endpoint` - Output only. The endpoint of the cluster's API server. diff --git a/website/docs/r/container_azure_node_pool.html.markdown b/website/docs/r/container_azure_node_pool.html.markdown index 7d9150ea966..0b4046855a8 100644 --- a/website/docs/r/container_azure_node_pool.html.markdown +++ b/website/docs/r/container_azure_node_pool.html.markdown @@ -220,6 +220,8 @@ The `max_pods_constraint` block supports: * `annotations` - (Optional) Optional. Annotations on the node pool. This field has the same restrictions as Kubernetes annotations. The total size of all keys and values combined is limited to 256k. Keys can have 2 segments: prefix (optional) and name (required), separated by a slash (/). Prefix must be a DNS subdomain. Name must be 63 characters or less, begin and end with alphanumerics, with dashes (-), underscores (_), dots (.), and alphanumerics between. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `azure_availability_zone` - (Optional) @@ -266,6 +268,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time at which this node pool was created. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + * `etag` - Allows clients to perform consistent read-modify-writes through optimistic concurrency control. May be sent on update and delete requests to ensure the client has an up-to-date value before proceeding. 
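For the new `update_settings` block on `google_container_aws_node_pool` documented above (currently beta only), the nesting looks like the sketch below. Everything other than `update_settings` is an illustrative placeholder for the node pool's usual required arguments (cluster name, AWS ARNs, subnet, version), not values taken from this changeset:

```hcl
resource "google_container_aws_node_pool" "example" {
  provider = google-beta # `update_settings` is currently beta-only

  name      = "example-node-pool"        # placeholder
  cluster   = "example-aws-cluster"      # placeholder: existing AWS-attached cluster name
  location  = "us-west1"                 # placeholder
  subnet_id = "subnet-00000000000000000" # placeholder
  version   = "1.27.4-gke.1600"          # placeholder

  autoscaling {
    min_node_count = 1
    max_node_count = 3
  }

  config {
    iam_instance_profile = "example-instance-profile" # placeholder
    config_encryption {
      kms_key_arn = "arn:aws:kms:us-west-2:111122223333:key/example" # placeholder
    }
  }

  max_pods_constraint {
    max_pods_per_node = 110
  }

  # New in this release: controls the speed and disruption of node pool updates.
  update_settings {
    surge_settings {
      max_surge       = 1 # extra nodes allowed beyond the current pool size
      max_unavailable = 0 # nodes allowed to be unavailable during the update
    }
  }
}
```

As with the `annotations` fields above, the annotation keys configured here are managed non-authoritatively, with `effective_annotations` exposing everything present on the node pool.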
diff --git a/website/docs/r/container_cluster.html.markdown b/website/docs/r/container_cluster.html.markdown index cc08f95c240..2ad540829e0 100644 --- a/website/docs/r/container_cluster.html.markdown +++ b/website/docs/r/container_cluster.html.markdown @@ -16,6 +16,10 @@ Manages a Google Kubernetes Engine (GKE) cluster. For more information see [the official documentation](https://cloud.google.com/container-engine/docs/clusters) and [the API reference](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters). +-> **Note**: On version 5.0.0+ of the provider, you must explicitly set `deletion_protection=false` +(and run `terraform apply` to write the field to state) in order to destroy a cluster. +It is recommended to not set this field (or set it to true) until you're ready to destroy. + ~> **Warning:** All arguments and attributes, including basic auth username and passwords as well as certificate outputs will be stored in the raw state as plaintext. [Read more about sensitive data in state](https://www.terraform.io/language/state/sensitive-data). @@ -118,6 +122,10 @@ locations. In contrast, in a regional cluster, cluster master nodes are present in multiple zones in the region. For that reason, regional clusters should be preferred. +* `deletion_protection` - (Optional) Whether or not to allow Terraform to destroy +the cluster. Unless this field is set to false in Terraform state, a +`terraform destroy` or `terraform apply` that would delete the cluster will fail. + * `addons_config` - (Optional) The configuration for addons supported by GKE. Structure is [documented below](#nested_addons_config). @@ -156,10 +164,6 @@ per node in this cluster. This doesn't work on "routes-based" clusters, clusters that don't have IP Aliasing enabled. See the [official documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr) for more information. -* `enable_binary_authorization` - (DEPRECATED) Enable Binary Authorization for this cluster. - If enabled, all container images will be validated by Google Binary Authorization. - Deprecated in favor of `binary_authorization`. - * `enable_kubernetes_alpha` - (Optional) Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days. @@ -460,8 +464,7 @@ addons_config { * `enabled` - (DEPRECATED) Enable Binary Authorization for this cluster. Deprecated in favor of `evaluation_mode`. * `evaluation_mode` - (Optional) Mode of operation for Binary Authorization policy evaluation. Valid values are `DISABLED` - and `PROJECT_SINGLETON_POLICY_ENFORCE`. `PROJECT_SINGLETON_POLICY_ENFORCE` is functionally equivalent to the - deprecated `enable_binary_authorization` parameter being set to `true`. + and `PROJECT_SINGLETON_POLICY_ENFORCE`. The `service_external_ips_config` block supports: @@ -895,14 +898,13 @@ gvnic { * `tags` - (Optional) The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls. -* `taint` - (Optional) A list of [Kubernetes taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) -to apply to nodes. GKE's API can only set this field on cluster creation. -However, GKE will add taints to your nodes if you enable certain features such -as GPUs. If this field is set, any diffs on this field will cause Terraform to -recreate the underlying resource. 
Taint values can be updated safely in -Kubernetes (eg. through `kubectl`), and it's recommended that you do not use -this field to manage taints. If you do, `lifecycle.ignore_changes` is -recommended. Structure is [documented below](#nested_taint). +* `taint` - (Optional) A list of +[Kubernetes taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) +to apply to nodes. This field will only report drift on taint keys that are +already managed with Terraform, use `effective_taints` to view the list of +GKE-managed taints on the node pool from all sources. Importing this resource +will not record any taints as being Terraform-managed, and will cause drift with +any configured taints. Structure is [documented below](#nested_taint). * `workload_metadata_config` - (Optional) Metadata configuration to expose to workloads on the node pool. Structure is [documented below](#nested_workload_metadata_config). @@ -1317,6 +1319,8 @@ exported: * `cluster_autoscaling.0.auto_provisioning_defaults.0.management.0.upgrade_options` - Specifies the [Auto Upgrade knobs](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/NodeManagement#AutoUpgradeOptions) for the node pool. +* `node_config.0.effective_taints` - List of kubernetes taints applied to each node. Structure is [documented above](#nested_taint). + ## Timeouts This resource provides the following diff --git a/website/docs/r/container_node_pool.html.markdown b/website/docs/r/container_node_pool.html.markdown index 523d2684a41..bbf814662a8 100644 --- a/website/docs/r/container_node_pool.html.markdown +++ b/website/docs/r/container_node_pool.html.markdown @@ -201,9 +201,9 @@ cluster. The `management` block supports: -* `auto_repair` - (Optional) Whether the nodes will be automatically repaired. +* `auto_repair` - (Optional) Whether the nodes will be automatically repaired. Enabled by default. -* `auto_upgrade` - (Optional) Whether the nodes will be automatically upgraded. +* `auto_upgrade` - (Optional) Whether the nodes will be automatically upgraded. Enabled by default. The `network_config` block supports: diff --git a/website/docs/r/data_fusion_instance.html.markdown b/website/docs/r/data_fusion_instance.html.markdown index f7e056e8e81..aa621807fbd 100644 --- a/website/docs/r/data_fusion_instance.html.markdown +++ b/website/docs/r/data_fusion_instance.html.markdown @@ -243,6 +243,9 @@ The following arguments are supported: The resource labels for instance to use to annotate any related underlying resources, such as Compute Engine VMs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `options` - (Optional) Map of additional options used to configure the behavior of Data Fusion instance. @@ -385,6 +388,13 @@ In addition to the arguments listed above, the following computed attributes are * `p4_service_account` - P4 service account for the customer project. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/database_migration_service_connection_profile.html.markdown b/website/docs/r/database_migration_service_connection_profile.html.markdown index 7ae2529eb19..5346d60b5b5 100644 --- a/website/docs/r/database_migration_service_connection_profile.html.markdown +++ b/website/docs/r/database_migration_service_connection_profile.html.markdown @@ -183,11 +183,6 @@ resource "google_database_migration_service_connection_profile" "postgresprofile depends_on = [google_sql_user.sqldb_user] } ``` - ## Example Usage - Database Migration Service Connection Profile Alloydb @@ -195,7 +190,7 @@ resource "google_database_migration_service_connection_profile" "postgresprofile data "google_project" "project" { } -data "google_compute_network" "default" { +resource "google_compute_network" "default" { name = "vpc-network" } @@ -204,11 +199,11 @@ resource "google_compute_global_address" "private_ip_alloc" { address_type = "INTERNAL" purpose = "VPC_PEERING" prefix_length = 16 - network = data.google_compute_network.default.id + network = google_compute_network.default.id } resource "google_service_networking_connection" "vpc_connection" { - network = data.google_compute_network.default.id + network = google_compute_network.default.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name] } @@ -228,7 +223,7 @@ resource "google_database_migration_service_connection_profile" "alloydbprofile" user = "alloyuser%{random_suffix}" password = "alloypass%{random_suffix}" } - vpc_network = data.google_compute_network.default.id + vpc_network = google_compute_network.default.id labels = { alloyfoo = "alloybar" } @@ -271,6 +266,9 @@ The following arguments are supported: (Optional) The resource labels for connection profile to use to annotate any related underlying resources such as Compute Engine VMs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `mysql` - (Optional) Specifies connection parameters required specifically for MySQL databases. @@ -658,6 +656,13 @@ In addition to the arguments listed above, the following computed attributes are * `dbprovider` - The database provider. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `error` block contains: diff --git a/website/docs/r/dataflow_flex_template_job.html.markdown b/website/docs/r/dataflow_flex_template_job.html.markdown index 4064c913467..d592b3fdb39 100644 --- a/website/docs/r/dataflow_flex_template_job.html.markdown +++ b/website/docs/r/dataflow_flex_template_job.html.markdown @@ -98,9 +98,13 @@ such as `serviceAccount`, `workerMachineType`, etc can be specified here. * `labels` - (Optional) User labels to be specified for the job. Keys and values should follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions) page. -**NOTE**: Google-provided Dataflow templates often provide default labels -that begin with `goog-dataflow-provided`. Unless explicitly set in config, these -labels will be ignored to prevent diffs on re-apply. 
+**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `on_delete` - (Optional) One of "drain" or "cancel". Specifies behavior of deletion during `terraform destroy`. See above note. diff --git a/website/docs/r/dataflow_job.html.markdown b/website/docs/r/dataflow_job.html.markdown index 6000b964995..a3acbcfcaa7 100644 --- a/website/docs/r/dataflow_job.html.markdown +++ b/website/docs/r/dataflow_job.html.markdown @@ -102,8 +102,11 @@ The following arguments are supported: * `parameters` - (Optional) Key/Value pairs to be passed to the Dataflow job (as used in the template). * `labels` - (Optional) User labels to be specified for the job. Keys and values should follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions) page. - **NOTE**: Google-provided Dataflow templates often provide default labels that begin with `goog-dataflow-provided`. - Unless explicitly set in config, these labels will be ignored to prevent diffs on re-apply. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `transform_name_mapping` - (Optional) Only applicable when updating a pipeline. Map of transform name prefixes of the job to be replaced with the corresponding name prefixes of the new job. This field is not used outside of update. * `max_workers` - (Optional) The number of workers permitted to work on the job. More workers may improve processing speed at additional cost. * `on_delete` - (Optional) One of "drain" or "cancel". Specifies behavior of deletion during `terraform destroy`. See above note. diff --git a/website/docs/r/dataplex_asset.html.markdown b/website/docs/r/dataplex_asset.html.markdown index a223f60e1a5..6beda74fdda 100644 --- a/website/docs/r/dataplex_asset.html.markdown +++ b/website/docs/r/dataplex_asset.html.markdown @@ -78,6 +78,12 @@ resource "google_dataplex_asset" "primary" { name = "projects/my-project-name/buckets/bucket" type = "STORAGE_BUCKET" } + + labels = { + env = "foo" + my-asset = "exists" + } + project = "my-project-name" depends_on = [ @@ -169,6 +175,8 @@ The `resource_spec` block supports: * `labels` - (Optional) Optional. User defined labels for the asset. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -216,6 +224,9 @@ In addition to the arguments listed above, the following computed attributes are * `discovery_status` - Output only. Status of the discovery feature applied to data referenced by this asset. 
+* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `resource_status` - Output only. Status of the resource referenced by this asset. @@ -225,6 +236,9 @@ In addition to the arguments listed above, the following computed attributes are * `state` - Output only. Current state of the asset. Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. System generated globally unique ID for the asset. This ID will be different if the asset is deleted and re-created with the same name. diff --git a/website/docs/r/dataplex_datascan.html.markdown b/website/docs/r/dataplex_datascan.html.markdown index 93c16b11875..fdbfb47e5a3 100644 --- a/website/docs/r/dataplex_datascan.html.markdown +++ b/website/docs/r/dataplex_datascan.html.markdown @@ -326,6 +326,9 @@ The following arguments are supported: (Optional) User-defined labels for the scan. A list of key->value pairs. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `data_quality_spec` - (Optional) DataQualityScan related setting. @@ -608,19 +611,12 @@ In addition to the arguments listed above, the following computed attributes are * `type` - The type of DataScan. -* `data_quality_result` - - (Deprecated) - The result of the data quality scan. - Structure is [documented below](#nested_data_quality_result). - - ~> **Warning:** `data_quality_result` is deprecated and will be removed in a future major release. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. -* `data_profile_result` - - (Deprecated) - The result of the data profile scan. - Structure is [documented below](#nested_data_profile_result). - - ~> **Warning:** `data_profile_result` is deprecated and will be removed in a future major release. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. The `execution_status` block contains: @@ -633,392 +629,6 @@ In addition to the arguments listed above, the following computed attributes are (Output) The time when the latest DataScanJob ended. -The `data_quality_result` block contains: - -* `passed` - - (Output) - Overall data quality result -- true if all rules passed. - -* `dimensions` - - (Optional) - A list of results at the dimension level. - Structure is [documented below](#nested_dimensions). - -* `rules` - - (Output) - A list of all the rules in a job, and their results. - Structure is [documented below](#nested_rules). - -* `row_count` - - (Output) - The count of rows processed. - -* `scanned_data` - - (Output) - The data scanned for this result. - Structure is [documented below](#nested_scanned_data). - - -The `dimensions` block supports: - -* `passed` - - (Optional) - Whether the dimension passed or failed. - -The `rules` block contains: - -* `rule` - - (Output) - The rule specified in the DataQualitySpec, as is. - Structure is [documented below](#nested_rule). - -* `passed` - - (Output) - Whether the rule passed or failed. 
- -* `evaluated_count` - - (Output) - The number of rows a rule was evaluated against. This field is only valid for ColumnMap type rules. - Evaluated count can be configured to either - 1. include all rows (default) - with null rows automatically failing rule evaluation, or - 2. exclude null rows from the evaluatedCount, by setting ignore_nulls = true. - -* `passed_count` - - (Output) - The number of rows which passed a rule evaluation. This field is only valid for ColumnMap type rules. - -* `null_count` - - (Output) - The number of rows with null values in the specified column. - -* `pass_ratio` - - (Output) - The ratio of passedCount / evaluatedCount. This field is only valid for ColumnMap type rules. - -* `failing_rows_query` - - (Output) - The query to find rows that did not pass this rule. Only applies to ColumnMap and RowCondition rules. - - -The `rule` block contains: - -* `column` - - (Optional) - The unnested column which this rule is evaluated against. - -* `ignore_null` - - (Optional) - Rows with null values will automatically fail a rule, unless ignoreNull is true. In that case, such null rows are trivially considered passing. Only applicable to ColumnMap rules. - -* `dimension` - - (Optional) - The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are ["COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"] - -* `threshold` - - (Optional) - The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). - -* `range_expectation` - - (Output) - ColumnMap rule which evaluates whether each column value lies between a specified range. - Structure is [documented below](#nested_range_expectation). - -* `non_null_expectation` - - (Output) - ColumnMap rule which evaluates whether each column value is null. - -* `set_expectation` - - (Output) - ColumnMap rule which evaluates whether each column value is contained by a specified set. - Structure is [documented below](#nested_set_expectation). - -* `regex_expectation` - - (Output) - ColumnMap rule which evaluates whether each column value matches a specified regex. - Structure is [documented below](#nested_regex_expectation). - -* `uniqueness_expectation` - - (Output) - ColumnAggregate rule which evaluates whether the column has duplicates. - -* `statistic_range_expectation` - - (Output) - ColumnAggregate rule which evaluates whether the column aggregate statistic lies between a specified range. - Structure is [documented below](#nested_statistic_range_expectation). - -* `row_condition_expectation` - - (Output) - Table rule which evaluates whether each row passes the specified condition. - Structure is [documented below](#nested_row_condition_expectation). - -* `table_condition_expectation` - - (Output) - Table rule which evaluates whether the provided expression is true. - Structure is [documented below](#nested_table_condition_expectation). - - -The `range_expectation` block contains: - -* `min_value` - - (Optional) - The minimum column value allowed for a row to pass this validation. At least one of minValue and maxValue need to be provided. - -* `max_value` - - (Optional) - The maximum column value allowed for a row to pass this validation. At least one of minValue and maxValue need to be provided. - -* `strict_min_enabled` - - (Optional) - Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. 
- Only relevant if a minValue has been defined. Default = false. - -* `strict_max_enabled` - - (Optional) - Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. - Only relevant if a maxValue has been defined. Default = false. - -The `set_expectation` block contains: - -* `values` - - (Optional) - Expected values for the column value. - -The `regex_expectation` block contains: - -* `regex` - - (Optional) - A regular expression the column value is expected to match. - -The `statistic_range_expectation` block contains: - -* `statistic` - - (Optional) - column statistics. - Possible values are: `STATISTIC_UNDEFINED`, `MEAN`, `MIN`, `MAX`. - -* `min_value` - - (Optional) - The minimum column statistic value allowed for a row to pass this validation. - At least one of minValue and maxValue need to be provided. - -* `max_value` - - (Optional) - The maximum column statistic value allowed for a row to pass this validation. - At least one of minValue and maxValue need to be provided. - -* `strict_min_enabled` - - (Optional) - Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. - Only relevant if a minValue has been defined. Default = false. - -* `strict_max_enabled` - - (Optional) - Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. - Only relevant if a maxValue has been defined. Default = false. - -The `row_condition_expectation` block contains: - -* `sql_expression` - - (Optional) - The SQL expression. - -The `table_condition_expectation` block contains: - -* `sql_expression` - - (Optional) - The SQL expression. - -The `scanned_data` block contains: - -* `incremental_field` - - (Optional) - The range denoted by values of an incremental field - Structure is [documented below](#nested_incremental_field). - - -The `incremental_field` block supports: - -* `field` - - (Optional) - The field that contains values which monotonically increases over time (e.g. a timestamp column). - -* `start` - - (Optional) - Value that marks the start of the range. - -* `end` - - (Optional) - Value that marks the end of the range. - -The `data_profile_result` block contains: - -* `row_count` - - (Optional) - The count of rows scanned. - -* `profile` - - (Output) - The profile information per field. - Structure is [documented below](#nested_profile). - -* `scanned_data` - - (Output) - The data scanned for this result. - Structure is [documented below](#nested_scanned_data). - - -The `profile` block contains: - -* `fields` - - (Optional) - List of fields with structural and profile information for each field. - Structure is [documented below](#nested_fields). - - -The `fields` block supports: - -* `name` - - (Optional) - The name of the field. - -* `type` - - (Optional) - The field data type. - -* `mode` - - (Optional) - The mode of the field. Possible values include: - 1. REQUIRED, if it is a required field. - 2. NULLABLE, if it is an optional field. - 3. REPEATED, if it is a repeated field. - -* `profile` - - (Optional) - Profile information for the corresponding field. - Structure is [documented below](#nested_profile). - - -The `profile` block supports: - -* `null_ratio` - - (Output) - Ratio of rows with null value against total scanned rows. - -* `distinct_ratio` - - (Optional) - Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode. 
- -* `top_n_values` - - (Optional) - The list of top N non-null values and number of times they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode. - Structure is [documented below](#nested_top_n_values). - -* `string_profile` - - (Output) - String type field information. - Structure is [documented below](#nested_string_profile). - -* `integer_profile` - - (Output) - Integer type field information. - Structure is [documented below](#nested_integer_profile). - -* `double_profile` - - (Output) - Double type field information. - Structure is [documented below](#nested_double_profile). - - -The `top_n_values` block supports: - -* `value` - - (Optional) - String value of a top N non-null value. - -* `count` - - (Optional) - Count of the corresponding value in the scanned data. - -The `string_profile` block contains: - -* `min_length` - - (Optional) - Minimum length of non-null values in the scanned data. - -* `max_length` - - (Optional) - Maximum length of non-null values in the scanned data. - -* `average_length` - - (Optional) - Average length of non-null values in the scanned data. - -The `integer_profile` block contains: - -* `average` - - (Optional) - Average of non-null values in the scanned data. NaN, if the field has a NaN. - -* `standard_deviation` - - (Optional) - Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN. - -* `min` - - (Optional) - Minimum of non-null values in the scanned data. NaN, if the field has a NaN. - -* `quartiles` - - (Optional) - A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3. - -* `max` - - (Optional) - Maximum of non-null values in the scanned data. NaN, if the field has a NaN. - -The `double_profile` block contains: - -* `average` - - (Optional) - Average of non-null values in the scanned data. NaN, if the field has a NaN. - -* `standard_deviation` - - (Optional) - Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN. - -* `min` - - (Optional) - Minimum of non-null values in the scanned data. NaN, if the field has a NaN. - -* `quartiles` - - (Optional) - A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. 
Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3. - -* `max` - - (Optional) - Maximum of non-null values in the scanned data. NaN, if the field has a NaN. - -The `scanned_data` block contains: - -* `incremental_field` - - (Optional) - The range denoted by values of an incremental field - Structure is [documented below](#nested_incremental_field). - - -The `incremental_field` block supports: - -* `field` - - (Optional) - The field that contains values which monotonically increases over time (e.g. a timestamp column). - -* `start` - - (Optional) - Value that marks the start of the range. - -* `end` - - (Optional) - Value that marks the end of the range. - ## Timeouts This resource provides the following diff --git a/website/docs/r/dataplex_lake.html.markdown b/website/docs/r/dataplex_lake.html.markdown index 561a29f4349..f335e8d803d 100644 --- a/website/docs/r/dataplex_lake.html.markdown +++ b/website/docs/r/dataplex_lake.html.markdown @@ -30,12 +30,11 @@ resource "google_dataplex_lake" "primary" { name = "lake" description = "Lake for DCL" display_name = "Lake for DCL" + project = "my-project-name" labels = { my-lake = "exists" } - - project = "my-project-name" } @@ -68,6 +67,8 @@ The following arguments are supported: * `labels` - (Optional) Optional. User-defined labels for the lake. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `metastore` - (Optional) @@ -97,6 +98,9 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time when the lake was created. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `metastore_status` - Output only. Metastore status of the lake. @@ -106,6 +110,9 @@ In addition to the arguments listed above, the following computed attributes are * `state` - Output only. Current state of the lake. Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. System generated globally unique ID for the lake. This ID will be different if the lake is deleted and re-created with the same name. diff --git a/website/docs/r/dataplex_task.html.markdown b/website/docs/r/dataplex_task.html.markdown index 5a93731ab84..13b3e434238 100644 --- a/website/docs/r/dataplex_task.html.markdown +++ b/website/docs/r/dataplex_task.html.markdown @@ -287,6 +287,9 @@ The following arguments are supported: (Optional) User-defined labels for the task. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `spark` - (Optional) A service with manual scaling runs continuously, allowing you to perform complex initialization and rely on the state of its memory over time. @@ -514,6 +517,13 @@ In addition to the arguments listed above, the following computed attributes are Configuration for the cluster Structure is [documented below](#nested_execution_status). 
+* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `execution_status` block contains: diff --git a/website/docs/r/dataplex_zone.html.markdown b/website/docs/r/dataplex_zone.html.markdown index 05a77186bb2..ceb36b0eac5 100644 --- a/website/docs/r/dataplex_zone.html.markdown +++ b/website/docs/r/dataplex_zone.html.markdown @@ -41,8 +41,8 @@ resource "google_dataplex_zone" "primary" { type = "RAW" description = "Zone for DCL" display_name = "Zone for DCL" - labels = {} project = "my-project-name" + labels = {} } resource "google_dataplex_lake" "basic" { @@ -50,12 +50,11 @@ resource "google_dataplex_lake" "basic" { name = "lake" description = "Lake for DCL" display_name = "Lake for DCL" + project = "my-project-name" labels = { my-lake = "exists" } - - project = "my-project-name" } @@ -136,6 +135,8 @@ The `resource_spec` block supports: * `labels` - (Optional) Optional. User defined labels for the zone. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -183,9 +184,15 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time when the zone was created. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `state` - Output only. Current state of the zone. Possible values: STATE_UNSPECIFIED, ACTIVE, CREATING, DELETING, ACTION_REQUIRED +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. System generated globally unique ID for the zone. This ID will be different if the zone is deleted and re-created with the same name. diff --git a/website/docs/r/dataproc_cluster.html.markdown b/website/docs/r/dataproc_cluster.html.markdown index 1aa6310d0b1..e4efda5aa41 100644 --- a/website/docs/r/dataproc_cluster.html.markdown +++ b/website/docs/r/dataproc_cluster.html.markdown @@ -129,7 +129,14 @@ resource "google_dataproc_cluster" "accelerated_cluster" { * `region` - (Optional) The region in which the cluster and associated nodes will be created in. Defaults to `global`. -* `labels` - (Optional, Computed) The list of labels (key/value pairs) to be applied to +* `labels` - (Optional) The list of labels (key/value pairs) configured on the resource through Terraform and to be applied to + instances in the cluster. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - (Computed) The list of labels (key/value pairs) to be applied to instances in the cluster. GCP generates some itself including `goog-dataproc-cluster-name` which is the name of the cluster. 
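
For readers tracking the new label surface across these docs, here is a minimal sketch, assuming the provider-level `default_labels` argument referenced by these pages; the project ID, cluster name, and label values are placeholders. It illustrates how `labels`, `terraform_labels`, and `effective_labels` relate on a resource such as `google_dataproc_cluster`.

```hcl
# Illustrative only: placeholder project ID, cluster name, and label values.
provider "google" {
  project = "my-project-name"

  # Provider default labels are merged into every resource that supports labels.
  default_labels = {
    managed-by = "terraform"
  }
}

resource "google_dataproc_cluster" "example" {
  name   = "example-cluster"
  region = "us-central1"

  # Only the labels written here are managed (the field is non-authoritative).
  labels = {
    env = "test"
  }
}

# labels           => { env = "test" }
# terraform_labels => { env = "test", managed-by = "terraform" }
# effective_labels => the above plus labels attached by GCP or other clients,
#                     e.g. goog-dataproc-cluster-name
output "effective_labels" {
  value = google_dataproc_cluster.example.effective_labels
}
```
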
diff --git a/website/docs/r/dataproc_job.html.markdown b/website/docs/r/dataproc_job.html.markdown index 3c8b99fee70..84094a189b4 100644 --- a/website/docs/r/dataproc_job.html.markdown +++ b/website/docs/r/dataproc_job.html.markdown @@ -102,6 +102,14 @@ output "pyspark_status" { job is first cancelled before issuing the delete. * `labels` - (Optional) The list of labels (key/value pairs) to add to the job. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `scheduling.max_failures_per_hour` - (Required) Maximum number of times per hour a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. diff --git a/website/docs/r/dataproc_metastore_federation.html.markdown b/website/docs/r/dataproc_metastore_federation.html.markdown index 741430e87a3..f300ab1b5c7 100644 --- a/website/docs/r/dataproc_metastore_federation.html.markdown +++ b/website/docs/r/dataproc_metastore_federation.html.markdown @@ -136,6 +136,8 @@ The following arguments are supported: * `labels` - (Optional) User-defined labels for the metastore federation. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `location` - (Optional) @@ -166,6 +168,13 @@ In addition to the arguments listed above, the following computed attributes are * `uid` - The globally unique resource identifier of the metastore federation. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/dataproc_metastore_service.html.markdown b/website/docs/r/dataproc_metastore_service.html.markdown index e3cd9b5b996..64174d0d761 100644 --- a/website/docs/r/dataproc_metastore_service.html.markdown +++ b/website/docs/r/dataproc_metastore_service.html.markdown @@ -51,6 +51,10 @@ resource "google_dataproc_metastore_service" "default" { hive_metastore_config { version = "2.3.6" } + + labels = { + env = "test" + } } ``` ## Example Usage - Dataproc Metastore Service Cmek Example @@ -187,6 +191,8 @@ The following arguments are supported: * `labels` - (Optional) User-defined labels for the metastore service. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `network` - (Optional) @@ -425,6 +431,13 @@ In addition to the arguments listed above, the following computed attributes are * `uid` - The globally unique resource identifier of the metastore service. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/datastream_connection_profile.html.markdown b/website/docs/r/datastream_connection_profile.html.markdown index b00939d1602..45bffbfad4e 100644 --- a/website/docs/r/datastream_connection_profile.html.markdown +++ b/website/docs/r/datastream_connection_profile.html.markdown @@ -215,6 +215,8 @@ The following arguments are supported: * `labels` - (Optional) Labels. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `oracle_profile` - (Optional) @@ -413,6 +415,13 @@ In addition to the arguments listed above, the following computed attributes are * `name` - The resource's name. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/datastream_private_connection.html.markdown b/website/docs/r/datastream_private_connection.html.markdown index ad91a07306c..3791939146a 100644 --- a/website/docs/r/datastream_private_connection.html.markdown +++ b/website/docs/r/datastream_private_connection.html.markdown @@ -98,6 +98,8 @@ The following arguments are supported: * `labels` - (Optional) Labels. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -119,6 +121,13 @@ In addition to the arguments listed above, the following computed attributes are The PrivateConnection error in case of failure. Structure is [documented below](#nested_error). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `error` block contains: diff --git a/website/docs/r/datastream_stream.html.markdown b/website/docs/r/datastream_stream.html.markdown index 83b63f8eb61..2df8eff5bd0 100644 --- a/website/docs/r/datastream_stream.html.markdown +++ b/website/docs/r/datastream_stream.html.markdown @@ -1302,6 +1302,8 @@ The following arguments are supported: * `labels` - (Optional) Labels. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `backfill_all` - (Optional) @@ -1554,6 +1556,13 @@ In addition to the arguments listed above, the following computed attributes are * `state` - The state of the stream. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/dialogflow_cx_intent.html.markdown b/website/docs/r/dialogflow_cx_intent.html.markdown index 927a67c8df0..c8efd70c6f6 100644 --- a/website/docs/r/dialogflow_cx_intent.html.markdown +++ b/website/docs/r/dialogflow_cx_intent.html.markdown @@ -126,6 +126,9 @@ The following arguments are supported: Prefix "sys-" is reserved for Dialogflow defined labels. Currently allowed Dialogflow defined labels include: * sys-head * sys-contextual The above labels do not require value. "sys-head" means the intent is a head intent. "sys.contextual" means the intent is a contextual intent. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `description` - (Optional) Human readable description for better understanding an intent like its scope, content, result etc. Maximum character limit: 140 characters. @@ -204,6 +207,13 @@ In addition to the arguments listed above, the following computed attributes are The unique identifier of the intent. Format: projects//locations//agents//intents/. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/dns_managed_zone.html.markdown b/website/docs/r/dns_managed_zone.html.markdown index 4ef99a64e35..f26a4fb1aab 100644 --- a/website/docs/r/dns_managed_zone.html.markdown +++ b/website/docs/r/dns_managed_zone.html.markdown @@ -210,6 +210,7 @@ resource "google_container_cluster" "cluster-1" { cluster_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[1].range_name } + deletion_protection = "true" } ```
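
The cluster examples above now set `deletion_protection` explicitly. As a hedged sketch (cluster name and location are placeholders), the practical effect is that a protected cluster cannot be destroyed until the flag is flipped to `false` and applied:

```hcl
# Placeholder name/location. While deletion_protection is true,
# `terraform destroy` (or removing the resource from configuration) fails
# for this cluster; set it to false and run `terraform apply` first.
resource "google_container_cluster" "cluster-1" {
  name               = "cluster-1"
  location           = "us-central1-a"
  initial_node_count = 1

  deletion_protection = true
}
```
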
@@ -344,6 +345,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this ManagedZone. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `visibility` - (Optional) The zone's visibility: public zones are exposed to the Internet, @@ -555,6 +559,13 @@ In addition to the arguments listed above, the following computed attributes are The time that this resource was created on the server. This is in RFC3339 text format. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/dns_response_policy.html.markdown b/website/docs/r/dns_response_policy.html.markdown index 0b622a1968a..8026caf46a3 100644 --- a/website/docs/r/dns_response_policy.html.markdown +++ b/website/docs/r/dns_response_policy.html.markdown @@ -88,6 +88,7 @@ resource "google_container_cluster" "cluster-1" { cluster_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[0].range_name services_secondary_range_name = google_compute_subnetwork.subnetwork-1.secondary_ip_range[1].range_name } + deletion_protection = "true" } resource "google_dns_response_policy" "example-response-policy" { diff --git a/website/docs/r/eventarc_trigger.html.markdown b/website/docs/r/eventarc_trigger.html.markdown index e63bdf068b8..07453824118 100644 --- a/website/docs/r/eventarc_trigger.html.markdown +++ b/website/docs/r/eventarc_trigger.html.markdown @@ -142,6 +142,8 @@ The `matching_criteria` block supports: * `labels` - (Optional) Optional. User labels attached to the triggers that can be used to group resources. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -220,9 +222,15 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The creation time. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `etag` - Output only. This checksum is computed by the server based on the value of other fields, and may be sent only on create requests to ensure the client has an up-to-date value before proceeding. +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `uid` - Output only. Server assigned unique identifier for the trigger. The value is a UUID4 string and guaranteed to remain unchanged until the resource is deleted. diff --git a/website/docs/r/filestore_backup.html.markdown b/website/docs/r/filestore_backup.html.markdown index bf740230937..c86014f8ec3 100644 --- a/website/docs/r/filestore_backup.html.markdown +++ b/website/docs/r/filestore_backup.html.markdown @@ -108,6 +108,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -139,6 +142,13 @@ In addition to the arguments listed above, the following computed attributes are * `kms_key_name` - KMS key name used for data encryption. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/filestore_instance.html.markdown b/website/docs/r/filestore_instance.html.markdown index 1f127578aa3..fa80d943ea5 100644 --- a/website/docs/r/filestore_instance.html.markdown +++ b/website/docs/r/filestore_instance.html.markdown @@ -253,6 +253,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `kms_key_name` - (Optional) KMS key name used for data encryption. @@ -284,6 +287,13 @@ In addition to the arguments listed above, the following computed attributes are Server-specified ETag for the instance resource to prevent simultaneous updates from overwriting each other. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/filestore_snapshot.html.markdown b/website/docs/r/filestore_snapshot.html.markdown index cf3e65a8bc5..4deed811a90 100644 --- a/website/docs/r/filestore_snapshot.html.markdown +++ b/website/docs/r/filestore_snapshot.html.markdown @@ -133,6 +133,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -152,6 +155,13 @@ In addition to the arguments listed above, the following computed attributes are * `filesystem_used_bytes` - The amount of bytes needed to allocate a full copy of the snapshot content. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/firebase_hosting_channel.html.markdown b/website/docs/r/firebase_hosting_channel.html.markdown index 0b5d450fbd8..2feab8e047b 100644 --- a/website/docs/r/firebase_hosting_channel.html.markdown +++ b/website/docs/r/firebase_hosting_channel.html.markdown @@ -95,6 +95,8 @@ The following arguments are supported: * `labels` - (Optional) Text labels used for extra metadata and/or filtering + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `expire_time` - (Optional) @@ -119,6 +121,13 @@ In addition to the arguments listed above, the following computed attributes are The fully-qualified resource name for the channel, in the format: sites/SITE_ID/channels/CHANNEL_ID +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/firebase_hosting_site.html.markdown b/website/docs/r/firebase_hosting_site.html.markdown index 9ba9e0462ab..9f8f419a446 100644 --- a/website/docs/r/firebase_hosting_site.html.markdown +++ b/website/docs/r/firebase_hosting_site.html.markdown @@ -48,7 +48,6 @@ resource "google_firebase_web_app" "default" { provider = google-beta project = "my-project-name" display_name = "Test web app for Firebase Hosting" - deletion_policy = "DELETE" } resource "google_firebase_hosting_site" "full" { diff --git a/website/docs/r/firebase_project_location.html.markdown b/website/docs/r/firebase_project_location.html.markdown deleted file mode 100644 index 1f1d13e632a..00000000000 --- a/website/docs/r/firebase_project_location.html.markdown +++ /dev/null @@ -1,115 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Firebase" -description: |- - Sets the default Google Cloud Platform (GCP) resource location for the specified FirebaseProject. ---- - -# google\_firebase\_project\_location -~> **Warning:** `google_firebase_project_location` is deprecated in favor of explicitly configuring `google_app_engine_application` and `google_firestore_database`. This resource will be removed in the next major release of the provider. - -Sets the default Google Cloud Platform (GCP) resource location for the specified FirebaseProject. -This method creates an App Engine application with a default Cloud Storage bucket, located in the specified -locationId. This location must be one of the available GCP resource locations. -After the default GCP resource location is finalized, or if it was already set, it cannot be changed. -The default GCP resource location for the specified FirebaseProject might already be set because either the -GCP Project already has an App Engine application or defaultLocation.finalize was previously called with a -specified locationId. 
Any new calls to defaultLocation.finalize with a different specified locationId will -return a 409 error. - -~> **Warning:** This resource is in beta, and should be used with the terraform-provider-google-beta provider. -See [Provider Versions](https://terraform.io/docs/providers/google/guides/provider_versions.html) for more details on beta resources. - -To get more information about ProjectLocation, see: - -* [API documentation](https://firebase.google.com/docs/reference/firebase-management/rest/v1beta1/projects.defaultLocation/finalize) -* How-to Guides - * [Official Documentation](https://firebase.google.com/) - -## Example Usage - Firebase Project Location Basic - - -```hcl -resource "google_project" "default" { - provider = google-beta - - project_id = "my-project" - name = "my-project" - org_id = "123456789" - - labels = { - "firebase" = "enabled" - } -} - -resource "google_firebase_project" "default" { - provider = google-beta - project = google_project.default.project_id -} - -resource "google_firebase_project_location" "basic" { - provider = google-beta - project = google_firebase_project.default.project - - location_id = "us-central" -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `location_id` - - (Required) - The ID of the default GCP resource location for the Project. The location must be one of the available GCP - resource locations. - - -- - - - - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}` - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `delete` - Default is 20 minutes. - -## Import - - -ProjectLocation can be imported using any of these accepted formats: - -``` -$ terraform import google_firebase_project_location.default projects/{{project}} -$ terraform import google_firebase_project_location.default {{project}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). 
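
The removed `google_firebase_project_location` page above points to explicitly configuring `google_app_engine_application` and `google_firestore_database` instead. A hedged migration sketch, with a placeholder project ID and location:

```hcl
# Placeholder project ID and location; adjust to your own values.
resource "google_firebase_project" "default" {
  provider = google-beta
  project  = "my-project"
}

# Setting the default GCP resource location is now done by creating the
# App Engine application directly (and a google_firestore_database resource
# if a Firestore database is needed).
resource "google_app_engine_application" "default" {
  project     = google_firebase_project.default.project
  location_id = "us-central"
}
```
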
diff --git a/website/docs/r/firebase_web_app.html.markdown b/website/docs/r/firebase_web_app.html.markdown index 3fa314eeb99..ac9ad6f6f6f 100644 --- a/website/docs/r/firebase_web_app.html.markdown +++ b/website/docs/r/firebase_web_app.html.markdown @@ -34,30 +34,10 @@ To get more information about WebApp, see: ```hcl -resource "google_project" "default" { - provider = google-beta - - project_id = "my-project" - name = "my-project" - org_id = "123456789" - - labels = { - "firebase" = "enabled" - } -} - -resource "google_firebase_project" "default" { - provider = google-beta - project = google_project.default.project_id -} - resource "google_firebase_web_app" "basic" { provider = google-beta - project = google_project.default.project_id + project = "my-project-name" display_name = "Display Name Basic" - deletion_policy = "DELETE" - - depends_on = [google_firebase_project.default] } data "google_firebase_web_app_config" "basic" { @@ -137,7 +117,7 @@ The following arguments are supported: * `deletion_policy` - (Optional) Set to `ABANDON` to allow the WebApp to be untracked from terraform state rather than deleted upon `terraform destroy`. This is useful becaue the WebApp may be -serving traffic. Set to `DELETE` to delete the WebApp. Default to `ABANDON` +serving traffic. Set to `DELETE` to delete the WebApp. Default to `DELETE` ## Attributes Reference diff --git a/website/docs/r/firebaserules_release.html.markdown b/website/docs/r/firebaserules_release.html.markdown index c24f1e0670b..c6b65bc215a 100644 --- a/website/docs/r/firebaserules_release.html.markdown +++ b/website/docs/r/firebaserules_release.html.markdown @@ -143,7 +143,6 @@ This resource provides the following [Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. - `delete` - Default is 20 minutes. ## Import diff --git a/website/docs/r/game_services_game_server_cluster.html.markdown b/website/docs/r/game_services_game_server_cluster.html.markdown deleted file mode 100644 index 27ba4e86b43..00000000000 --- a/website/docs/r/game_services_game_server_cluster.html.markdown +++ /dev/null @@ -1,158 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Game Servers" -description: |- - A game server cluster resource. ---- - -# google\_game\_services\_game\_server\_cluster - -A game server cluster resource. 
- - -To get more information about GameServerCluster, see: - -* [API documentation](https://cloud.google.com/game-servers/docs/reference/rest/v1beta/projects.locations.realms.gameServerClusters) -* How-to Guides - * [Official Documentation](https://cloud.google.com/game-servers/docs) - -## Example Usage - Game Service Cluster Basic - - -```hcl -resource "google_game_services_game_server_cluster" "default" { - - cluster_id = "" - realm_id = google_game_services_realm.default.realm_id - - connection_info { - gke_cluster_reference { - cluster = "locations/us-west1/clusters/%{agones_cluster}" - } - namespace = "default" - } -} - -resource "google_game_services_realm" "default" { - realm_id = "realm" - time_zone = "PST8PDT" - - description = "Test Game Realm" -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `cluster_id` - - (Required) - Required. The resource name of the game server cluster - -* `realm_id` - - (Required) - The realm id of the game server realm. - -* `connection_info` - - (Required) - Game server cluster connection information. This information is used to - manage game server clusters. - Structure is [documented below](#nested_connection_info). - - -The `connection_info` block supports: - -* `gke_cluster_reference` - - (Required) - Reference of the GKE cluster where the game servers are installed. - Structure is [documented below](#nested_gke_cluster_reference). - -* `namespace` - - (Required) - Namespace designated on the game server cluster where the game server - instances will be created. The namespace existence will be validated - during creation. - - -The `gke_cluster_reference` block supports: - -* `cluster` - - (Required) - The full or partial name of a GKE cluster, using one of the following - forms: - * `projects/{project_id}/locations/{location}/clusters/{cluster_id}` - * `locations/{location}/clusters/{cluster_id}` - * `{cluster_id}` - If project and location are not specified, the project and location of the - GameServerCluster resource are used to generate the full name of the - GKE cluster. - -- - - - - -* `location` - - (Optional) - Location of the Cluster. - -* `labels` - - (Optional) - The labels associated with this game server cluster. Each label is a - key-value pair. - -* `description` - - (Optional) - Human readable description of the cluster. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}}` - -* `name` - - The resource id of the game server cluster, eg: - `projects/{project_id}/locations/{location}/realms/{realm_id}/gameServerClusters/{cluster_id}`. - For example, - `projects/my-project/locations/{location}/realms/zanzibar/gameServerClusters/my-onprem-cluster`. - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. 
- -## Import - - -GameServerCluster can be imported using any of these accepted formats: - -``` -$ terraform import google_game_services_game_server_cluster.default projects/{{project}}/locations/{{location}}/realms/{{realm_id}}/gameServerClusters/{{cluster_id}} -$ terraform import google_game_services_game_server_cluster.default {{project}}/{{location}}/{{realm_id}}/{{cluster_id}} -$ terraform import google_game_services_game_server_cluster.default {{location}}/{{realm_id}}/{{cluster_id}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/game_services_game_server_config.html.markdown b/website/docs/r/game_services_game_server_config.html.markdown deleted file mode 100644 index 104bbec421f..00000000000 --- a/website/docs/r/game_services_game_server_config.html.markdown +++ /dev/null @@ -1,215 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Game Servers" -description: |- - A game server config resource. ---- - -# google\_game\_services\_game\_server\_config - -A game server config resource. Configs are global and immutable. - - -To get more information about GameServerConfig, see: - -* [API documentation](https://cloud.google.com/game-servers/docs/reference/rest/v1beta/projects.locations.gameServerDeployments.configs) -* How-to Guides - * [Official Documentation](https://cloud.google.com/game-servers/docs) - -## Example Usage - Game Service Config Basic - - -```hcl -resource "google_game_services_game_server_deployment" "default" { - deployment_id = "tf-test-deployment" - description = "a deployment description" -} - -resource "google_game_services_game_server_config" "default" { - config_id = "tf-test-config" - deployment_id = google_game_services_game_server_deployment.default.deployment_id - description = "a config description" - - fleet_configs { - name = "something-unique" - fleet_spec = jsonencode({ "replicas" : 1, "scheduling" : "Packed", "template" : { "metadata" : { "name" : "tf-test-game-server-template" }, "spec" : { "ports": [{"name": "default", "portPolicy": "Dynamic", "containerPort": 7654, "protocol": "UDP"}], "template" : { "spec" : { "containers" : [{ "name" : "simple-udp-server", "image" : "gcr.io/agones-images/udp-server:0.14" }] } } } } }) - } - - scaling_configs { - name = "scaling-config-name" - fleet_autoscaler_spec = jsonencode({"policy": {"type": "Webhook","webhook": {"service": {"name": "autoscaler-webhook-service","namespace": "default","path": "scale"}}}}) - selectors { - labels = { - "one" : "two" - } - } - - schedules { - cron_job_duration = "3.500s" - cron_spec = "0 0 * * 0" - } - } -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `config_id` - - (Required) - A unique id for the deployment config. - -* `deployment_id` - - (Required) - A unique id for the deployment. - -* `fleet_configs` - - (Required) - The fleet config contains list of fleet specs. 
In the Single Cloud, there - will be only one. - Structure is [documented below](#nested_fleet_configs). - - -The `fleet_configs` block supports: - -* `fleet_spec` - - (Required) - The fleet spec, which is sent to Agones to configure fleet. - The spec can be passed as inline json but it is recommended to use a file reference - instead. File references can contain the json or yaml format of the fleet spec. Eg: - * fleet_spec = jsonencode(yamldecode(file("fleet_configs.yaml"))) - * fleet_spec = file("fleet_configs.json") - The format of the spec can be found : - `https://agones.dev/site/docs/reference/fleet/`. - -* `name` - - (Required) - The name of the FleetConfig. - -- - - - - -* `location` - - (Optional) - Location of the Deployment. - -* `description` - - (Optional) - The description of the game server config. - -* `labels` - - (Optional) - The labels associated with this game server config. Each label is a - key-value pair. - -* `scaling_configs` - - (Optional) - Optional. This contains the autoscaling settings. - Structure is [documented below](#nested_scaling_configs). - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -The `scaling_configs` block supports: - -* `name` - - (Required) - The name of the ScalingConfig - -* `fleet_autoscaler_spec` - - (Required) - Fleet autoscaler spec, which is sent to Agones. - Example spec can be found : - https://agones.dev/site/docs/reference/fleetautoscaler/ - -* `selectors` - - (Optional) - Labels used to identify the clusters to which this scaling config - applies. A cluster is subject to this scaling config if its labels match - any of the selector entries. - Structure is [documented below](#nested_selectors). - -* `schedules` - - (Optional) - The schedules to which this scaling config applies. - Structure is [documented below](#nested_schedules). - - -The `selectors` block supports: - -* `labels` - - (Optional) - Set of labels to group by. - -The `schedules` block supports: - -* `start_time` - - (Optional) - The start time of the event. - A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". - -* `end_time` - - (Optional) - The end time of the event. - A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". - -* `cron_job_duration` - - (Optional) - The duration for the cron job event. The duration of the event is effective - after the cron job's start time. - A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s". - -* `cron_spec` - - (Optional) - The cron definition of the scheduled event. See - https://en.wikipedia.org/wiki/Cron. Cron spec specifies the local time as - defined by the realm. - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}}` - -* `name` - - The resource name of the game server config, in the form: - `projects/{project_id}/locations/{location}/gameServerDeployments/{deployment_id}/configs/{config_id}`. - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `delete` - Default is 20 minutes. 
- -## Import - - -GameServerConfig can be imported using any of these accepted formats: - -``` -$ terraform import google_game_services_game_server_config.default projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}/configs/{{config_id}} -$ terraform import google_game_services_game_server_config.default {{project}}/{{location}}/{{deployment_id}}/{{config_id}} -$ terraform import google_game_services_game_server_config.default {{location}}/{{deployment_id}}/{{config_id}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/game_services_game_server_deployment.html.markdown b/website/docs/r/game_services_game_server_deployment.html.markdown deleted file mode 100644 index 7862e282f0f..00000000000 --- a/website/docs/r/game_services_game_server_deployment.html.markdown +++ /dev/null @@ -1,106 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Game Servers" -description: |- - A game server deployment resource. ---- - -# google\_game\_services\_game\_server\_deployment - -A game server deployment resource. - - -To get more information about GameServerDeployment, see: - -* [API documentation](https://cloud.google.com/game-servers/docs/reference/rest/v1beta/projects.locations.gameServerDeployments) -* How-to Guides - * [Official Documentation](https://cloud.google.com/game-servers/docs) - -## Example Usage - Game Service Deployment Basic - - -```hcl -resource "google_game_services_game_server_deployment" "default" { - deployment_id = "tf-test-deployment" - description = "a deployment description" -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `deployment_id` - - (Required) - A unique id for the deployment. - - -- - - - - -* `description` - - (Optional) - Human readable description of the game server deployment. - -* `location` - - (Optional) - Location of the Deployment. - -* `labels` - - (Optional) - The labels associated with this game server deployment. Each label is a - key-value pair. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}}` - -* `name` - - The resource id of the game server deployment, eg: - `projects/{project_id}/locations/{location}/gameServerDeployments/{deployment_id}`. - For example, - `projects/my-project/locations/{location}/gameServerDeployments/my-deployment`. - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. 
-- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. - -## Import - - -GameServerDeployment can be imported using any of these accepted formats: - -``` -$ terraform import google_game_services_game_server_deployment.default projects/{{project}}/locations/{{location}}/gameServerDeployments/{{deployment_id}} -$ terraform import google_game_services_game_server_deployment.default {{project}}/{{location}}/{{deployment_id}} -$ terraform import google_game_services_game_server_deployment.default {{location}}/{{deployment_id}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/game_services_game_server_deployment_rollout.html.markdown b/website/docs/r/game_services_game_server_deployment_rollout.html.markdown deleted file mode 100644 index 067009f4522..00000000000 --- a/website/docs/r/game_services_game_server_deployment_rollout.html.markdown +++ /dev/null @@ -1,143 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Game Servers" -description: |- - This represents the rollout state. ---- - -# google\_game\_services\_game\_server\_deployment\_rollout - -This represents the rollout state. This is part of the game server -deployment. - - -To get more information about GameServerDeploymentRollout, see: - -* [API documentation](https://cloud.google.com/game-servers/docs/reference/rest/v1beta/GameServerDeploymentRollout) -* How-to Guides - * [Official Documentation](https://cloud.google.com/game-servers/docs) - -## Example Usage - Game Service Deployment Rollout Basic - - -```hcl -resource "google_game_services_game_server_deployment" "default" { - deployment_id = "tf-test-deployment" - description = "a deployment description" -} - -resource "google_game_services_game_server_config" "default" { - config_id = "tf-test-config" - deployment_id = google_game_services_game_server_deployment.default.deployment_id - description = "a config description" - - fleet_configs { - name = "some-non-guid" - fleet_spec = jsonencode({ "replicas" : 1, "scheduling" : "Packed", "template" : { "metadata" : { "name" : "tf-test-game-server-template" }, "spec" : { "ports": [{"name": "default", "portPolicy": "Dynamic", "containerPort": 7654, "protocol": "UDP"}], "template" : { "spec" : { "containers" : [{ "name" : "simple-udp-server", "image" : "gcr.io/agones-images/udp-server:0.14" }] } } } } }) - - // Alternate usage: - // fleet_spec = file(fleet_configs.json) - } -} - -resource "google_game_services_game_server_deployment_rollout" "default" { - deployment_id = google_game_services_game_server_deployment.default.deployment_id - default_game_server_config = google_game_services_game_server_config.default.name -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `deployment_id` - - (Required) - The deployment to rollout the new config to. Only 1 rollout must be associated with each deployment. 
- -* `default_game_server_config` - - (Required) - This field points to the game server config that is - applied by default to all realms and clusters. For example, - `projects/my-project/locations/global/gameServerDeployments/my-game/configs/my-config`. - - -- - - - - -* `game_server_config_overrides` - - (Optional) - The game_server_config_overrides contains the per game server config - overrides. The overrides are processed in the order they are listed. As - soon as a match is found for a cluster, the rest of the list is not - processed. - Structure is [documented below](#nested_game_server_config_overrides). - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -The `game_server_config_overrides` block supports: - -* `realms_selector` - - (Optional) - Selection by realms. - Structure is [documented below](#nested_realms_selector). - -* `config_version` - - (Optional) - Version of the configuration. - - -The `realms_selector` block supports: - -* `realms` - - (Optional) - List of realms to match against. - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout` - -* `name` - - The resource id of the game server deployment - eg: `projects/my-project/locations/global/gameServerDeployments/my-deployment/rollout`. - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. - -## Import - - -GameServerDeploymentRollout can be imported using any of these accepted formats: - -``` -$ terraform import google_game_services_game_server_deployment_rollout.default projects/{{project}}/locations/global/gameServerDeployments/{{deployment_id}}/rollout -$ terraform import google_game_services_game_server_deployment_rollout.default {{project}}/{{deployment_id}} -$ terraform import google_game_services_game_server_deployment_rollout.default {{deployment_id}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). diff --git a/website/docs/r/game_services_realm.html.markdown b/website/docs/r/game_services_realm.html.markdown deleted file mode 100644 index 9ab9a1b4ed6..00000000000 --- a/website/docs/r/game_services_realm.html.markdown +++ /dev/null @@ -1,116 +0,0 @@ ---- -# ---------------------------------------------------------------------------- -# -# *** AUTO GENERATED CODE *** Type: MMv1 *** -# -# ---------------------------------------------------------------------------- -# -# This file is automatically generated by Magic Modules and manual -# changes will be clobbered when the file is regenerated. -# -# Please read more about how to change this file in -# .github/CONTRIBUTING.md. -# -# ---------------------------------------------------------------------------- -subcategory: "Game Servers" -description: |- - A Realm resource. ---- - -# google\_game\_services\_realm - -A Realm resource. 
- - -To get more information about Realm, see: - -* [API documentation](https://cloud.google.com/game-servers/docs/reference/rest/v1beta/projects.locations.realms) -* How-to Guides - * [Official Documentation](https://cloud.google.com/game-servers/docs) - -## Example Usage - Game Service Realm Basic - - -```hcl -resource "google_game_services_realm" "default" { - realm_id = "tf-test-realm" - time_zone = "EST" - location = "global" - - description = "one of the nine" -} -``` - -## Argument Reference - -The following arguments are supported: - - -* `time_zone` - - (Required) - Required. Time zone where all realm-specific policies are evaluated. The value of - this field must be from the IANA time zone database: - https://www.iana.org/time-zones. - -* `realm_id` - - (Required) - GCP region of the Realm. - - -- - - - - -* `labels` - - (Optional) - The labels associated with this realm. Each label is a key-value pair. - -* `description` - - (Optional) - Human readable description of the realm. - -* `location` - - (Optional) - Location of the Realm. - -* `project` - (Optional) The ID of the project in which the resource belongs. - If it is not provided, the provider project is used. - - -## Attributes Reference - -In addition to the arguments listed above, the following computed attributes are exported: - -* `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/realms/{{realm_id}}` - -* `name` - - The resource id of the realm, of the form: - `projects/{project_id}/locations/{location}/realms/{realm_id}`. For - example, `projects/my-project/locations/{location}/realms/my-realm`. - -* `etag` - - ETag of the resource. - - -## Timeouts - -This resource provides the following -[Timeouts](https://developer.hashicorp.com/terraform/plugin/sdkv2/resources/retries-and-customizable-timeouts) configuration options: - -- `create` - Default is 20 minutes. -- `update` - Default is 20 minutes. -- `delete` - Default is 20 minutes. - -## Import - - -Realm can be imported using any of these accepted formats: - -``` -$ terraform import google_game_services_realm.default projects/{{project}}/locations/{{location}}/realms/{{realm_id}} -$ terraform import google_game_services_realm.default {{project}}/{{location}}/{{realm_id}} -$ terraform import google_game_services_realm.default {{location}}/{{realm_id}} -``` - -## User Project Overrides - -This resource supports [User Project Overrides](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#user_project_override). 
diff --git a/website/docs/r/gke_backup_backup_plan.html.markdown b/website/docs/r/gke_backup_backup_plan.html.markdown index 6db204fde3a..23427275a5f 100644 --- a/website/docs/r/gke_backup_backup_plan.html.markdown +++ b/website/docs/r/gke_backup_backup_plan.html.markdown @@ -44,6 +44,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "true" } resource "google_gke_backup_backup_plan" "basic" { @@ -80,6 +81,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "true" } resource "google_gke_backup_backup_plan" "autopilot" { @@ -109,6 +111,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "true" } resource "google_gke_backup_backup_plan" "cmek" { @@ -153,6 +156,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "true" } resource "google_gke_backup_backup_plan" "full" { @@ -219,6 +223,9 @@ The following arguments are supported: A list of key->value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `backup_schedule` - (Optional) Defines a schedule for automatic Backup creation via this BackupPlan. @@ -371,6 +378,13 @@ In addition to the arguments listed above, the following computed attributes are * `state_reason` - Detailed description of why BackupPlan is in its current state. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/gke_backup_restore_plan.html.markdown b/website/docs/r/gke_backup_restore_plan.html.markdown index 2ae4fa96241..6ff790a1b8d 100644 --- a/website/docs/r/gke_backup_restore_plan.html.markdown +++ b/website/docs/r/gke_backup_restore_plan.html.markdown @@ -44,6 +44,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -89,6 +90,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -143,6 +145,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -192,6 +195,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -236,6 +240,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -307,6 +312,7 @@ resource "google_container_cluster" "primary" { enabled = true } } + deletion_protection = "" } resource "google_gke_backup_backup_plan" "basic" { @@ -650,6 +656,9 @@ The following arguments are supported: A list of key->value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. 
+ * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -669,6 +678,13 @@ In addition to the arguments listed above, the following computed attributes are * `state_reason` - Detailed description of why RestorePlan is in its current state. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/gke_hub_feature.html.markdown b/website/docs/r/gke_hub_feature.html.markdown index 8c385ee9359..1b06f0e3e8b 100644 --- a/website/docs/r/gke_hub_feature.html.markdown +++ b/website/docs/r/gke_hub_feature.html.markdown @@ -157,6 +157,8 @@ The following arguments are supported: * `labels` - (Optional) GCP labels for this Feature. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `spec` - (Optional) @@ -244,6 +246,13 @@ In addition to the arguments listed above, the following computed attributes are * `delete_time` - Output only. When the Feature resource was deleted. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `resource_state` block contains: diff --git a/website/docs/r/gke_hub_membership.html.markdown b/website/docs/r/gke_hub_membership.html.markdown index 149e855e50b..29b0d2628ae 100644 --- a/website/docs/r/gke_hub_membership.html.markdown +++ b/website/docs/r/gke_hub_membership.html.markdown @@ -41,6 +41,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "true" } resource "google_gke_hub_membership" "membership" { @@ -50,6 +51,10 @@ resource "google_gke_hub_membership" "membership" { resource_link = "//container.googleapis.com/${google_container_cluster.primary.id}" } } + + labels = { + env = "test" + } } ``` ## Example Usage - Gkehub Membership Issuer @@ -63,6 +68,7 @@ resource "google_container_cluster" "primary" { workload_identity_config { workload_pool = "my-project-name.svc.id.goog" } + deletion_protection = "true" } resource "google_gke_hub_membership" "membership" { @@ -101,6 +107,9 @@ The following arguments are supported: (Optional) Labels to apply to this membership. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `endpoint` - (Optional) If this Membership is a Kubernetes API server hosted on GKE, this is a self link to its GCP resource. @@ -151,6 +160,13 @@ In addition to the arguments listed above, the following computed attributes are * `name` - The unique identifier of the membership. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/gke_hub_membership_binding.html.markdown b/website/docs/r/gke_hub_membership_binding.html.markdown index 52f0cf63f53..399ffd661c6 100644 --- a/website/docs/r/gke_hub_membership_binding.html.markdown +++ b/website/docs/r/gke_hub_membership_binding.html.markdown @@ -36,6 +36,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "true" } resource "google_gke_hub_membership" "example" { @@ -100,6 +101,9 @@ The following arguments are supported: (Optional) Labels for this Membership binding. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -129,6 +133,13 @@ In addition to the arguments listed above, the following computed attributes are State of the membership binding resource. Structure is [documented below](#nested_state). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `state` block contains: diff --git a/website/docs/r/gke_hub_membership_rbac_role_binding.html.markdown b/website/docs/r/gke_hub_membership_rbac_role_binding.html.markdown index 67a719ad7f6..649c7757117 100644 --- a/website/docs/r/gke_hub_membership_rbac_role_binding.html.markdown +++ b/website/docs/r/gke_hub_membership_rbac_role_binding.html.markdown @@ -39,6 +39,7 @@ resource "google_container_cluster" "primary" { name = "basiccluster" location = "us-central1-a" initial_node_count = 1 + deletion_protection = "true" } resource "google_gke_hub_membership" "membershiprbacrolebinding" { diff --git a/website/docs/r/gke_hub_namespace.html.markdown b/website/docs/r/gke_hub_namespace.html.markdown index cc5529657d0..9760fbbd635 100644 --- a/website/docs/r/gke_hub_namespace.html.markdown +++ b/website/docs/r/gke_hub_namespace.html.markdown @@ -88,6 +88,9 @@ The following arguments are supported: (Optional) Labels for this Namespace. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -117,6 +120,13 @@ In addition to the arguments listed above, the following computed attributes are State of the namespace resource. Structure is [documented below](#nested_state). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ The `state` block contains: diff --git a/website/docs/r/gke_hub_scope.html.markdown b/website/docs/r/gke_hub_scope.html.markdown index 92e8cc1587b..c59086d7fd0 100644 --- a/website/docs/r/gke_hub_scope.html.markdown +++ b/website/docs/r/gke_hub_scope.html.markdown @@ -59,6 +59,9 @@ The following arguments are supported: (Optional) Labels for this Scope. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -88,6 +91,13 @@ In addition to the arguments listed above, the following computed attributes are State of the scope resource. Structure is [documented below](#nested_state). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `state` block contains: diff --git a/website/docs/r/gke_hub_scope_rbac_role_binding.html.markdown b/website/docs/r/gke_hub_scope_rbac_role_binding.html.markdown index 819d7c81817..3b543199f07 100644 --- a/website/docs/r/gke_hub_scope_rbac_role_binding.html.markdown +++ b/website/docs/r/gke_hub_scope_rbac_role_binding.html.markdown @@ -96,6 +96,9 @@ The following arguments are supported: (Optional) Labels for this ScopeRBACRoleBinding. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -125,6 +128,13 @@ In addition to the arguments listed above, the following computed attributes are State of the RBAC Role Binding resource. Structure is [documented below](#nested_state). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `state` block contains: diff --git a/website/docs/r/gkeonprem_bare_metal_admin_cluster.html.markdown b/website/docs/r/gkeonprem_bare_metal_admin_cluster.html.markdown index 4aaa9ce8af3..206893d011f 100644 --- a/website/docs/r/gkeonprem_bare_metal_admin_cluster.html.markdown +++ b/website/docs/r/gkeonprem_bare_metal_admin_cluster.html.markdown @@ -99,7 +99,9 @@ resource "google_gkeonprem_bare_metal_admin_cluster" "admin-cluster-basic" { location = "us-west1" description = "test description" bare_metal_version = "1.13.4" - annotations = {} + annotations = { + env = "test" + } network_config { island_mode_cidr { service_address_cidr_blocks = ["172.26.0.0/16"] @@ -605,6 +607,9 @@ In addition to the arguments listed above, the following computed attributes are Specifies the security related settings for the Bare Metal Admin Cluster. Structure is [documented below](#nested_validation_check). +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. 
+ The `fleet` block contains: @@ -706,7 +711,7 @@ This resource provides the following - `create` - Default is 60 minutes. - `update` - Default is 60 minutes. -- `delete` - Default is 60 minutes. +- `delete` - Default is 20 minutes. ## Import diff --git a/website/docs/r/gkeonprem_bare_metal_cluster.html.markdown b/website/docs/r/gkeonprem_bare_metal_cluster.html.markdown index 9a8bdfba4ea..6443c8ae51c 100644 --- a/website/docs/r/gkeonprem_bare_metal_cluster.html.markdown +++ b/website/docs/r/gkeonprem_bare_metal_cluster.html.markdown @@ -1101,6 +1101,9 @@ In addition to the arguments listed above, the following computed attributes are Specifies the security related settings for the Bare Metal User Cluster. Structure is [documented below](#nested_validation_check). +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `fleet` block contains: diff --git a/website/docs/r/gkeonprem_bare_metal_node_pool.html.markdown b/website/docs/r/gkeonprem_bare_metal_node_pool.html.markdown index 25b01eabe18..457db8dcdfd 100644 --- a/website/docs/r/gkeonprem_bare_metal_node_pool.html.markdown +++ b/website/docs/r/gkeonprem_bare_metal_node_pool.html.markdown @@ -356,6 +356,9 @@ In addition to the arguments listed above, the following computed attributes are Allows clients to perform consistent read-modify-writes through optimistic concurrency control. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `status` block contains: diff --git a/website/docs/r/gkeonprem_vmware_cluster.html.markdown b/website/docs/r/gkeonprem_vmware_cluster.html.markdown index 778d4e18bf1..dc1b3701b73 100644 --- a/website/docs/r/gkeonprem_vmware_cluster.html.markdown +++ b/website/docs/r/gkeonprem_vmware_cluster.html.markdown @@ -724,6 +724,9 @@ In addition to the arguments listed above, the following computed attributes are ResourceStatus representing detailed cluster state. Structure is [documented below](#nested_status). +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `validation_check` block contains: diff --git a/website/docs/r/gkeonprem_vmware_node_pool.html.markdown b/website/docs/r/gkeonprem_vmware_node_pool.html.markdown index 2e533746689..e2ac709bbc8 100644 --- a/website/docs/r/gkeonprem_vmware_node_pool.html.markdown +++ b/website/docs/r/gkeonprem_vmware_node_pool.html.markdown @@ -342,6 +342,9 @@ In addition to the arguments listed above, the following computed attributes are * `on_prem_version` - Anthos version for the node pool. Defaults to the user cluster version. +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `status` block contains: diff --git a/website/docs/r/google_project.html.markdown b/website/docs/r/google_project.html.markdown index 36f5eb06919..1f3a4dff836 100644 --- a/website/docs/r/google_project.html.markdown +++ b/website/docs/r/google_project.html.markdown @@ -82,6 +82,14 @@ The following arguments are supported: without deleting the Project via the Google API. * `labels` - (Optional) A set of key/value label pairs to assign to the project. 
+ **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field 'effective_labels' for all of the labels present on the resource. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. * `auto_create_network` - (Optional) Controls whether the 'default' network exists on the project. Defaults to `true`, where it is created. If set to `false`, the default network will still be created by GCP but diff --git a/website/docs/r/healthcare_consent_store.html.markdown b/website/docs/r/healthcare_consent_store.html.markdown index f77d511f72d..81e3879df28 100644 --- a/website/docs/r/healthcare_consent_store.html.markdown +++ b/website/docs/r/healthcare_consent_store.html.markdown @@ -145,6 +145,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + ## Attributes Reference @@ -152,6 +155,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `{{dataset}}/consentStores/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/healthcare_dicom_store.html.markdown b/website/docs/r/healthcare_dicom_store.html.markdown index 8f201e74cb3..e59a94cb868 100644 --- a/website/docs/r/healthcare_dicom_store.html.markdown +++ b/website/docs/r/healthcare_dicom_store.html.markdown @@ -153,6 +153,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `notification_config` - (Optional) A nested object resource @@ -199,6 +202,13 @@ In addition to the arguments listed above, the following computed attributes are * `self_link` - The fully qualified name of this dataset +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/healthcare_fhir_store.html.markdown b/website/docs/r/healthcare_fhir_store.html.markdown index d2523ce08dc..b1c58715ad7 100644 --- a/website/docs/r/healthcare_fhir_store.html.markdown +++ b/website/docs/r/healthcare_fhir_store.html.markdown @@ -280,6 +280,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. 
+ **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `notification_config` - (Optional) A nested object resource @@ -420,6 +423,13 @@ In addition to the arguments listed above, the following computed attributes are * `self_link` - The fully qualified name of this dataset +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/healthcare_hl7_v2_store.html.markdown b/website/docs/r/healthcare_hl7_v2_store.html.markdown index 7114176b5dc..a58f84f8c2c 100644 --- a/website/docs/r/healthcare_hl7_v2_store.html.markdown +++ b/website/docs/r/healthcare_hl7_v2_store.html.markdown @@ -229,6 +229,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `notification_configs` - (Optional) A list of notification configs. Each configuration uses a filter to determine whether to publish a @@ -310,6 +313,13 @@ In addition to the arguments listed above, the following computed attributes are * `self_link` - The fully qualified name of this dataset +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/kms_crypto_key.html.markdown b/website/docs/r/kms_crypto_key.html.markdown index b1f6e65e423..56c008ab01b 100644 --- a/website/docs/r/kms_crypto_key.html.markdown +++ b/website/docs/r/kms_crypto_key.html.markdown @@ -102,6 +102,9 @@ The following arguments are supported: (Optional) Labels with user-defined metadata to apply to this resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `purpose` - (Optional) The immutable purpose of this CryptoKey. See the @@ -153,6 +156,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `{{key_ring}}/cryptoKeys/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/logging_metric.html.markdown b/website/docs/r/logging_metric.html.markdown index 5f5664feef5..39fb072b8c6 100644 --- a/website/docs/r/logging_metric.html.markdown +++ b/website/docs/r/logging_metric.html.markdown @@ -300,29 +300,29 @@ The following arguments are supported: The `linear_buckets` block supports: * `num_finite_buckets` - - (Optional) + (Required) Must be greater than 0. * `width` - - (Optional) + (Required) Must be greater than 0. * `offset` - - (Optional) + (Required) Lower bound of the first bucket. The `exponential_buckets` block supports: * `num_finite_buckets` - - (Optional) + (Required) Must be greater than 0. * `growth_factor` - - (Optional) + (Required) Must be greater than 1. * `scale` - - (Optional) + (Required) Must be greater than 0. The `explicit_buckets` block supports: diff --git a/website/docs/r/looker_instance.html.markdown b/website/docs/r/looker_instance.html.markdown index 30b5b677c9f..6b9b68fa4f6 100644 --- a/website/docs/r/looker_instance.html.markdown +++ b/website/docs/r/looker_instance.html.markdown @@ -120,7 +120,7 @@ resource "google_looker_instance" "looker-instance" { private_ip_enabled = true public_ip_enabled = false reserved_range = "${google_compute_global_address.looker_range.name}" - consumer_network = data.google_compute_network.looker_network.id + consumer_network = google_compute_network.looker_network.id admin_settings { allowed_email_domains = ["google.com"] } @@ -164,7 +164,7 @@ resource "google_looker_instance" "looker-instance" { } resource "google_service_networking_connection" "looker_vpc_connection" { - network = data.google_compute_network.looker_network.id + network = google_compute_network.looker_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.looker_range.name] } @@ -174,12 +174,12 @@ resource "google_compute_global_address" "looker_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 20 - network = data.google_compute_network.looker_network.id + network = google_compute_network.looker_network.id } data "google_project" "project" {} -data "google_compute_network" "looker_network" { +resource "google_compute_network" "looker_network" { name = "looker-network" } @@ -247,9 +247,8 @@ The following arguments are supported: - LOOKER_CORE_STANDARD_ANNUAL: subscription standard instance - LOOKER_CORE_ENTERPRISE_ANNUAL: subscription enterprise instance - LOOKER_CORE_EMBED_ANNUAL: subscription embed instance - - LOOKER_MODELER: standalone modeling service Default value is `LOOKER_CORE_TRIAL`. - Possible values are: `LOOKER_CORE_TRIAL`, `LOOKER_CORE_STANDARD`, `LOOKER_CORE_STANDARD_ANNUAL`, `LOOKER_CORE_ENTERPRISE_ANNUAL`, `LOOKER_CORE_EMBED_ANNUAL`, `LOOKER_MODELER`. + Possible values are: `LOOKER_CORE_TRIAL`, `LOOKER_CORE_STANDARD`, `LOOKER_CORE_STANDARD_ANNUAL`, `LOOKER_CORE_ENTERPRISE_ANNUAL`, `LOOKER_CORE_EMBED_ANNUAL`. 
* `private_ip_enabled` - (Optional) diff --git a/website/docs/r/memcache_instance.html.markdown b/website/docs/r/memcache_instance.html.markdown index 29fb6b7bd51..56f94a9041a 100644 --- a/website/docs/r/memcache_instance.html.markdown +++ b/website/docs/r/memcache_instance.html.markdown @@ -45,7 +45,7 @@ To get more information about Instance, see: // If this network hasn't been created and you are using this example in your // config, add an additional network resource or change // this from "data"to "resource" -data "google_compute_network" "memcache_network" { +resource "google_compute_network" "memcache_network" { name = "test-network" } @@ -54,11 +54,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.memcache_network.id + network = google_compute_network.memcache_network.id } resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.memcache_network.id + network = google_compute_network.memcache_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -67,6 +67,10 @@ resource "google_memcache_instance" "instance" { name = "test-instance" authorized_network = google_service_networking_connection.private_service_connection.network + labels = { + env = "test" + } + node_config { cpu_count = 1 memory_size_mb = 1024 @@ -129,6 +133,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `zones` - (Optional) Zones where memcache nodes should be provisioned. If not @@ -273,6 +280,13 @@ In addition to the arguments listed above, the following computed attributes are Output only. Published maintenance schedule. Structure is [documented below](#nested_maintenance_schedule). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `memcache_nodes` block contains: diff --git a/website/docs/r/ml_engine_model.html.markdown b/website/docs/r/ml_engine_model.html.markdown index fb0c3ed5342..526990922e0 100644 --- a/website/docs/r/ml_engine_model.html.markdown +++ b/website/docs/r/ml_engine_model.html.markdown @@ -106,6 +106,8 @@ The following arguments are supported: * `labels` - (Optional) One or more labels that you can add, to organize your models. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -123,6 +125,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/models/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/monitoring_dashboard.html.markdown b/website/docs/r/monitoring_dashboard.html.markdown index b3d0e0dd63b..97c32f85b99 100644 --- a/website/docs/r/monitoring_dashboard.html.markdown +++ b/website/docs/r/monitoring_dashboard.html.markdown @@ -114,6 +114,12 @@ The following arguments are supported: The JSON representation of a dashboard, following the format at https://cloud.google.com/monitoring/api/ref_v3/rest/v1/projects.dashboards. The representation of an existing dashboard can be found by using the [API Explorer](https://cloud.google.com/monitoring/api/ref_v3/rest/v1/projects.dashboards/get) + ~> **Warning:** Because this is represented as a JSON string, Terraform doesn't have underlying information to know + which fields in the string have defaults. To prevent permanent diffs from default values, Terraform will attempt to + suppress diffs where the value is returned in the JSON string but doesn't exist in the configuration. Consequently, + legitimate remove-only diffs will also be suppressed. For Terraform to detect the diff, key removals must also be + accompanied by a non-removal change (trivial or not). + - - - diff --git a/website/docs/r/network_connectivity_hub.html.markdown b/website/docs/r/network_connectivity_hub.html.markdown index 499d4270782..2b71a3bdc99 100644 --- a/website/docs/r/network_connectivity_hub.html.markdown +++ b/website/docs/r/network_connectivity_hub.html.markdown @@ -28,12 +28,11 @@ A basic test of a networkconnectivity hub resource "google_network_connectivity_hub" "primary" { name = "hub" description = "A sample hub" + project = "my-project-name" labels = { label-one = "value-one" } - - project = "my-project-name" } @@ -58,6 +57,8 @@ The following arguments are supported: * `labels` - (Optional) Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements). + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -74,12 +75,18 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time the hub was created. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `routing_vpcs` - The VPC network associated with this hub's spokes. All of the VPN tunnels, VLAN attachments, and router appliance instances referenced by this hub's spokes must belong to this VPC network. This field is read-only. Network Connectivity Center automatically populates it based on the set of spokes attached to the hub. * `state` - Output only. The current lifecycle state of this hub. Possible values: STATE_UNSPECIFIED, CREATING, ACTIVE, DELETING +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + +* `unique_id` - Output only. The Google-generated UUID for the hub. This value is unique across all hub resources. If a hub is deleted and another with the same name is created, the new hub is assigned a different unique_id.
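The `terraform_labels` and `effective_labels` attributes added throughout these docs describe how resource-level labels combine with provider default labels. A minimal sketch of the relationship, assuming a provider block that supports `default_labels` and reusing the hub example above (the project ID and label values are illustrative only):

```hcl
# Sketch only: provider-level default labels are merged into every labelled
# resource managed by this provider configuration.
provider "google" {
  project = "my-project-name" # illustrative project ID

  default_labels = {
    managed-by = "terraform"
  }
}

resource "google_network_connectivity_hub" "primary" {
  name        = "hub"
  description = "A sample hub"

  labels = {
    label-one = "value-one"
  }
}

# `terraform_labels` is expected to contain label-one plus managed-by;
# `effective_labels` additionally reflects labels applied outside Terraform.
output "hub_terraform_labels" {
  value = google_network_connectivity_hub.primary.terraform_labels
}

output "hub_effective_labels" {
  value = google_network_connectivity_hub.primary.effective_labels
}
```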
diff --git a/website/docs/r/network_connectivity_service_connection_policy.html.markdown b/website/docs/r/network_connectivity_service_connection_policy.html.markdown index 524544703ef..ed2b06d9b8d 100644 --- a/website/docs/r/network_connectivity_service_connection_policy.html.markdown +++ b/website/docs/r/network_connectivity_service_connection_policy.html.markdown @@ -101,6 +101,9 @@ The following arguments are supported: (Optional) User-defined labels. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -137,6 +140,13 @@ In addition to the arguments listed above, the following computed attributes are * `infrastructure` - The type of underlying resources used to create the connection. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `psc_connections` block contains: diff --git a/website/docs/r/network_connectivity_spoke.html.markdown b/website/docs/r/network_connectivity_spoke.html.markdown index dfddbc6529e..dced96fae82 100644 --- a/website/docs/r/network_connectivity_spoke.html.markdown +++ b/website/docs/r/network_connectivity_spoke.html.markdown @@ -154,6 +154,8 @@ The `instances` block supports: * `labels` - (Optional) Optional labels in key:value format. For more information about labels, see [Requirements for labels](https://cloud.google.com/resource-manager/docs/creating-managing-labels#requirements). + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `linked_interconnect_attachments` - (Optional) @@ -226,9 +228,15 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time the spoke was created. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `state` - Output only. The current lifecycle state of this spoke. Possible values: STATE_UNSPECIFIED, CREATING, ACTIVE, DELETING +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `unique_id` - Output only. The Google-generated UUID for the spoke. This value is unique across all spoke resources. If a spoke is deleted and another with the same name is created, the new spoke is assigned a different unique_id. 
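Several of the `google_container_cluster` examples earlier in this diff gain a `deletion_protection` argument. A short sketch of what that implies in practice, assuming the behaviour where the field defaults to enabled and blocks deletion until it is turned off (the cluster name and location are illustrative):

```hcl
# Sketch only: with deletion protection enabled, `terraform destroy` (or
# removing the resource from configuration) fails until the field is set to
# false and applied first.
resource "google_container_cluster" "primary" {
  name               = "basiccluster"  # illustrative name
  location           = "us-central1-a" # illustrative location
  initial_node_count = 1

  # Flip to false and run `terraform apply` before attempting to delete the cluster.
  deletion_protection = true
}
```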
diff --git a/website/docs/r/network_management_connectivity_test_resource.html.markdown b/website/docs/r/network_management_connectivity_test_resource.html.markdown index e54a4dc0ccd..d9e978bc5a8 100644 --- a/website/docs/r/network_management_connectivity_test_resource.html.markdown +++ b/website/docs/r/network_management_connectivity_test_resource.html.markdown @@ -52,6 +52,9 @@ resource "google_network_management_connectivity_test" "instance-test" { } protocol = "TCP" + labels = { + env = "test" + } } resource "google_compute_instance" "source" { @@ -294,6 +297,9 @@ The following arguments are supported: (Optional) Resource labels to represent user-provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -304,6 +310,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/global/connectivityTests/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_security_address_group.html.markdown b/website/docs/r/network_security_address_group.html.markdown index f5d4a8b18d7..5186afeb04e 100644 --- a/website/docs/r/network_security_address_group.html.markdown +++ b/website/docs/r/network_security_address_group.html.markdown @@ -105,6 +105,9 @@ The following arguments are supported: Set of label tags associated with the AddressGroup resource. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `items` - (Optional) List of items. @@ -130,6 +133,13 @@ In addition to the arguments listed above, the following computed attributes are A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_security_authorization_policy.html.markdown b/website/docs/r/network_security_authorization_policy.html.markdown index 367b8940ad0..743a43cd31f 100644 --- a/website/docs/r/network_security_authorization_policy.html.markdown +++ b/website/docs/r/network_security_authorization_policy.html.markdown @@ -109,6 +109,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the AuthorizationPolicy resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -199,6 +201,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the AuthorizationPolicy was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_security_client_tls_policy.html.markdown b/website/docs/r/network_security_client_tls_policy.html.markdown index cf291ae7c0c..b7b3465e9f8 100644 --- a/website/docs/r/network_security_client_tls_policy.html.markdown +++ b/website/docs/r/network_security_client_tls_policy.html.markdown @@ -99,6 +99,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the ClientTlsPolicy resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -189,6 +191,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the ClientTlsPolicy was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_security_server_tls_policy.html.markdown b/website/docs/r/network_security_server_tls_policy.html.markdown index 45616db55d9..0dd74737716 100644 --- a/website/docs/r/network_security_server_tls_policy.html.markdown +++ b/website/docs/r/network_security_server_tls_policy.html.markdown @@ -134,6 +134,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the ServerTlsPolicy resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -250,6 +252,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the ServerTlsPolicy was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_edge_cache_keyset.html.markdown b/website/docs/r/network_services_edge_cache_keyset.html.markdown index 53cf0a7bcce..5ab5ce873af 100644 --- a/website/docs/r/network_services_edge_cache_keyset.html.markdown +++ b/website/docs/r/network_services_edge_cache_keyset.html.markdown @@ -113,6 +113,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the EdgeCache resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field `effective_labels` for all of the labels present on the resource. * `public_key` - (Optional) @@ -171,6 +173,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/global/edgeCacheKeysets/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_edge_cache_origin.html.markdown b/website/docs/r/network_services_edge_cache_origin.html.markdown index 97e2df79b50..52ec16ae2c9 100644 --- a/website/docs/r/network_services_edge_cache_origin.html.markdown +++ b/website/docs/r/network_services_edge_cache_origin.html.markdown @@ -172,6 +172,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the EdgeCache resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `protocol` - (Optional) @@ -355,6 +357,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/global/edgeCacheOrigins/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_edge_cache_service.html.markdown b/website/docs/r/network_services_edge_cache_service.html.markdown index e8b85776cbd..150cf67b245 100644 --- a/website/docs/r/network_services_edge_cache_service.html.markdown +++ b/website/docs/r/network_services_edge_cache_service.html.markdown @@ -989,6 +989,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the EdgeCache resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `disable_quic` - (Optional) @@ -1052,6 +1054,13 @@ In addition to the arguments listed above, the following computed attributes are * `ipv6_addresses` - The IPv6 addresses associated with this service. Addresses are static for the lifetime of the service. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/network_services_endpoint_policy.html.markdown b/website/docs/r/network_services_endpoint_policy.html.markdown index a3ffcdbcb19..10a088a00c1 100644 --- a/website/docs/r/network_services_endpoint_policy.html.markdown +++ b/website/docs/r/network_services_endpoint_policy.html.markdown @@ -146,6 +146,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the TcpRoute resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -190,6 +192,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the TcpRoute was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_gateway.html.markdown b/website/docs/r/network_services_gateway.html.markdown index acced6b12fd..ec567f04b9b 100644 --- a/website/docs/r/network_services_gateway.html.markdown +++ b/website/docs/r/network_services_gateway.html.markdown @@ -252,6 +252,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the Gateway resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -325,6 +327,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the AccessPolicy was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_grpc_route.html.markdown b/website/docs/r/network_services_grpc_route.html.markdown index 4a2cac5f77c..b1fb27a9e1a 100644 --- a/website/docs/r/network_services_grpc_route.html.markdown +++ b/website/docs/r/network_services_grpc_route.html.markdown @@ -310,6 +310,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the GrpcRoute resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -342,6 +344,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the GrpcRoute was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/network_services_http_route.html.markdown b/website/docs/r/network_services_http_route.html.markdown index 10b98ed0690..ed6c0d08708 100644 --- a/website/docs/r/network_services_http_route.html.markdown +++ b/website/docs/r/network_services_http_route.html.markdown @@ -628,6 +628,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the HttpRoute resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -663,6 +665,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the HttpRoute was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_mesh.html.markdown b/website/docs/r/network_services_mesh.html.markdown index 203ebcb5d8b..e7ee443fd13 100644 --- a/website/docs/r/network_services_mesh.html.markdown +++ b/website/docs/r/network_services_mesh.html.markdown @@ -85,6 +85,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the Mesh resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -117,6 +119,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the Mesh was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/network_services_service_binding.html.markdown b/website/docs/r/network_services_service_binding.html.markdown index 13cda58e0a9..416ef962c53 100644 --- a/website/docs/r/network_services_service_binding.html.markdown +++ b/website/docs/r/network_services_service_binding.html.markdown @@ -88,6 +88,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the ServiceBinding resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -109,6 +111,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the ServiceBinding was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/network_services_tcp_route.html.markdown b/website/docs/r/network_services_tcp_route.html.markdown index 295a964a07e..5846cfaca63 100644 --- a/website/docs/r/network_services_tcp_route.html.markdown +++ b/website/docs/r/network_services_tcp_route.html.markdown @@ -308,6 +308,8 @@ The following arguments are supported: * `labels` - (Optional) Set of label tags associated with the TcpRoute resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `description` - (Optional) @@ -343,6 +345,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Time the TcpRoute was updated in UTC. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/notebooks_instance.html.markdown b/website/docs/r/notebooks_instance.html.markdown index ae9c3b36d54..900aad3600a 100644 --- a/website/docs/r/notebooks_instance.html.markdown +++ b/website/docs/r/notebooks_instance.html.markdown @@ -290,6 +290,9 @@ The following arguments are supported: Labels to apply to this instance. These can be later modified by the setLabels method. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `tags` - (Optional) The Compute Engine tags to add to instance. @@ -407,6 +410,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - Instance update time. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/privateca_ca_pool.html.markdown b/website/docs/r/privateca_ca_pool.html.markdown index cbac47e43cb..483a24e4ade 100644 --- a/website/docs/r/privateca_ca_pool.html.markdown +++ b/website/docs/r/privateca_ca_pool.html.markdown @@ -186,6 +186,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -565,6 +568,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/locations/{{location}}/caPools/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/privateca_certificate.html.markdown b/website/docs/r/privateca_certificate.html.markdown index 7e6c7e3968f..4965a415078 100644 --- a/website/docs/r/privateca_certificate.html.markdown +++ b/website/docs/r/privateca_certificate.html.markdown @@ -467,6 +467,9 @@ The following arguments are supported: (Optional) Labels with user-defined metadata to apply to this resource. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `pem_csr` - (Optional) Immutable. A pem-encoded X.509 certificate signing request (CSR). @@ -839,12 +842,6 @@ In addition to the arguments listed above, the following computed attributes are * `pem_certificate_chain` - The chain that may be used to verify the X.509 certificate. Expected to be in issuer-to-root order according to RFC 5246. -* `pem_certificates` - - (Deprecated) - Required. Expected to be in leaf-to-root order according to RFC 5246. - - ~> **Warning:** `pem_certificates` is deprecated and will be removed in a future major release. Use `pem_certificate_chain` instead. - * `create_time` - The time that this resource was created on the server. This is in RFC3339 text format. @@ -853,6 +850,13 @@ In addition to the arguments listed above, the following computed attributes are Output only. The time at which this CertificateAuthority was updated. This is in RFC3339 text format. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `revocation_details` block contains: @@ -876,13 +880,6 @@ In addition to the arguments listed above, the following computed attributes are A structured description of the issued X.509 certificate. Structure is [documented below](#nested_x509_description). -* `config_values` - - (Output, Deprecated) - Describes some of the technical fields in a certificate. - Structure is [documented below](#nested_config_values). - - ~> **Warning:** `config_values` is deprecated and will be removed in a future release. Use `x509_description` instead. - * `public_key` - (Output) A PublicKey describes a public key. @@ -1244,118 +1241,6 @@ In addition to the arguments listed above, the following computed attributes are The value can be a hostname or a domain with a leading period (like `.example.com`) -The `config_values` block contains: - -* `key_usage` - - (Output) - Indicates the intended use for keys that correspond to a certificate. - Structure is [documented below](#nested_key_usage). - - -The `key_usage` block contains: - -* `base_key_usage` - - (Output) - Describes high-level ways in which a key may be used. - Structure is [documented below](#nested_base_key_usage). - -* `extended_key_usage` - - (Output) - Describes high-level ways in which a key may be used. - Structure is [documented below](#nested_extended_key_usage). - -* `unknown_extended_key_usages` - - (Output) - An ObjectId specifies an object identifier (OID). These provide context and describe types in ASN.1 messages. 
- Structure is [documented below](#nested_unknown_extended_key_usages). - - -The `base_key_usage` block contains: - -* `key_usage_options` - - (Output) - Describes high-level ways in which a key may be used. - Structure is [documented below](#nested_key_usage_options). - - -The `key_usage_options` block contains: - -* `digital_signature` - - (Output) - The key may be used for digital signatures. - -* `content_commitment` - - (Output) - The key may be used for cryptographic commitments. Note that this may also be referred to as "non-repudiation". - -* `key_encipherment` - - (Output) - The key may be used to encipher other keys. - -* `data_encipherment` - - (Output) - The key may be used to encipher data. - -* `key_agreement` - - (Output) - The key may be used in a key agreement protocol. - -* `cert_sign` - - (Output) - The key may be used to sign certificates. - -* `crl_sign` - - (Output) - The key may be used sign certificate revocation lists. - -* `encipher_only` - - (Output) - The key may be used to encipher only. - -* `decipher_only` - - (Output) - The key may be used to decipher only. - -The `extended_key_usage` block contains: - -* `server_auth` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.1. Officially described as "TLS WWW server authentication", though regularly used for non-WWW TLS. - -* `client_auth` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.2. Officially described as "TLS WWW client authentication", though regularly used for non-WWW TLS. - -* `code_signing` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.3. Officially described as "Signing of downloadable executable code client authentication". - -* `email_protection` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.4. Officially described as "Email protection". - -* `time_stamping` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.8. Officially described as "Binding the hash of an object to a time". - -* `ocsp_signing` - - (Output) - Corresponds to OID 1.3.6.1.5.5.7.3.9. Officially described as "Signing OCSP responses". - -The `unknown_extended_key_usages` block contains: - -* `obect_id` - - (Output) - Required. Describes how some of the technical fields in a certificate should be populated. - Structure is [documented below](#nested_obect_id). - - -The `obect_id` block contains: - -* `object_id_path` - - (Output) - An ObjectId specifies an object identifier (OID). These provide context and describe types in ASN.1 messages. - The `public_key` block contains: * `key` - diff --git a/website/docs/r/privateca_certificate_authority.html.markdown b/website/docs/r/privateca_certificate_authority.html.markdown index b8bbfb17697..90cb27a657e 100644 --- a/website/docs/r/privateca_certificate_authority.html.markdown +++ b/website/docs/r/privateca_certificate_authority.html.markdown @@ -685,6 +685,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -749,6 +752,13 @@ In addition to the arguments listed above, the following computed attributes are A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. 
Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z". +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `access_urls` block contains: diff --git a/website/docs/r/privateca_certificate_template.html.markdown b/website/docs/r/privateca_certificate_template.html.markdown index 8871fcb55b9..6c223cc1212 100644 --- a/website/docs/r/privateca_certificate_template.html.markdown +++ b/website/docs/r/privateca_certificate_template.html.markdown @@ -45,10 +45,6 @@ resource "google_privateca_certificate_template" "primary" { } } - labels = { - label-two = "value-two" - } - passthrough_extensions { additional_extensions { object_id_path = [1, 6] @@ -107,6 +103,10 @@ resource "google_privateca_certificate_template" "primary" { } project = "my-project-name" + + labels = { + label-two = "value-two" + } } @@ -145,6 +145,8 @@ The `object_id` block supports: * `labels` - (Optional) Optional. Labels with user-defined metadata. + +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `passthrough_extensions` - (Optional) @@ -353,6 +355,12 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - Output only. The time at which this CertificateTemplate was created. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + * `update_time` - Output only. The time at which this CertificateTemplate was updated. diff --git a/website/docs/r/pubsub_subscription.html.markdown b/website/docs/r/pubsub_subscription.html.markdown index 91b2113e4dc..8ee437f6d10 100644 --- a/website/docs/r/pubsub_subscription.html.markdown +++ b/website/docs/r/pubsub_subscription.html.markdown @@ -324,6 +324,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this Subscription. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `bigquery_config` - (Optional) If delivery to BigQuery is used with this subscription, this field is used to configure it. @@ -618,6 +621,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/subscriptions/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
+ ## Timeouts diff --git a/website/docs/r/pubsub_topic.html.markdown b/website/docs/r/pubsub_topic.html.markdown index b2de12a4c0b..bbd8627f01f 100644 --- a/website/docs/r/pubsub_topic.html.markdown +++ b/website/docs/r/pubsub_topic.html.markdown @@ -134,6 +134,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this Topic. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `message_storage_policy` - (Optional) Policy constraining the set of Google Cloud Platform regions where @@ -192,6 +195,13 @@ In addition to the arguments listed above, the following computed attributes are * `id` - an identifier for the resource with format `projects/{{project}}/topics/{{name}}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/recaptcha_enterprise_key.html.markdown b/website/docs/r/recaptcha_enterprise_key.html.markdown index c8101e00532..71091af1504 100644 --- a/website/docs/r/recaptcha_enterprise_key.html.markdown +++ b/website/docs/r/recaptcha_enterprise_key.html.markdown @@ -33,15 +33,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_package_names = [] } - labels = { - label-one = "value-one" - } - project = "my-project-name" testing_options { testing_score = 0.8 } + + labels = { + label-one = "value-one" + } } @@ -57,15 +57,15 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_bundle_ids = [] } - labels = { - label-one = "value-one" - } - project = "my-project-name" testing_options { testing_score = 1 } + + labels = { + label-one = "value-one" + } } @@ -75,13 +75,14 @@ A minimal test of recaptcha enterprise key ```hcl resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - labels = {} project = "my-project-name" web_settings { integration_type = "SCORE" allow_all_domains = true } + + labels = {} } @@ -91,12 +92,7 @@ A basic test of recaptcha enterprise key that can be used by websites ```hcl resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - - labels = { - label-one = "value-one" - } - - project = "my-project-name" + project = "my-project-name" testing_options { testing_challenge = "NOCAPTCHA" @@ -109,6 +105,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allowed_domains = [] challenge_security_preference = "USABILITY" } + + labels = { + label-one = "value-one" + } } @@ -118,12 +118,7 @@ A basic test of recaptcha enterprise key with score integration type that can be ```hcl resource "google_recaptcha_enterprise_key" "primary" { display_name = "display-name-one" - - labels = { - label-one = "value-one" - } - - project = "my-project-name" + project = "my-project-name" testing_options { testing_score = 0.5 @@ -135,6 +130,10 @@ resource "google_recaptcha_enterprise_key" "primary" { allow_amp_traffic = false allowed_domains = [] } + + labels = { + label-one = "value-one" + } } @@ -163,6 +162,8 @@ The following arguments are supported: * `labels` - (Optional) See [Creating and managing labels](https://cloud.google.com/recaptcha-enterprise/docs/labels). 
+ +**Note**: This field is non-authoritative, and will only manage the labels present in your configuration. Please refer to the field `effective_labels` for all of the labels present on the resource. * `project` - (Optional) @@ -239,9 +240,15 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - The timestamp corresponding to the creation of this Key. +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + * `name` - The resource name for the Key in the format "projects/{project}/keys/{key}". +* `terraform_labels` - + The combination of labels configured directly on the resource and default labels configured on the provider. + ## Timeouts This resource provides the following diff --git a/website/docs/r/redis_instance.html.markdown b/website/docs/r/redis_instance.html.markdown index 74ece91dd9e..86d8f98e99a 100644 --- a/website/docs/r/redis_instance.html.markdown +++ b/website/docs/r/redis_instance.html.markdown @@ -134,7 +134,7 @@ resource "google_redis_instance" "cache-persis" { // If this network hasn't been created and you are using this example in your // config, add an additional network resource or change // this from "data"to "resource" -data "google_compute_network" "redis-network" { +resource "google_compute_network" "redis-network" { name = "redis-test-network" } @@ -143,11 +143,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.redis-network.id + network = google_compute_network.redis-network.id } resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.redis-network.id + network = google_compute_network.redis-network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -160,7 +160,7 @@ resource "google_redis_instance" "cache" { location_id = "us-central1-a" alternative_location_id = "us-central1-f" - authorized_network = data.google_compute_network.redis-network.id + authorized_network = google_compute_network.redis-network.id connect_mode = "PRIVATE_SERVICE_ACCESS" redis_version = "REDIS_4_0" @@ -310,6 +310,8 @@ The following arguments are supported: * `labels` - (Optional) Resource labels to represent user provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `redis_configs` - (Optional) @@ -562,6 +564,13 @@ In addition to the arguments listed above, the following computed attributes are Output only. The port number of the exposed readonly redis endpoint. Standard tier only. Write requests should target 'port'. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
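As a hedged illustration of the non-authoritative labels behavior described above (not part of the committed docs; the resource name and label values are assumed):

```hcl
resource "google_redis_instance" "labeled" {
  name           = "example-redis"
  memory_size_gb = 1

  # Only these labels are managed by this configuration.
  labels = {
    env = "test"
  }
}

# Labels applied to the instance outside Terraform do not produce a plan diff
# on `labels`; they are only reported via the computed `effective_labels`.
output "redis_effective_labels" {
  value = google_redis_instance.labeled.effective_labels
}
```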
+ The `maintenance_schedule` block contains: diff --git a/website/docs/r/secret_manager_secret.html.markdown b/website/docs/r/secret_manager_secret.html.markdown index 190b6358900..727bf2c2d62 100644 --- a/website/docs/r/secret_manager_secret.html.markdown +++ b/website/docs/r/secret_manager_secret.html.markdown @@ -133,12 +133,6 @@ The following arguments are supported: The `replication` block supports: -* `automatic` - - (Optional, Deprecated) - The Secret will automatically be replicated without any restrictions. - - ~> **Warning:** `automatic` is deprecated and will be removed in a future major release. Use `auto` instead. - * `auto` - (Optional) The Secret will automatically be replicated without any restrictions. @@ -206,6 +200,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `annotations` - (Optional) Custom metadata about the secret. @@ -285,6 +282,16 @@ In addition to the arguments listed above, the following computed attributes are * `create_time` - The time at which the Secret was created. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/service_directory_namespace.html.markdown b/website/docs/r/service_directory_namespace.html.markdown index 2a03b656ee5..16630471fb3 100644 --- a/website/docs/r/service_directory_namespace.html.markdown +++ b/website/docs/r/service_directory_namespace.html.markdown @@ -78,6 +78,9 @@ The following arguments are supported: labels can be associated with a given resource. Label keys and values can be no longer than 63 characters. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -92,6 +95,13 @@ In addition to the arguments listed above, the following computed attributes are The resource name for the namespace in the format `projects/*/locations/*/namespaces/*`. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/spanner_instance.html.markdown b/website/docs/r/spanner_instance.html.markdown index 3407690a20a..a43600ba10b 100644 --- a/website/docs/r/spanner_instance.html.markdown +++ b/website/docs/r/spanner_instance.html.markdown @@ -131,6 +131,9 @@ The following arguments are supported: An object containing a list of "key": value pairs. 
Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `project` - (Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used. @@ -147,6 +150,13 @@ In addition to the arguments listed above, the following computed attributes are * `state` - Instance status: `CREATING` or `READY`. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/tpu_node.html.markdown b/website/docs/r/tpu_node.html.markdown index c9a83380ee3..329f47455ea 100644 --- a/website/docs/r/tpu_node.html.markdown +++ b/website/docs/r/tpu_node.html.markdown @@ -84,8 +84,8 @@ resource "google_tpu_node" "tpu" { } } -data "google_compute_network" "network" { - name = "default" +resource "google_compute_network" "network" { + name = "tpu-node-network" } resource "google_compute_global_address" "service_range" { @@ -93,11 +93,11 @@ resource "google_compute_global_address" "service_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 16 - network = data.google_compute_network.network.id + network = google_compute_network.network.id } resource "google_service_networking_connection" "private_service_connection" { - network = data.google_compute_network.network.id + network = google_compute_network.network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.service_range.name] } @@ -161,6 +161,8 @@ The following arguments are supported: * `labels` - (Optional) Resource labels to represent user provided metadata. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `zone` - (Optional) @@ -194,6 +196,13 @@ In addition to the arguments listed above, the following computed attributes are to the first (index 0) entry. Structure is [documented below](#nested_network_endpoints). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `network_endpoints` block contains: diff --git a/website/docs/r/vertex_ai_dataset.html.markdown b/website/docs/r/vertex_ai_dataset.html.markdown index a52dc7f4d37..ed3c84a70cb 100644 --- a/website/docs/r/vertex_ai_dataset.html.markdown +++ b/website/docs/r/vertex_ai_dataset.html.markdown @@ -41,6 +41,10 @@ resource "google_vertex_ai_dataset" "dataset" { display_name = "terraform" metadata_schema_uri = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" region = "us-central1" + + labels = { + env = "test" + } } ``` @@ -65,6 +69,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this Workflow. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field `effective_labels` for all of the labels present on the resource. + * `encryption_spec` - (Optional) Customer-managed encryption key spec for a Dataset. If set, this Dataset and all sub-resources of this Dataset will be secured by this key. @@ -100,6 +107,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The timestamp of when the dataset was last updated in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/vertex_ai_endpoint.html.markdown b/website/docs/r/vertex_ai_endpoint.html.markdown index 0434866872d..e4abec8401a 100644 --- a/website/docs/r/vertex_ai_endpoint.html.markdown +++ b/website/docs/r/vertex_ai_endpoint.html.markdown @@ -41,7 +41,7 @@ resource "google_vertex_ai_endpoint" "endpoint" { labels = { label-one = "value-one" } - network = "projects/${data.google_project.project.number}/global/networks/${data.google_compute_network.vertex_network.name}" + network = "projects/${data.google_project.project.number}/global/networks/${google_compute_network.vertex_network.name}" encryption_spec { kms_key_name = "kms-name" } @@ -51,7 +51,7 @@ resource "google_vertex_ai_endpoint" "endpoint" { } resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.vertex_range.name] } @@ -61,10 +61,10 @@ resource "google_compute_global_address" "vertex_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 24 - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id } -data "google_compute_network" "vertex_network" { +resource "google_compute_network" "vertex_network" { name = "network-name" } @@ -105,6 +105,8 @@ The following arguments are supported: * `labels` - (Optional) The labels with user-defined metadata to organize your Endpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `encryption_spec` - (Optional) @@ -151,6 +153,13 @@ In addition to the arguments listed above, the following computed attributes are * `model_deployment_monitoring_job` - Output only. Resource name of the Model Monitoring job associated with this Endpoint if monitoring is enabled by CreateModelDeploymentMonitoringJob. Format: `projects/{project}/locations/{location}/modelDeploymentMonitoringJobs/{model_deployment_monitoring_job}` +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. 
+ +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `deployed_models` block contains: diff --git a/website/docs/r/vertex_ai_featurestore.html.markdown b/website/docs/r/vertex_ai_featurestore.html.markdown index ae0f7d0d4d8..d547167d6fd 100644 --- a/website/docs/r/vertex_ai_featurestore.html.markdown +++ b/website/docs/r/vertex_ai_featurestore.html.markdown @@ -108,6 +108,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this Featurestore. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `online_serving_config` - (Optional) Config for online serving resources. @@ -174,6 +177,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The timestamp of when the featurestore was last updated in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/vertex_ai_featurestore_entitytype.html.markdown b/website/docs/r/vertex_ai_featurestore_entitytype.html.markdown index 425e83c633a..e2cd78b629d 100644 --- a/website/docs/r/vertex_ai_featurestore_entitytype.html.markdown +++ b/website/docs/r/vertex_ai_featurestore_entitytype.html.markdown @@ -141,6 +141,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this EntityType. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `monitoring_config` - (Optional) The default monitoring configuration for all Features under this EntityType. @@ -240,6 +243,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The timestamp of when the featurestore was last updated in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/vertex_ai_featurestore_entitytype_feature.html.markdown b/website/docs/r/vertex_ai_featurestore_entitytype_feature.html.markdown index 491a099200a..1c75bd1726c 100644 --- a/website/docs/r/vertex_ai_featurestore_entitytype_feature.html.markdown +++ b/website/docs/r/vertex_ai_featurestore_entitytype_feature.html.markdown @@ -147,6 +147,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to the feature. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. 
+ * `description` - (Optional) Description of the feature. @@ -167,6 +170,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The timestamp when the entity type was most recently updated in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/vertex_ai_index.html.markdown b/website/docs/r/vertex_ai_index.html.markdown index cbfbbac63fc..dddaa3535ed 100644 --- a/website/docs/r/vertex_ai_index.html.markdown +++ b/website/docs/r/vertex_ai_index.html.markdown @@ -141,6 +141,8 @@ The following arguments are supported: * `labels` - (Optional) The labels with user-defined metadata to organize your Indexes. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `index_update_method` - (Optional) @@ -273,6 +275,13 @@ In addition to the arguments listed above, the following computed attributes are Stats of the index resource. Structure is [documented below](#nested_index_stats). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + The `deployed_indexes` block contains: diff --git a/website/docs/r/vertex_ai_index_endpoint.html.markdown b/website/docs/r/vertex_ai_index_endpoint.html.markdown index 19cfb33ab7d..dcaded22c16 100644 --- a/website/docs/r/vertex_ai_index_endpoint.html.markdown +++ b/website/docs/r/vertex_ai_index_endpoint.html.markdown @@ -42,14 +42,14 @@ resource "google_vertex_ai_index_endpoint" "index_endpoint" { labels = { label-one = "value-one" } - network = "projects/${data.google_project.project.number}/global/networks/${data.google_compute_network.vertex_network.name}" + network = "projects/${data.google_project.project.number}/global/networks/${google_compute_network.vertex_network.name}" depends_on = [ google_service_networking_connection.vertex_vpc_connection ] } resource "google_service_networking_connection" "vertex_vpc_connection" { - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id service = "servicenetworking.googleapis.com" reserved_peering_ranges = [google_compute_global_address.vertex_range.name] } @@ -59,10 +59,10 @@ resource "google_compute_global_address" "vertex_range" { purpose = "VPC_PEERING" address_type = "INTERNAL" prefix_length = 24 - network = data.google_compute_network.vertex_network.id + network = google_compute_network.vertex_network.id } -data "google_compute_network" "vertex_network" { +resource "google_compute_network" "vertex_network" { name = "network-name" } @@ -109,6 +109,8 @@ The following arguments are supported: * `labels` - (Optional) The labels with user-defined metadata to organize your Indexes. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. 
+ Please refer to the field `effective_labels` for all of the labels present on the resource. * `network` - (Optional) @@ -150,6 +152,13 @@ In addition to the arguments listed above, the following computed attributes are * `public_endpoint_domain_name` - If publicEndpointEnabled is true, this field will be populated with the domain name to use for this index endpoint. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/vertex_ai_tensorboard.html.markdown b/website/docs/r/vertex_ai_tensorboard.html.markdown index d97371a713d..24247b11344 100644 --- a/website/docs/r/vertex_ai_tensorboard.html.markdown +++ b/website/docs/r/vertex_ai_tensorboard.html.markdown @@ -105,6 +105,9 @@ The following arguments are supported: (Optional) The labels with user-defined metadata to organize your Tensorboards. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `region` - (Optional) The region of the tensorboard. eg us-central1 @@ -141,6 +144,13 @@ In addition to the arguments listed above, the following computed attributes are * `update_time` - The timestamp of when the Tensorboard was last updated in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/workflows_workflow.html.markdown b/website/docs/r/workflows_workflow.html.markdown index f6dcfe7ac02..18a7eabaa4b 100644 --- a/website/docs/r/workflows_workflow.html.markdown +++ b/website/docs/r/workflows_workflow.html.markdown @@ -47,6 +47,9 @@ resource "google_workflows_workflow" "example" { region = "us-central1" description = "Magic" service_account = google_service_account.test_account.id + labels = { + env = "test" + } source_contents = <<-EOF # This is a sample workflow. You can replace it with your source code. # @@ -99,6 +102,9 @@ The following arguments are supported: (Optional) A set of key/value label pairs to assign to this Workflow. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. + * `service_account` - (Optional) Name of the service account associated with the latest workflow version. This service @@ -146,6 +152,13 @@ In addition to the arguments listed above, the following computed attributes are * `revision_id` - The revision of the workflow. A new one is generated if the service account or source contents is changed. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. 
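A small, hypothetical addition (not part of the committed docs) contrasting the three label attributes on the `google_workflows_workflow.example` resource shown earlier on this page:

```hcl
output "configured_labels" {
  # Only the labels declared in this configuration.
  value = google_workflows_workflow.example.labels
}

output "terraform_labels" {
  # Configured labels merged with any provider-level default labels.
  value = google_workflows_workflow.example.terraform_labels
}

output "effective_labels" {
  # Everything present on the workflow in GCP, including labels added by
  # other clients and services.
  value = google_workflows_workflow.example.effective_labels
}
```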
+ ## Timeouts diff --git a/website/docs/r/workstations_workstation.html.markdown b/website/docs/r/workstations_workstation.html.markdown index 5000d62f8fd..c654549a63c 100644 --- a/website/docs/r/workstations_workstation.html.markdown +++ b/website/docs/r/workstations_workstation.html.markdown @@ -137,6 +137,8 @@ The following arguments are supported: * `labels` - (Optional) Client-specified labels that are applied to the resource and that are also propagated to the underlying Compute Engine resources. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `annotations` - (Optional) @@ -173,6 +175,16 @@ In addition to the arguments listed above, the following computed attributes are * `state` - Current state of the workstation. +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + ## Timeouts diff --git a/website/docs/r/workstations_workstation_cluster.html.markdown b/website/docs/r/workstations_workstation_cluster.html.markdown index 0a1e736bd6d..4ea79bdec6e 100644 --- a/website/docs/r/workstations_workstation_cluster.html.markdown +++ b/website/docs/r/workstations_workstation_cluster.html.markdown @@ -147,6 +147,8 @@ The following arguments are supported: * `labels` - (Optional) Client-specified labels that are applied to the resource and that are also propagated to the underlying Compute Engine resources. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `display_name` - (Optional) @@ -219,6 +221,16 @@ In addition to the arguments listed above, the following computed attributes are Status conditions describing the current resource state. Structure is [documented below](#nested_conditions). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. 
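For completeness, a hypothetical snippet (not part of the committed docs) reading the computed label and annotation attributes; it assumes a workstation cluster declared elsewhere in the configuration under the name `google_workstations_workstation_cluster.default`:

```hcl
output "cluster_effective_labels" {
  value = google_workstations_workstation_cluster.default.effective_labels
}

output "cluster_effective_annotations" {
  # Includes annotations applied by other clients and services, not just the
  # ones declared in this configuration.
  value = google_workstations_workstation_cluster.default.effective_annotations
}
```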
+ The `conditions` block contains: diff --git a/website/docs/r/workstations_workstation_config.html.markdown b/website/docs/r/workstations_workstation_config.html.markdown index 1aa1add9493..d5321de2129 100644 --- a/website/docs/r/workstations_workstation_config.html.markdown +++ b/website/docs/r/workstations_workstation_config.html.markdown @@ -79,6 +79,13 @@ resource "google_workstations_workstation_config" "default" { running_timeout = "21600s" replica_zones = ["us-central1-a", "us-central1-b"] + annotations = { + label-one = "value-one" + } + + labels = { + "label" = "key" + } host { gce_instance { @@ -515,6 +522,8 @@ The following arguments are supported: * `labels` - (Optional) Client-specified labels that are applied to the resource and that are also propagated to the underlying Compute Engine resources. + **Note**: This field is non-authoritative, and will only manage the labels present in your configuration. + Please refer to the field `effective_labels` for all of the labels present on the resource. * `annotations` - (Optional) @@ -754,6 +763,16 @@ In addition to the arguments listed above, the following computed attributes are Status conditions describing the current resource state. Structure is [documented below](#nested_conditions). +* `terraform_labels` - + The combination of labels configured directly on the resource + and default labels configured on the provider. + +* `effective_labels` - + All of labels (key/value pairs) present on the resource in GCP, including the labels configured through Terraform, other clients and services. + +* `effective_annotations` - + All of annotations (key/value pairs) present on the resource in GCP, including the annotations configured through Terraform, other clients and services. + The `conditions` block contains: