Bug on ingress/egress dry_run/enforced resources #20519
Hi @maleksah! I'm trying to replicate this issue, but since I don't have access to your locals and variables I can't get the same result as you. This is my current code based on yours:
Could you share the missing information and confirm the exact Terraform version you are using? Note that you can use example values in place of the real ones. Note: @NickElliot, @roaks3, this looks similar to the Sarah French issue. I'm leaving this note for your review in case it can be helpful.
Hello @ggtisc, here is a simplified version of the code.

Terraform Version & Provider Version(s)
Terraform v1.8.5
Affected Resource(s)
google_access_context_manager_ingress_policy

Terraform Configuration

```hcl
locals {
access_policy_id = "1234567890"
perimeter = "accessPolicies/${local.access_policy_id}/servicePerimeters/test_perimeter"
restricted_services = [
"storage.googleapis.com",
# "artifactregistry.googleapis.com",
]
enforced = false
vpc_sc_ingress = [{
ingress_from = {
identities = [
"user:[email protected]",
"serviceAccount:[email protected]"
]
sources = [
{
resource = null
access_level = "*"
}
]
}
ingress_to = {
resources = ["*"]
operations = [{
service_name = "*"
}]
}
}]
vpc_sc_egress = [
{
egress_from = {
identity_type = null
identities = [
"user:[email protected]",
"serviceAccount:[email protected]"
]
}
egress_to = [{
resources = ["*"]
operations = [{ service_name = "storage.googleapis.com", methods = ["google.storage.objects.get"] }]
}]
}
]
}
resource "google_access_context_manager_service_perimeter" "this" {
description = "VPC SC perimeter test"
parent = "accessPolicies/${local.access_policy_id}"
name = local.perimeter
title = "Test Perimeter"
perimeter_type = "PERIMETER_TYPE_REGULAR"
use_explicit_dry_run_spec = true
dynamic "spec" {
for_each = !local.enforced ? [1] : []
content {
restricted_services = local.restricted_services
resources = []
access_levels = []
vpc_accessible_services {
enable_restriction = true
allowed_services = ["RESTRICTED-SERVICES"]
}
}
}
dynamic "status" {
for_each = local.enforced ? [1] : []
content {
restricted_services = local.restricted_services
resources = []
access_levels = []
vpc_accessible_services {
enable_restriction = true
allowed_services = ["RESTRICTED-SERVICES"]
}
}
}
lifecycle {
ignore_changes = [
status[0].resources,
status[0].ingress_policies,
status[0].egress_policies,
spec[0].resources,
spec[0].ingress_policies,
spec[0].egress_policies,
]
}
}
resource "google_access_context_manager_service_perimeter_dry_run_ingress_policy" "this" {
for_each = !local.enforced ? {for v in local.vpc_sc_ingress : join("-", v.ingress_from.identities) => v} : {}
perimeter = google_access_context_manager_service_perimeter.this.name
ingress_from {
identities = each.value.ingress_from.identities
# sources {} # if left empty all sources
dynamic "sources" {
for_each = each.value.ingress_from.sources
content {
resource = sources.value.resource
access_level = sources.value.access_level
}
}
}
ingress_to {
resources = each.value.ingress_to.resources
dynamic "operations" {
for_each = each.value.ingress_to.operations
content {
service_name = operations.value.service_name
dynamic "method_selectors" {
for_each = operations.value.service_name != "*" ? operations.value.methods : []
content {
method = method_selectors.value
}
}
}
}
}
lifecycle {
create_before_destroy = true
}
}
resource "google_access_context_manager_service_perimeter_dry_run_egress_policy" "this" {
for_each = !local.enforced ? {
for v in local.vpc_sc_egress :
(length(v.egress_from.identities) > 0 ? join("-", v.egress_from.identities) : v.egress_from.identity_type) => v
} : {}
perimeter = google_access_context_manager_service_perimeter.this.name
egress_from {
identities = each.value.egress_from.identities
identity_type = each.value.egress_from.identity_type
}
dynamic "egress_to" {
for_each = each.value.egress_to
content {
resources = egress_to.value.resources
dynamic "operations" {
for_each = egress_to.value.operations
content {
service_name = operations.value.service_name
dynamic "method_selectors" {
for_each = operations.value.service_name != "*" ? operations.value.methods : []
content {
method = method_selectors.value
}
}
}
}
}
}
lifecycle {
create_before_destroy = true
}
}
```

Steps to reproduce

1. Run `terraform apply` for this code.
2. Run `terraform plan`: all good, no changes for google_access_context_manager_service_perimeter_dry_run_ingress_policy and google_access_context_manager_service_perimeter_dry_run_egress_policy.
3. Add a service to the restricted services of the VPC SC perimeter with Terraform.
4. Then, without any other change, run `terraform plan`: Terraform tries to create google_access_context_manager_service_perimeter_dry_run_ingress_policy and google_access_context_manager_service_perimeter_dry_run_egress_policy again, even though they already exist for the perimeter (created at step 1).

I observed that this does not work well when we have multiple identities in ingress/egress dry_run/enforced rules. This is the same behaviour for the enforced resources.
Every time there is a change on the parent VPC SC perimeter (on the restricted services, for example), if we do a plan after this change, Terraform tries to recreate the ingress/egress dry_run/enforced rules even though they already exist (we see them in the UI). This is really impacting our use of VPC SC in our company. Can we have a fix ASAP please? Thanks
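For illustration (this exact edit is a hypothetical example, not part of the original report), the perimeter change in step 3 above can be as small as uncommenting the second restricted service that is already present in the locals block:

```hcl
locals {
  restricted_services = [
    "storage.googleapis.com",
    # Uncommenting this second service is enough of a perimeter change
    # to trigger the unwanted recreation of the ingress/egress policies
    # on the next plan, as described in step 4.
    "artifactregistry.googleapis.com",
  ]
}
```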
Hi, I was also able to reproduce this issue with the latest 6.12.0 provider version.

Steps to reproduce:

1.) Deploy a project, 20 service accounts, and a VPC-SC perimeter with an ingress rule for the service accounts from the project:

```hcl
variable "billing_account" {}
variable "org_id" {}
variable "access_context_manager_policy_id" {}
locals {
restricted_services = ["storage.googleapis.com"]
}
resource "google_folder" "folder" {
display_name = "vpc-sc-test"
parent = "organizations/${var.org_id}"
deletion_protection = false
}
resource "random_string" "project_suffix" {
length = 5
special = false
upper = false
}
resource "google_project" "project" {
name = "VPC SC Test Project"
project_id = "vpc-sc-test-project-${random_string.project_suffix.result}"
folder_id = google_folder.folder.name
billing_account = var.billing_account
deletion_policy = "DELETE"
}
resource "random_string" "sa_suffix" {
count = 20
length = 6
special = false
upper = false
}
resource "google_service_account" "sa" {
count = 20
project = google_project.project.project_id
account_id = "service-account-${random_string.sa_suffix[count.index].result}"
}
resource "google_access_context_manager_service_perimeter" "service-perimeter" {
parent = "accessPolicies/${var.access_context_manager_policy_id}"
name = "accessPolicies/${var.access_context_manager_policy_id}/servicePerimeters/vpc_sc_test_001"
title = "vpc_sc_test_001"
status {
restricted_services = local.restricted_services
resources = ["projects/${google_project.project.number}"]
}
lifecycle {
ignore_changes = [status[0].ingress_policies, status[0].egress_policies]
}
}
resource "google_access_context_manager_service_perimeter_ingress_policy" "sa_ingress" {
perimeter = google_access_context_manager_service_perimeter.service-perimeter.name
ingress_from {
identities = [for i in range(20) : google_service_account.sa[i].member]
sources {
access_level = "*"
}
}
ingress_to {
resources = ["*"]
operations {
service_name = "*"
}
}
lifecycle {
create_before_destroy = true
}
}
```

This first apply will work nicely and follow-up runs will show no diff.

2.) Change the restricted_services in the locals block:

```hcl
locals {
restricted_services = ["storage.googleapis.com","bigquery.googleapis.com"]
}
...
```

4.) Without changing any code, run `terraform plan`.
I observed another random bug on the same resources:
I have multiple pipelines (more than 100) that are executed in parallel, each adding VPC SC ingress/egress rules to the same VPC SC perimeter. Sometimes the terraform apply completes successfully but, when I go to the VPC SC perimeter, I don't see the ingress/egress rules that were added. I think that adding ingress/egress rules to the same perimeter in parallel is not working well... Can you check please? This is very critical for my client. Thanks for your help!
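To make the setup easier to picture, here is a minimal sketch of the scenario being described, assuming two independent Terraform configurations (separate state files) that each add a dry-run ingress policy to the same perimeter. The resource names and identities below are hypothetical placeholders.

```hcl
# Pipeline/configuration A (its own state file)
resource "google_access_context_manager_service_perimeter_dry_run_ingress_policy" "team_a" {
  perimeter = "accessPolicies/1234567890/servicePerimeters/test_perimeter"

  ingress_from {
    identities = ["serviceAccount:[email protected]"]
  }

  ingress_to {
    resources = ["*"]
    operations {
      service_name = "*"
    }
  }
}

# Pipeline/configuration B (separate state file), applied at the same
# time against the same perimeter.
resource "google_access_context_manager_service_perimeter_dry_run_ingress_policy" "team_b" {
  perimeter = "accessPolicies/1234567890/servicePerimeters/test_perimeter"

  ingress_from {
    identities = ["serviceAccount:[email protected]"]
  }

  ingress_to {
    resources = ["*"]
    operations {
      service_name = "*"
    }
  }
}
```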
Created GoogleCloudPlatform/magic-modules#12572 to address this
If the resources are in different Terraform configurations, then there is nothing stopping update calls from happening at the same time, so the state may not be updated in the other Terraform configuration when it tries to apply its own updates. We are working on adding etags, which will help prevent this case by failing updates that are not made against the latest version of the resource.
Assigned to @Charlesleonius because they appear to have a solution, but let me know if you still need support and I can pass this to the oncall.
Terraform Version & Provider Version(s)
Terraform v1.8.5
on darwin_arm64
Affected Resource(s)
google_access_context_manager_ingress_policy
google_access_context_manager_egress_policy
google_access_context_manager_service_perimeter_dry_run_ingress_policy
google_access_context_manager_service_perimeter_dry_run_egress_policy
Terraform Configuration
This is the VPC SC perimeter config.
Here, I ignore all ingress/egress rules in dry_run and enforced modes.
Ingress/egress rules are managed in another Terraform stage.
Here is an example of the dry_run ingress policy:
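The two configuration blocks referenced above did not survive in this copy of the issue, so here is a condensed sketch of the pattern described, based on the full configuration shared in the comments (the locals and the service-account identity are placeholders):

```hcl
resource "google_access_context_manager_service_perimeter" "this" {
  parent                    = "accessPolicies/${local.access_policy_id}"
  name                      = local.perimeter
  title                     = "Test Perimeter"
  perimeter_type            = "PERIMETER_TYPE_REGULAR"
  use_explicit_dry_run_spec = true

  spec {
    restricted_services = local.restricted_services
  }

  # Ingress/egress rules are ignored here because they are managed by the
  # dedicated policy resources in another Terraform stage.
  lifecycle {
    ignore_changes = [
      spec[0].ingress_policies,
      spec[0].egress_policies,
      status[0].ingress_policies,
      status[0].egress_policies,
    ]
  }
}

resource "google_access_context_manager_service_perimeter_dry_run_ingress_policy" "this" {
  perimeter = google_access_context_manager_service_perimeter.this.name

  ingress_from {
    identities = ["serviceAccount:[email protected]"]
    sources {
      access_level = "*"
    }
  }

  ingress_to {
    resources = ["*"]
    operations {
      service_name = "*"
    }
  }
}
```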
Debug Output
No response
Expected Behavior
No response
Actual Behavior
When we update something on the VPC SC perimeter, for example when I add an API to the restricted services and apply the Terraform,
then, when I do a plan on the ingress or egress policies, Terraform tries to create them again, even though they already exist on the perimeter (they were created with the same Terraform).
If we do a terraform plan on the ingress/egress resources without changing the VPC SC perimeter, it works as expected.
The problem occurs only when we update the google_access_context_manager_service_perimeter resource (e.g. add or remove an API in the restricted services). Then, when we do a plan on the ingress/egress policies, Terraform tries to create the policies again (they have already been created with the same Terraform).
Steps to reproduce
I see that Terraform wants to create the ingress/egress policies even though they already exist on the perimeter (created before with the same Terraform).
Important Factoids
No response
References
No response