
Unsupported path in fieldMask #20187

Open
jagada1010 opened this issue Nov 5, 2024 · 5 comments

Comments

@jagada1010

jagada1010 commented Nov 5, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to a user, that user is claiming responsibility for the issue.
  • Customers working with a Google Technical Account Manager or Customer Engineer can ask them to reach out internally to expedite investigation and resolution of this issue.

Terraform Version & Provider Version(s)

Terraform v6.9.0

  • provider registry.terraform.io/hashicorp/google v6.9.0
  • provider registry.terraform.io/hashicorp/google-beta v6.9.0

Affected Resource(s)

google_redis_cluster

Terraform Configuration

main.tf:
resource "google_redis_cluster" "redis_cluster" {
  project                     = var.project
  name                        = var.name
  shard_count                 = var.shard_count
  region                      = var.region
  replica_count               = var.replica_count
  transit_encryption_mode     = var.transit_encryption_mode
  authorization_mode          = var.authorization_mode
  node_type                   = var.node_type
  deletion_protection_enabled = var.deletion_protection_enabled
  redis_configs               = var.redis_configs

  dynamic "psc_configs" {
    for_each = var.network
    content {
      network = psc_configs.value
    }
  }

  dynamic "zone_distribution_config" {
    for_each = var.zone_distribution_config == null ? {} : { zone_distribution_config = var.zone_distribution_config }
    content {
      mode = zone_distribution_config.value.mode
      zone = zone_distribution_config.value.zone
    }
  }

  dynamic "maintenance_policy" {
    for_each = var.maintenance_policy == null ? {} : { maintenance_policy = var.maintenance_policy }
    content {
      dynamic "weekly_maintenance_window" {
        for_each = maintenance_policy.value.weekly_maintenance_window == null ? {} : { weekly_maintenance_window = maintenance_policy.value.weekly_maintenance_window }
        content {
          day = weekly_maintenance_window.value.day
          dynamic "start_time" {
            for_each = weekly_maintenance_window.value.start_time == null ? {} : { start_time = weekly_maintenance_window.value.start_time }
            content {
              hours   = start_time.value.hours
              minutes = start_time.value.minutes
              seconds = start_time.value.seconds
              nanos   = start_time.value.nanos
            }
          }
        }
      }
    }
  }

  timeouts {
    create = "60m"
  }

}

variable.tf:
variable "maintenance_policy" {
  description = "Maintenance policy for a cluster"
  type = object({
    weekly_maintenance_window = optional(object({
      day = string
      start_time = object({
        hours   = optional(number)
        minutes = optional(number)
        seconds = optional(number)
        nanos   = optional(number)
      })
    }))
  })
  default = null
}


Runner file:
redis-cluster.tf:
module "redis_cluster" {
  source                      = "../Module"
  for_each                    = { for rediscluster in var.redis_cluster_config : rediscluster.name => rediscluster if rediscluster.name != null }
  name                        = each.key
  project                     = each.value.project
  region                      = each.value.region
  shard_count                 = each.value.shard_count
  replica_count               = each.value.replica_count
  transit_encryption_mode     = each.value.transit_encryption_mode
  authorization_mode          = each.value.authorization_mode
  network                     = each.value.network
  node_type                   = each.value.node_type
  deletion_protection_enabled = each.value.deletion_protection_enabled
  redis_configs               = each.value.redis_configs
  zone_distribution_config    = each.value.zone_distribution_config
  maintenance_policy          = each.value.maintenance_policy
}

variables.tf:
variable "redis_cluster_config" {
  type = list(object({
    name                    = string
    project                 = string
    region                  = string
    shard_count             = number
    replica_count           = number
    transit_encryption_mode = string
    authorization_mode      = string
    node_type               = optional(string)
    network                 = list(string)
    deletion_protection_enabled = optional(bool)
    redis_configs            = optional(map(string),{})
    zone_distribution_config = optional(object({
      mode = optional(string)
      zone = optional(string)
    }))
    maintenance_policy   = optional(object({
      weekly_maintenance_window = optional(object({
        day = string
        start_time = object({
          hours   = optional(number)
          minutes = optional(number)
          seconds = optional(number)
          nanos   = optional(number)
        })
      }))
    }))
  }))
  default = [{
    name                    = null
    project                 = null
    region                  = null
    node_type               = null
    shard_count             = 0
    replica_count           = 0
    transit_encryption_mode = null
    authorization_mode      = null
    network                 = []
    deletion_protection_enabled = true
    redis_configs      = {}
    zone_distribution_config = {
      mode = null
      zone = null
    }
    maintenance_policy = null
  }]
}



terraform.tfvars:
redis_cluster_config = [{
  name                    = "redis-poc-002"
  project                 = "****"
  region                  = "us-central1"
  shard_count             = 3
  replica_count           = 1
  transit_encryption_mode = "TRANSIT_ENCRYPTION_MODE_DISABLED"
  authorization_mode      = "AUTH_MODE_DISABLED"
  network                 = ["projects/*****/global/networks/composer-vpc"]
  node_type               = "REDIS_STANDARD_SMALL" #"REDIS_HIGHMEM_MEDIUM" #"REDIS_STANDARD_SMALL" 
  deletion_protection_enabled = false
  maintenance_policy = {
    weekly_maintenance_window = {
      day = "FRIDAY"
      start_time = {
        hours = 2
        minutes = 0
        seconds = 0
        nanos = 0
      }
    }
  }
  zone_distribution_config = { # forces replacement
    mode = "MULTI_ZONE"
  }
  }]





Debug Output

module.redis_cluster["redis-poc-002"].google_redis_cluster.redis_cluster: Modifying... [id=projects/prj-**-***-***-poc/locations/us-central1/clusters/redis-poc-002]
╷
│ Error: Error updating Cluster "projects/******/locations/us-central1/clusters/redis-poc-002": googleapi: Error 400: unsupported path in fieldMask: maintenance_policy. Allowed values are persistence_config, deletion_protection_enabled, maintenance_policy.weekly_maintenance_window, cross_cluster_replication_config, display_name, shard_count, replica_count, redis_configs, maintenance_window, maintenance_policy.deny_maintenance_periods, cluster_endpoints
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.BadRequest",
│     "fieldViolations": [
│       {
│         "field": "maintenance_policy"
│       }
│     ]
│   }
│ ]
│
│   with module.redis_cluster["redis-poc-002"].google_redis_cluster.redis_cluster,
│   on ..\Module\main.tf line 1, in resource "google_redis_cluster" "redis_cluster":
│    1: resource "google_redis_cluster" "redis_cluster" {
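For anyone scripting around this failure: the Details block above is a standard google.rpc.BadRequest payload, so the offending field can be extracted mechanically. A minimal sketch (plain Python, assuming only the JSON shown in the error output above):

```python
import json

# The "Details" array from the 400 response, as printed in the error above.
raw_details = """
[
  {
    "@type": "type.googleapis.com/google.rpc.BadRequest",
    "fieldViolations": [
      {"field": "maintenance_policy"}
    ]
  }
]
"""

details = json.loads(raw_details)

# Collect every violating field from any google.rpc.BadRequest entry.
violations = [v["field"]
              for d in details
              if d.get("@type", "").endswith("google.rpc.BadRequest")
              for v in d.get("fieldViolations", [])]

print(violations)  # ['maintenance_policy']
```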


Expected Behavior

module.redis_cluster["redis-poc-002"].google_redis_cluster.redis_cluster will be updated in-place

  ~ resource "google_redis_cluster" "redis_cluster" {
        id   = "projects/******/locations/us-central1/clusters/redis-poc-002"
        name = "redis-poc-002"
        # (18 unchanged attributes hidden)

  ~ maintenance_policy {
        # (2 unchanged attributes hidden)

      ~ weekly_maintenance_window {
            # (2 unchanged attributes hidden)

          ~ start_time {
              ~ hours   = 2 -> 3
                # (3 unchanged attributes hidden)
            }
        }
    }

    # (3 unchanged blocks hidden)
}

Plan: 0 to add, 1 to change, 0 to destroy.

The hour in the maintenance policy's start_time should have changed from 2 to 3.
The same error occurs when changing the day argument in weekly_maintenance_window from FRIDAY to THURSDAY.

Actual Behavior

No change happened; instead I got this error:
module.redis_cluster["redis-poc-002"].google_redis_cluster.redis_cluster: Modifying... [id=projects/prj----poc/locations/us-central1/clusters/redis-poc-002]

│ Error: Error updating Cluster "projects/****/locations/us-central1/clusters/redis-poc-002": googleapi: Error 400: unsupported path in fieldMask: maintenance_policy. Allowed values are persistence_config, deletion_protection_enabled, maintenance_policy.weekly_maintenance_window, cross_cluster_replication_config, display_name, shard_count, replica_count, redis_configs, maintenance_window, maintenance_policy.deny_maintenance_periods, cluster_endpoints
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.BadRequest",
│     "fieldViolations": [
│       {
│         "field": "maintenance_policy"
│       }
│     ]
│   }
│ ]

│   with module.redis_cluster["redis-poc-002"].google_redis_cluster.redis_cluster,
│   on ..\Module\main.tf line 1, in resource "google_redis_cluster" "redis_cluster":
│    1: resource "google_redis_cluster" "redis_cluster" {

Steps to reproduce

  1. Create a resource with the module and runner file.
  2. Edit the maintenance policy argument values.
  3. terraform apply

Important Factoids

I am able to attach the maintenance policy when creating the Redis cluster, but once it's created I can't add to or edit the existing maintenance policy.

References

#20101

b/380246854

@jagada1010 jagada1010 added the bug label Nov 5, 2024
@github-actions github-actions bot added forward/review In review; remove label to forward service/redis-cluster labels Nov 5, 2024
@ggtisc ggtisc self-assigned this Nov 11, 2024
@ggtisc
Collaborator

ggtisc commented Nov 11, 2024

Hi @jagada1010!

According to the error, the problem is that the maintenance_policy attribute of the google_redis_cluster resource is receiving an invalid value. Per the API documentation this argument expects an object, but it is being given null. In Terraform, JSON, TypeScript, and JavaScript this may be allowed depending on the internal specification, but the correct approach is to pass an empty object, like this example:

maintenance_policy = {}

Could you try this configuration again instead of using maintenance_policy = null?

@jagada1010
Author

Hi @ggtisc.

Thank you for your response, but I am still getting the same error.

@ggtisc
Collaborator

ggtisc commented Nov 15, 2024

This seems more like a troubleshooting issue than a bug. I replaced the variables you shared with their literal values, simplifying the code, and no errors occurred. The code I used is below:

resource "google_compute_network" "cn_20187" {
  name = "cn-20187"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnetwork_20187" {
  name          = "subnetwork-20187"
  ip_cidr_range = "10.0.0.248/29"
  region        = "us-central1"
  network       = google_compute_network.cn_20187.id
}

resource "google_network_connectivity_service_connection_policy" "network_connectivity_service_connection_policy_20187" {
  name = "network-connectivity-service-connection-policy-20187"
  location = "us-central1"
  service_class = "gcp-memorystore-redis"
  description   = "something"
  network = google_compute_network.cn_20187.id

  psc_config {
    subnetworks = [google_compute_subnetwork.subnetwork_20187.id]
  }
}

resource "google_redis_cluster" "redis_cluster_20187" {
  project                     = "my-project"
  name                        = "redis-cluster-20187"
  shard_count                 = 3
  region                      = "us-central1"
  replica_count               = 1
  transit_encryption_mode     = "TRANSIT_ENCRYPTION_MODE_DISABLED"
  authorization_mode          = "AUTH_MODE_DISABLED"
  node_type                   = "REDIS_STANDARD_SMALL"
  deletion_protection_enabled = false
  redis_configs               = {}

  psc_configs {
    network = google_compute_network.cn_20187.id
  }

  zone_distribution_config {
    mode = "MULTI_ZONE"
    zone = null
  }

  maintenance_policy {
    weekly_maintenance_window {
      day = "FRIDAY"
      
      start_time {
        hours   = 2
        minutes = 0
        seconds = 0
        nanos   = 0
      }
    }
  }

  timeouts {
    create = "60m"
  }
}

You could try again, substituting the variables with their literal values until you find the value(s) causing this error. I also suggest reviewing the documentation to confirm that each value meets the requirements of both the Terraform registry and the API.

@rickard-von-essen

This is a real bug.

The problem is that this MMv1 template generates the fieldMask code but uses only the top-level property that changed, i.e. maintenance_policy instead of maintenance_policy.weekly_maintenance_window, maintenance_window, or maintenance_policy.deny_maintenance_periods.
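If that diagnosis is right, the fix would be for the generated update code to expand an unsupported top-level path into the sub-paths the API accepts. A minimal sketch of that expansion (illustrative Python, not the provider's actual MMv1-generated Go code; the allowed-path list is taken verbatim from the 400 error above):

```python
# Allowed fieldMask paths, copied from the API's 400 error message.
ALLOWED_PATHS = {
    "persistence_config", "deletion_protection_enabled",
    "maintenance_policy.weekly_maintenance_window",
    "cross_cluster_replication_config", "display_name", "shard_count",
    "replica_count", "redis_configs", "maintenance_window",
    "maintenance_policy.deny_maintenance_periods", "cluster_endpoints",
}

def expand_field_mask(changed_fields):
    """Replace unsupported top-level paths with their allowed sub-paths."""
    mask = []
    for field in changed_fields:
        if field in ALLOWED_PATHS:
            mask.append(field)
        else:
            # e.g. "maintenance_policy" -> its two allowed sub-paths
            mask.extend(sorted(p for p in ALLOWED_PATHS
                               if p.startswith(field + ".")))
    return ",".join(mask)

print(expand_field_mask(["maintenance_policy"]))
# maintenance_policy.deny_maintenance_periods,maintenance_policy.weekly_maintenance_window
```

With this shape of fix, a change to the nested weekly_maintenance_window would be sent as maintenance_policy.weekly_maintenance_window in the updateMask, which the error message lists as an allowed value.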

@ggtisc
Collaborator

ggtisc commented Nov 21, 2024

After several attempts I can't replicate this issue. I'm forwarding it for a deeper review to verify whether it is an intermittent bug.

@ggtisc ggtisc removed the forward/review In review; remove label to forward label Nov 21, 2024
@ggtisc ggtisc removed their assignment Nov 21, 2024