
Clarification on sync_with_catalog behavior in provider v0.57.0 #761

Open
coterv opened this issue Aug 2, 2024 · 3 comments

Comments

@coterv

coterv commented Aug 2, 2024

Hi,

Regarding provider version v0.57.0: when a property is added, modified, or removed from a Property Set in the Global Catalog, should the sync_with_catalog attribute cause that change to be picked up during the next terraform apply?

From what I've observed, after adding, modifying, or removing a property from an existing Property Set in the Global Catalog, the subsequent terraform apply updates the Property Set in the Global Catalog but does not reimport the updated version into the Blueprint. It is only upon running a second terraform apply that the Property Set is re-imported.

Is this the expected behavior?
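
For reference, a minimal sketch of the kind of configuration in question (resource types are the provider's; all names and values here are hypothetical):

# Property set in the Global Catalog
resource "apstra_property_set" "example" {
  name = "example_ps"
  data = jsonencode({ ntp_server = "10.0.0.1" }) # hypothetical contents
}

# The same property set imported into a Blueprint
resource "apstra_datacenter_property_set" "example" {
  blueprint_id      = apstra_datacenter_blueprint.example.id # hypothetical blueprint
  id                = apstra_property_set.example.id
  sync_with_catalog = true
}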

Additionally, I have verified in the terraform plan that the Global Catalog apstra_property_set resource is evaluated before the Blueprint apstra_datacenter_property_set resource. To ensure that all actions on the apstra_property_set resource are completed before performing actions on the apstra_datacenter_property_set resource, I have included a depends_on meta-argument in the apstra_datacenter_property_set resource, but I am still seeing the same results.

Thanks in advance.

@chrismarget-j
Collaborator

chrismarget-j commented Aug 2, 2024

I think what you're experiencing is a side effect of when Terraform generates the plan vs. when it actually changes things in the global catalog.

If the apstra_datacenter_property_set resource matches the currently-deployed apstra_property_set (global catalog) resource when Terraform creates a plan, it will not calculate a diff because they're properly synced at the moment the plan is generated.

Only after the plan is applied (the global catalog item has been changed) will terraform recognize a deficiency in the apstra_datacenter_property_set.

This is the same sort of issue which causes a blueprint commit to lag blueprint changes: At the time the plan was generated, the blueprint was up-to-date!

You can force replacement with lifecycle { replace_triggered_by = [<something related to the global catalog item>] }, but that might be disruptive to the fabric.
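
For example, something like this (reusing the hypothetical resource names from the sketch in the question above):

resource "apstra_datacenter_property_set" "example" {
  blueprint_id      = apstra_datacenter_blueprint.example.id
  id                = apstra_property_set.example.id
  sync_with_catalog = true

  lifecycle {
    # Any change to the catalog item's data forces this blueprint-side
    # resource to be destroyed and re-imported.
    replace_triggered_by = [apstra_property_set.example.data]
  }
}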

I don't know of an update_triggered_by meta-argument, but it seems like that would be ideal here.

The depends_on meta-argument definitely won't help: It impacts creation order only (which is probably covered by passing the catalog item ID anyway).

@chrismarget-j
Collaborator

This is kind of interesting. Maybe update_triggered_by is a real feature?

@coterv
Author

coterv commented Sep 17, 2024

Hi Chris,

Thanks for your explanation. It's clear that, as you pointed out, this behavior is intrinsic to Terraform: at the time the plan is generated, the apstra_datacenter_property_set resource still matches the currently deployed apstra_property_set.

However, this creates issues when users need to perform the following actions in a single terraform apply:

  • Create a new property in the global catalog as part of an existing property set (or modify the value of an existing property).
  • Create a new configlet in the global catalog that invokes the new property (or modify an existing configlet to reference the new property).
  • Re-import the property set into the blueprint to include the new property (or updated property value).
  • Import the new configlet into the blueprint, which invokes the new property (or re-import an existing configlet that now references the new property).

This is a common scenario for our customers, so we can't ask them to keep splitting these actions across multiple terraform apply runs.
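
To make the chain concrete, here is a sketch of those four steps in a single configuration (the apstra_configlet and apstra_datacenter_configlet attribute names are assumptions, not verified against the provider schema, and all names/values are invented):

# 1. Property set in the global catalog
resource "apstra_property_set" "ps" {
  name = "ps1"
  data = jsonencode({ dns_server = "10.0.0.53" }) # hypothetical property
}

# 2. Configlet in the global catalog referencing the property
#    (generator attribute names are assumed)
resource "apstra_configlet" "cl" {
  name = "cl1"
  generators = [{
    config_style  = "junos"
    section       = "top_level_hierarchical"
    template_text = "system { name-server { {{dns_server}}; } }"
  }]
}

# 3. Re-import the property set into the blueprint
resource "apstra_datacenter_property_set" "ps" {
  blueprint_id      = apstra_datacenter_blueprint.example.id # hypothetical blueprint
  id                = apstra_property_set.ps.id
  sync_with_catalog = true
}

# 4. Import the configlet into the blueprint
resource "apstra_datacenter_configlet" "cl" {
  blueprint_id         = apstra_datacenter_blueprint.example.id
  catalog_configlet_id = apstra_configlet.cl.id
  condition            = "role in [\"leaf\"]" # hypothetical condition
}

Even with this chain declared, steps 3 and 4 still hit the plan-time problem described above unless something like replace_triggered_by is added.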

We looked into the update_triggered_by meta-argument you suggested, but it doesn't seem to exist in the Terraform version we're using (we couldn’t find any reference to it in the Terraform documentation either).

We ended up solving the issue by using replace_triggered_by and splitting the apstra_datacenter_property_set resource into two distinct resources:

  • same_project_property_sets → to import property sets created within the same Terraform project.
  • other_project_property_sets → to import property sets created in a separate parent Terraform project.

The key difference is that the replace_triggered_by clause exists only in the same_project_property_sets resource. This triggers a full property set replacement if its contents change in the global catalog for sets created within the same Terraform project.

For property sets created in other projects, this replacement isn't necessary, so the other_project_property_sets resource doesn't include that clause.

This approach allows us to manage both locally and remotely created property sets efficiently, each with the right conditions and lifecycle management.

Thanks again for your input!

# ----- Local variables // Module: blueprints

locals {
  
  # Mapping of each property set name with its corresponding ID
  # for those property sets created within the local project
  property_set_ids_from_local_project = {
    for property_set_name, property_set_data in module.design.property_sets :
    property_set_name => property_set_data.id
  }

  # Mapping of each property set name with its corresponding ID
  # for those property sets received from the parent projects
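  # (the trailing "..." expands the list of per-project maps into
  # individual arguments to merge())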
  property_set_ids_from_parent_project = merge([
    for project, project_data in var.parent_project_outputs : {
      for property_set_name, property_set_data in try(project_data.outputs.property_sets, {}) :
      property_set_name => property_set_data.id
    }
  ]...)

  property_set_ids = merge(
    local.property_set_ids_from_local_project,
    local.property_set_ids_from_parent_project
  )

  # Combine all the "import_property_sets" lists from every <bp>.yml file, and append the corresponding bp name to each list member.
  flattened_property_sets = flatten([
    for blueprint_key, blueprint_data in local.blueprints : [
      for property_set in try(blueprint_data.import_property_sets, []) :
      merge(
        { bp = blueprint_key },
        property_set
      )
    ]
  ])

}
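
# For illustration, a hypothetical <bp>.yml entry and the flattened element it
# produces (all names invented):
#
#   import_property_sets:        # in blueprints/bp1.yml
#     - name: ps1
#       sync_with_catalog: true
#
#   resulting element of local.flattened_property_sets:
#     { bp = "bp1", name = "ps1", sync_with_catalog = true }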

# ----- Property set resources // Module: blueprints

resource "terraform_data" "update_property_sets_trigger" {
  # local.flattened_property_sets is a list, so we project it into a map with
  # a unique key per instance by combining the bp and name attributes.
  # (for_each must never be null, so no conditional fallback is used.)
  for_each = {
    for property_set in local.flattened_property_sets :
    "${property_set.bp}.${property_set.name}" => module.design.property_sets[property_set.name]
    if contains(keys(local.property_set_ids_from_local_project), property_set.name)
  }
  input = each.value.data
}

resource "apstra_datacenter_property_set" "same_project_property_sets" {
  # local.flattened_property_sets is a list, so we project it into a map with
  # a unique key per instance by combining the bp and name attributes.
  for_each = {
    for property_set in local.flattened_property_sets :
    "${property_set.bp}.${property_set.name}" => property_set
    if contains(keys(local.property_set_ids_from_local_project), property_set.name)
  }
  lifecycle {
    # Replace only if the contents of this specific property set change
    replace_triggered_by = [
      terraform_data.update_property_sets_trigger[each.key]
    ]
  }
  # blueprint_id      = apstra_datacenter_blueprint.blueprints[each.value.bp].id
  blueprint_id      = local.blueprint_ids[each.value.bp]
  id                = local.property_set_ids[each.value.name]
  sync_with_catalog = try(each.value.sync_with_catalog, false)
}

resource "apstra_datacenter_property_set" "other_project_property_sets" {
  # local.flattened_property_sets is a list, so we project it into a map with
  # a unique key per instance by combining the bp and name attributes.
  for_each = {
    for property_set in local.flattened_property_sets :
    "${property_set.bp}.${property_set.name}" => property_set
    if contains(keys(local.property_set_ids_from_parent_project), property_set.name)
  }
  # blueprint_id      = apstra_datacenter_blueprint.blueprints[each.value.bp].id
  blueprint_id      = local.blueprint_ids[each.value.bp]
  id                = local.property_set_ids[each.value.name]
  sync_with_catalog = try(each.value.sync_with_catalog, false)
}
