
Terraform loses disk in state after resize #557

Open

sorinpad opened this issue Jun 26, 2024 · 5 comments

@sorinpad

Description

Resizing a VM's disks results in Terraform losing (removing) the last disk from the state file; from Terraform's perspective, the change itself succeeds. The disk is not destroyed, it is simply no longer tracked in Terraform's state, so subsequent plan operations try to add it again.
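
A quick way to confirm the disk has dropped out of state (the resource address matches the configuration below):

  # list the resource as recorded in state; after the resize,
  # only two of the three disk blocks appear
  terraform state show opennebula_virtual_machine.vm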

Terraform and Provider version

Terraform v1.6.6
on linux_amd64

OpenNebula provider v1.4.0 (see the required_providers block below)

Affected resources and data sources

opennebula_virtual_machine

Terraform configuration

resource "opennebula_virtual_machine" "vm" {
  name        = "testvm"
  cpu         = 1
  vcpu        = 1
  memory      = 1 * 1024
  group       = "oneadmin"
  permissions = "660"

  disk {
    image_id = 146
    size     = 30 * 1024
    target   = "vda"
  }

  graphics {
    listen = "0.0.0.0"
    type   = "VNC"
  }

  os {
    arch = "x86_64"
    boot = "disk0"
  }
  disk {
    image_id = 0
    size     = 20 * 1024
    target   = "vdb"
  }

  disk {
    image_id = 0
    size     = 20 * 1024
    target   = "vdc"
  }
  on_disk_change = "SWAP"

  # specify cluster (required) and host (optional)
  sched_requirements = "CLUSTER_ID=\"100\""

  # specify datastore
  sched_ds_requirements = "ID=\"106\""
  # opennebula_virtual_machine.vm timeouts
  timeouts {
    create = "5m"
    update = "5m"
    delete = "5m"
  }
}

terraform {
  required_providers {
    opennebula = {
      source  = "OpenNebula/opennebula"
      version = "1.4.0"
    }
  }
}

Expected behavior

Both data disks should be resized successfully, as the plan output indicates:

  # opennebula_virtual_machine.vm will be updated in-place
  ~ resource "opennebula_virtual_machine" "vm" {
        id                     = "289"
        name                   = "testvm"
        # (24 unchanged attributes hidden)

      ~ disk {
          ~ size                     = 10240 -> 20480
            # (8 unchanged attributes hidden)
        }
      ~ disk {
          ~ size                     = 10240 -> 20480
            # (8 unchanged attributes hidden)
        }

        # (4 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Actual behavior

Only the first data disk is resized; the second data disk is lost from state, as seen on a subsequent terraform apply:

  # opennebula_virtual_machine.vm will be updated in-place
  ~ resource "opennebula_virtual_machine" "vm" {
        id                     = "289"
        name                   = "testvm"
        # (24 unchanged attributes hidden)

      + disk {
          + image_id = 0
          + size     = 20480
          + target   = "vdc"
        }

        # (5 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
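
A possible recovery path, assuming the provider's Read function enumerates all disks actually attached to the VM (an untested sketch, not a confirmed fix): refreshing state from the real VM may re-adopt the lost disk instead of attaching a brand-new one.

  # re-read the real VM into state without proposing changes
  terraform apply -refresh-only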

Steps to Reproduce

  • terraform apply
  • Update the .tf file with a new disk size (see the sketch below)
  • terraform apply
  • Observe that the first data disk is resized while the second is not
  • terraform apply
  • Observe that Terraform wants to add the second data disk back as a new disk
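
Based on the 10240 -> 20480 change in the plan output above, the size edit presumably looked like this for each data disk (the 10 * 1024 starting value is inferred from the plan, not taken from the original .tf):

    disk {
      image_id = 0
  -   size     = 10 * 1024
  +   size     = 20 * 1024
      target   = "vdb"
    }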

Debug output

No response

Panic output

No response

Important factoids

No response

References

No response


This issue is stale because it has been open for 30 days with no activity and it does not have the 'status: confirmed' label or a milestone. Remove the 'status: stale' label or comment, or this will be closed in 5 days.

@sorinpad
Author

I can't remove the stale label, so I'll comment instead; sorry for 'bumping', but this seems like a real problem that may just have been missed 🙏


This issue is stale because it has been open for 30 days with no activity and it does not have the 'status: confirmed' label or a milestone. Remove the 'status: stale' label or comment, or this will be closed in 5 days.

@sorinpad
Author

🙈 keeping it alive 🙏


This issue is stale because it has been open for 30 days with no activity and it does not have the 'status: confirmed' label or a milestone. Remove the 'status: stale' label or comment, or this will be closed in 5 days.
