ignoring destroy_cloud_config_vdi_after_boot doesn't work #338

Open · ghost opened this issue Jan 7, 2025 · 0 comments

Steps to reproduce:

  • deploy 2 VMs managed by the same state file (see module.tf and vm.tf below): vm1 with destroy_cloud_config_vdi_after_boot=true and vm2 with destroy_cloud_config_vdi_after_boot=false; both have power_state=running
  • manually shut down vm1, then add power_state and destroy_cloud_config_vdi_after_boot to the lifecycle ignore_changes list
  • rerun the plan

This produces an error:

│ Error: power_state must be `Running` when destroy_cloud_config_vdi_after_boot set to `true`
│
│   with module.vm["vm1"].xenorchestra_vm.vm,
│   on ../modules/xoa/vm/main.tf line 5, in resource "xenorchestra_vm" "vm":
│    5: resource "xenorchestra_vm" "vm" {
╵

Use case: multiple VMs are managed in the same state file and the user wants to keep some of them shut down by ignoring power_state and destroy_cloud_config_vdi_after_boot.

I can understand why power_state needs to be set to running on the first boot, but this shouldn't be checked on subsequent runs when destroy_cloud_config_vdi_after_boot is ignored.
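
A possible interim workaround (only a sketch, not tested against the provider; it assumes power_state can be set in the resource block with values like `Running`/`Halted`, and `var.vm_config.running` is a hypothetical per-VM flag that the module does not currently have) would be to drive both attributes from one flag instead of ignoring them:

# Sketch only: the relevant arguments inside resource "xenorchestra_vm" "vm" from module.tf below.
# var.vm_config.running is a hypothetical flag, not part of the current module inputs.
power_state                         = var.vm_config.running ? "Running" : "Halted"
destroy_cloud_config_vdi_after_boot = var.vm_config.running

lifecycle {
  ignore_changes = [
    cdrom,
    auto_poweron,
  ]
}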

module.tf

locals {
  gib = 1073741824
}

resource "xenorchestra_vm" "vm" {
    name_label                          = lower(var.vm_config.name)
    name_description                    = var.vm_config.description
    template                            = data.xenorchestra_template.template.id
    clone_type                          = var.vm_config.clone_type
    cpus                                = var.vm_config.cpus
    memory_max                          = var.vm_config.memory * local.gib
    affinity_host                       = data.xenorchestra_host.host.id
    auto_poweron                        = var.vm_config.auto_poweron
    hvm_boot_firmware                   = var.vm_config.firmware
    tags                                = var.vm_config.tags
    destroy_cloud_config_vdi_after_boot = false

    dynamic "disk" {
      for_each = var.vm_config.disks
      content {
        name_label       = lower("${ var.vm_config.name }-${ disk.value.name }")
        name_description = disk.value.description == "" ? disk.value.name : disk.value.description
        size             = disk.value.size * local.gib
        attached         = disk.value.attached
        sr_id            = data.xenorchestra_sr.sr[disk.key].id
      }
    }

    cdrom {
      id = data.xenorchestra_vdi.xen_tools.id
    }

    network {
      network_id       = data.xenorchestra_network.network.id
      expected_ip_cidr = var.vm_config.network.ipv4
    }

    cloud_config = templatefile("${path.module}/cloudinit/userdata.tftpl",
      {
        hostname        = var.vm_config.userdata.hostname == "" ? lower(var.vm_config.name) : lower(var.vm_config.userdata.hostname)
        user            = lower(var.vm_config.userdata.user)
        password_hash   = var.vm_config.userdata.password_hash 
        ssh_public_keys = var.vm_config.userdata.ssh_public_keys
      }
    )
    cloud_network_config = templatefile("${path.module}/cloudinit/networkdata.tftpl",
      {
        interface      = length(regexall("ubuntu|debian", var.vm_config.template)) > 0 ? "enX0" : "eth0"
        ipv4           = var.vm_config.network.ipv4
        gw4            = var.vm_config.network.gateway
        nameservers    = var.vm_config.network.nameservers
        search_domains = var.vm_config.network.search_domains
      }
    )

    timeouts {
      create = "20m"
    }
    lifecycle {
      ignore_changes = [
        cdrom,
        power_state,
        auto_poweron,
        destroy_cloud_config_vdi_after_boot,
      ]
    }
}
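
For reference, the module's vm_config input would be shaped roughly like this (reconstructed from the attribute references in module.tf and vm.tf; the exact types, optional fields, and defaults are assumptions):

# Reconstructed sketch of the module's variables.tf, not the actual file.
variable "vm_config" {
  type = object({
    name         = string
    description  = string
    template     = string
    pool         = string
    host         = string
    clone_type   = optional(string)   # not set in vm.tf above, so presumably optional with a default
    cpus         = number
    memory       = number             # GiB, multiplied by local.gib in module.tf
    auto_poweron = bool
    firmware     = string
    wait_for_ip  = bool
    tags         = list(string)
    disks = list(object({
      name        = string
      description = string
      size        = number            # GiB
      attached    = bool
    }))
    userdata = object({
      hostname        = optional(string, "")   # module.tf falls back to the VM name when empty
      user            = string
      password_hash   = string
      ssh_public_keys = list(string)
    })
    network = object({
      ipv4           = string
      gateway        = string
      nameservers    = list(string)
      search_domains = list(string)
    })
  })
}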

vm.tf

module "vm" {
  for_each     = var.vms
  source       = "../modules/xoa/vm"
  xoa_address  = var.xoa_address
  xoa_user     = var.xoa_user
  xoa_password = var.xoa_password
  vm_config = {
    name        = "hlb-${each.value.role}-${var.env}-${format("%02d", each.value.instance)}"
    description = each.value.description
    template    = each.value.template
    pool        = local.pool
    host        = local.host
    cpus        = each.value.cpus
    memory      = each.value.memory
    disks       = each.value.disks
    userdata = {
      user            = var.user
      password_hash   = var.password_hash
      ssh_public_keys = [var.ssh_public_key]
    }
    network = {
      ipv4           = each.value.ipv4
      gateway        = each.value.gateway
      nameservers    = length(each.value.nameservers) == 0 ? var.nameservers : each.value.nameservers
      search_domains = ["${var.env}.${var.search_domain}"]
    }
    auto_poweron = each.value.auto_poweron
    wait_for_ip  = true
    firmware     = "uefi"
    tags         =  concat(each.value.tags, [var.env, each.value.role], slice(split("-",each.value.template), 0, 3))
  }
}
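
For the reproduction steps at the top, var.vms would then contain two entries along these lines (all concrete values below are placeholders, not taken from the real environment; the object shape is inferred from vm.tf above):

# Illustrative placeholder values only.
vms = {
  vm1 = {   # the VM that is later shut down manually
    role         = "app"
    instance     = 1
    description  = "vm1"
    template     = "ubuntu-noble-cloudinit"
    cpus         = 2
    memory       = 4
    auto_poweron = false
    tags         = []
    nameservers  = []
    ipv4         = "192.0.2.11/24"
    gateway      = "192.0.2.1"
    disks = [
      { name = "root", description = "", size = 20, attached = true },
    ]
  }
  vm2 = {   # stays running
    role         = "app"
    instance     = 2
    description  = "vm2"
    template     = "ubuntu-noble-cloudinit"
    cpus         = 2
    memory       = 4
    auto_poweron = true
    tags         = []
    nameservers  = []
    ipv4         = "192.0.2.12/24"
    gateway      = "192.0.2.1"
    disks = [
      { name = "root", description = "", size = 20, attached = true },
    ]
  }
}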