Error: Provider produced inconsistent final plan #15512

Closed
JoelClemence opened this issue Oct 6, 2020 · 7 comments
Labels
bug Addresses a defect in current functionality. service/batch Issues and PRs that pertain to the batch service.

Comments

@JoelClemence

Terraform version

v0.13.2

Affected Resource(s)

  • aws_batch_compute_environment

Expected Behavior

Terraform apply works with no issues.

Actual Behaviour

Having modified the related launch template, the apply failed with:

Error: Provider produced inconsistent final plan

When expanding the plan for
module.X.aws_batch_compute_environment.Y to
include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" changed the planned action from
CreateThenDelete to DeleteThenCreate.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Steps to reproduce

  1. Create a Batch compute environment
  2. Apply
  3. Add a launch template and assign it to the compute environment (deleting the necessary job queue relationships to avoid "Can't destroy aws_batch_compute_environment associated with an aws_batch_job_queue: Cannot delete, found existing JobQueue relationship" #13221), then apply again
  4. Amend Launch Template
  5. Apply
@ghost ghost added the service/batch Issues and PRs that pertain to the batch service. label Oct 6, 2020
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Oct 6, 2020
@anGie44 anGie44 added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Oct 6, 2020
@anGie44
Contributor

anGie44 commented Oct 6, 2020

Hi @JoelClemence 👋 Thank you for reporting this issue. Given the error log you've included, it's possible this behavior is outside of the Terraform AWS Provider's control, since Terraform core tends to handle create_before_destroy behaviors. However, to know for sure, do you mind sharing a little more of your configuration setup here (redacting anything as necessary)? Any trace/debug logs you can provide would also be greatly appreciated (for reference: https://www.terraform.io/docs/internals/debugging.html)!

@anGie44 anGie44 added the waiting-response Maintainers are waiting on response from community or contributor. label Oct 6, 2020
@JoelClemence
Author

Hi @anGie44

Thank you for getting back to me. This is kind of what I was thinking, but the log suggested it was a provider issue. The debug log is quite vast, so I will try to extract the useful parts from it: deploy-portions.log.

The terraform definition of the compute environment (and launch template):

data "aws_ssm_parameter" "latest-ecs-ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended"
}

resource "aws_launch_template" "launch-template" {
  name     = "oid-run-detection-batch-launch-template-${var.tags["environment"]}"
  image_id = jsondecode(data.aws_ssm_parameter.latest-ecs-ami.value)["image_id"]

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      delete_on_termination = true
      volume_size           = 100
      volume_type           = "gp2"
    }
  }

  tags = var.tags
}

resource "aws_batch_compute_environment" "run-detection" {
  compute_environment_name = "oid-run-detection-batch-compute-environment-${var.tags["environment"]}"

  compute_resources {
    allocation_strategy = "BEST_FIT"
    min_vcpus           = 0
    max_vcpus           = 256
    instance_type = [
      "m4.4xlarge",
      "m5.4xlarge",
      "m5a.4xlarge"
    ]
    instance_role = aws_iam_instance_profile.ecs-instance-role.arn
    security_group_ids = [
      var.oid_security_group_id
    ]
    subnets = var.oid_vpc_private_subnets
    type    = "EC2"

    launch_template {
      launch_template_id = aws_launch_template.launch-template.id
      version            = aws_launch_template.launch-template.latest_version
    }
  }

  service_role = aws_iam_role.aws-batch-service-role.arn
  type         = "MANAGED"
  depends_on   = [aws_iam_role_policy_attachment.aws-batch-service-role]
}

I have tried tweaking the lifecycle configuration (along the lines of the sketch below), but this does not work either.
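
A rough sketch of the kind of lifecycle tweak I mean (illustrative only; the resource is the run-detection environment from the configuration above):

resource "aws_batch_compute_environment" "run-detection" {
  # ... arguments as in the configuration above ...

  lifecycle {
    # Toggling create_before_destroy is the obvious knob to turn here.
    create_before_destroy = true
  }
}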

@ghost ghost removed the waiting-response Maintainers are waiting on response from community or contributor. label Oct 7, 2020
@microbioticajon

Hi All,

Sorry, I just noticed that this issue relates to Batch. I have come across a similar problem when making modifications to existing launch templates: see #15535.

If the template is modified, the compute_environment does not reflect the change.
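
(For comparison, the configuration earlier in this thread makes Terraform notice template edits by referencing latest_version; a minimal sketch with hypothetical resource names:)

resource "aws_batch_compute_environment" "example" {
  # ... other arguments omitted ...

  compute_resources {
    # ... other arguments omitted ...

    launch_template {
      launch_template_id = aws_launch_template.example.id
      # latest_version changes whenever the template is edited, so Terraform
      # sees the edit and plans a change to the compute environment.
      version = aws_launch_template.example.latest_version
    }
  }
}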

@JoelClemence
Author

I should have stated the AWS provider version originally: 3.6.0.

Having tested upgrading to the latest provider version and the latest version of Terraform, the issue changes to:

Error: : Object already exists
	status code: 409, request id: 

This suggests the lifecycle behaviour for replacing the Batch compute environment is not correct; however, no lifecycle block is specified in the Terraform configuration. Has the behaviour for replacing Batch compute environments changed recently? I haven't observed this with older versions of Terraform/the AWS provider.

@JoelClemence
Author

JoelClemence commented Oct 8, 2020

I should have stated the AWS provider version originally: 3.6.0.

Having tested upgrading to the latest provider version and the latest version of Terraform, the issue changes to:

Error: : Object already exists
	status code: 409, request id: 

This suggests the lifecycle behaviour for replacing the Batch compute environment is not correct; however, no lifecycle block is specified in the Terraform configuration. Has the behaviour for replacing Batch compute environments changed recently? I haven't observed this with older versions of Terraform/the AWS provider.

Following the logs, Terraform tries to create the new Batch compute environment while the old one is still in place (the delete should have occurred first).

I have also tried adding a lifecycle configuration to force delete-before-create:

  lifecycle {
    create_before_destroy = false
  }

But this does not work either.
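
For what it's worth, one pattern that might avoid the 409 name collision (untested here, and assuming the compute_environment_name_prefix argument behaves like other name_prefix arguments in the provider) is to let Terraform generate a unique name and create the replacement before destroying the old environment:

resource "aws_batch_compute_environment" "run-detection" {
  # A generated name avoids "Object already exists" while the old
  # environment still exists during replacement.
  compute_environment_name_prefix = "oid-run-detection-batch-"

  # ... compute_resources, service_role, type as before ...

  lifecycle {
    create_before_destroy = true
  }
}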

@bflad
Contributor

bflad commented Oct 27, 2020

Hi @JoelClemence 👋 Thank you for reporting this and sorry you ran into trouble here. With the given error message, I believe this was fixed upstream in Terraform CLI by hashicorp/terraform#26192, which was released in Terraform 0.13.3. If you are still running into this same error after upgrading Terraform CLI itself, please open an issue upstream in https://github.com/hashicorp/terraform/issues, since the "changed the planned action from CreateThenDelete to DeleteThenCreate" error cannot be fixed in Terraform providers (see the referenced pull request for more details).
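
For anyone landing here later, a minimal version constraint to pick up that fix might look like this (a sketch; adjust the AWS provider constraint to whatever you actually need):

terraform {
  # Terraform 0.13.3 contains the upstream fix referenced above.
  required_version = ">= 0.13.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.6.0"
    }
  }
}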

@bflad bflad closed this as completed Oct 27, 2020
@ghost

ghost commented Nov 26, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Nov 26, 2020