Error Codes
The script running `eksctl` failed to create the EKS cluster. There are many reasons why `eksctl` might fail, and the exact cause is likely to be found in the Octopus task verbose logs.
Common issues relate to AWS permissions. The `eksctl` tool lists the minimum permissions required to deploy a cluster in its documentation, so ensure the account associated with the AWS access key entered in the App Builder wizard has these permissions.
A sample IAM policy can be found here.
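To confirm which IAM identity the access key resolves to, and which policies it carries, you can query AWS directly. This is a minimal sketch using the AWS CLI; the user name shown is only a placeholder:

```bash
# Confirm which IAM identity the configured access key resolves to.
aws sts get-caller-identity

# List the managed policies attached to that user (replace the
# placeholder user name with the one returned above).
aws iam list-attached-user-policies --user-name app-builder-user
```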
You may also find that certain regions don't have capacity to create a new cluster. Errors like `Cannot create cluster 'app-builder-mcasperson-development' because us-east-1e, the targeted availability zone, does not currently have sufficient capacity to support the cluster` can be resolved by creating the cluster in a different region.
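If you are experimenting outside the App Builder scripts, `eksctl` also lets you pin the cluster to specific availability zones rather than changing region. A hedged sketch, reusing the cluster name from the error above:

```bash
# Target specific availability zones (or change --region entirely)
# to avoid the capacity error reported for us-east-1e.
eksctl create cluster \
  --name app-builder-mcasperson-development \
  --region us-east-1 \
  --zones us-east-1a,us-east-1b
```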
This error can also be displayed because a half-complete CloudFormation stack already exists. You may need to manually delete any existing CloudFormation stacks and try again.
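To find and remove a leftover stack from the command line, something like the following can help. `eksctl` conventionally names its cluster stack `eksctl-<cluster name>-cluster`, so the stack name below is an assumption based on that convention:

```bash
# Find stacks stuck in a failed or rolled-back state.
aws cloudformation list-stacks \
  --stack-status-filter CREATE_FAILED ROLLBACK_COMPLETE

# Delete the leftover stack before retrying (the stack name below
# follows the eksctl-<cluster name>-cluster convention).
aws cloudformation delete-stack \
  --stack-name eksctl-app-builder-mcasperson-development-cluster
```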
Once the AWS account has been updated with the correct permissions, manually rerun the Octopus deployment project.
The GitHub Actions workflow failed to create an S3 bucket to hold the Terraform state files. This is likely due to AWS IAM permissions errors. See the AWS documentation for more details on the permissions required to create an S3 bucket.
A sample IAM policy can be found here.
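One quick way to verify the credentials before rerunning the workflow is to create, and then remove, a throwaway bucket with the AWS CLI. The bucket name below is only a placeholder, since bucket names are globally unique:

```bash
# Test that the credentials can create a bucket.
aws s3api create-bucket \
  --bucket app-builder-terraform-state-test \
  --region us-east-1

# Remove the test bucket once the check passes.
aws s3api delete-bucket --bucket app-builder-terraform-state-test
```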
Once the AWS account has been updated with the correct permissions, manually rerun the GitHub Actions workflow.
The GitHub Actions workflow failed to create an ECR repository. This is likely due to AWS IAM permissions errors. See the AWS documentation for more information on creating an ECR repository.
A sample IAM policy can be found here.
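As with the S3 bucket, you can verify the credentials manually before rerunning the workflow. The repository name below is only a placeholder:

```bash
# Test that the credentials can create an ECR repository.
aws ecr create-repository --repository-name app-builder-test

# Remove the empty test repository once the check passes.
aws ecr delete-repository --repository-name app-builder-test
```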
Once the AWS account has been updated with the correct permissions, manually rerun the GitHub Actions workflow.
The GitHub Actions workflow failed to create an Octopus space. This is likely due to Octopus permissions errors. Ensure the account associated with the API key entered in the App Builder wizard has the CreateSpace permission.
If you see the error `octopus deploy api returned an error on endpoint ***/api`, ensure the API key is valid and has not expired. To update the API key, generate a new one and copy it into the `OCTOPUS_APIKEY` GitHub Actions secret.
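A sketch of both steps, assuming a placeholder Octopus instance URL and key, and using the GitHub CLI to update the secret:

```bash
# Check the key is accepted by the Octopus API (replace the instance
# URL and key with your own; --fail surfaces HTTP errors as exit codes).
curl --fail -H "X-Octopus-ApiKey: API-XXXXXXXXXXXXXXXX" \
  https://yourinstance.octopus.app/api

# Save the new key into the GitHub Actions secret.
gh secret set OCTOPUS_APIKEY --body "API-XXXXXXXXXXXXXXXX"
```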
Once the Octopus account has been updated with the correct permissions, or a new key has been saved into GitHub Actions, manually rerun the GitHub Actions workflow.
The GitHub Actions workflow failed to populate an Octopus space. This may be an issue with the Octopus Terraform provider.
In order to deploy the ECS service, we must know the subnets and security groups associated with the ECS cluster created with `ecs-cli` using the instructions at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html.
This error indicates that one or more of those resources could not be found. Possible causes are:
- The resources were deleted.
- The AWS account does not have the correct permissions to list subnets, VPCs, or security groups.
- The tags that were expected to be found on these resources (specifically the `aws:cloudformation:stack-name` tag) were deleted. The queries after this list show how to check for them.
- The CloudFormation stack created by `ecs-cli` was deleted, but did not clean up the ECS cluster. This leaves the infrastructure in an inconsistent state where the cluster exists but the VPCs or subnets do not.
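To check which of these causes applies, you can query for the tagged resources directly. `ecs-cli` provisions its networking through a CloudFormation stack conventionally named `amazon-ecs-cli-setup-<cluster name>`, so the stack name below is an assumption; substitute your own:

```bash
# The stack name follows the amazon-ecs-cli-setup-<cluster name>
# convention; substitute your own cluster name.
STACK=amazon-ecs-cli-setup-app-builder-cluster

# If either query returns an empty list, the resources or their
# tags are missing.
aws ec2 describe-subnets \
  --filters "Name=tag:aws:cloudformation:stack-name,Values=$STACK"
aws ec2 describe-security-groups \
  --filters "Name=tag:aws:cloudformation:stack-name,Values=$STACK"
```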
You may try deleting the ECS cluster with the `ecs-cli down` command and rebuilding the cluster by rerunning the `ECS Cluster` project.
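For example, assuming a cluster configuration named `app-builder-config`:

```bash
# Tear down the cluster and its CloudFormation stack; --force skips
# the confirmation prompt.
ecs-cli down --force --cluster-config app-builder-config
```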
This error indicates that the ECS cluster was not successfully created. You can find more details in the Octopus verbose logs.
The likely cause is a permissions issue. Ensure the user associated with the AWS access key has permission to create a new cluster with `ecs-cli`. The required permissions are documented here.
The ECS cluster is created via CloudFormation, so you may be able to find more information about why the cluster failed in the CloudFormation events. For example, the events may show that the cluster creation failed due to insufficient permissions.
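To pull the failure events from the command line, a query along these lines can help; the stack name again assumes the `amazon-ecs-cli-setup-<cluster name>` convention:

```bash
# Show only the events where a resource failed to create.
aws cloudformation describe-stack-events \
  --stack-name amazon-ecs-cli-setup-app-builder-cluster \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED']"
```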