Terraform cloud builder
This builder can be used to run the terraform tool on Google Cloud Build. From the HashiCorp Terraform product page:
HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
To build this builder with the default version, run the following command in this directory.
$ gcloud builds submit --config=cloudbuild.yaml
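For reference, a cloudbuild.yaml for a builder image like this one typically just builds and pushes a Docker image from the Dockerfile in this directory. The following is only a hypothetical sketch, not the actual contents of the file here; the image name is a placeholder, and the real file may pin a specific Terraform version.

# Hypothetical sketch of a builder-image cloudbuild.yaml; see the real file in this directory.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/terraform', '.']
images:
- 'gcr.io/$PROJECT_ID/terraform'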
Terraform stores state information about the infrastructure it has provisioned. It uses this to plan out the delta between what your .tf files specify and what actually exists. This state can be stored in different ways by Terraform; it is configured via backends.
The default backend for Terraform is local, which stores state information on the local filesystem in the working directory (terraform.tfstate, with supporting data under ./.terraform). Most build platforms (including Cloud Build) do not persist the working directory between builds, so that state is lost after every run and Terraform no longer knows what it has already provisioned.
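Written out explicitly, that implicit default is roughly equivalent to the following backend block in your Terraform configuration (a sketch; the path shown is the local backend's default):

terraform {
  backend "local" {
    # Default path: state is kept in the working directory, which the build does not persist.
    path = "terraform.tfstate"
  }
}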
There are a few options for managing Terraform state across builds:
- In your build, initialize terraform and refresh the local state every time. This is really not a good idea: it will be slow, and it is not multi-run safe (if multiple builds kick off concurrently, you get race conditions and other nastiness). local_backend is an example of this approach.
- In your build, set up steps to manually fetch the state before running Terraform, then push it back up after Terraform is done. This removes the need to init and refresh on every build, but it does not address the concurrency issues.
- Configure Terraform to use a remote backend, such as Google Cloud Storage (GCS). This is probably what you want to do. You'll still need to create the GCS bucket, and you'll need to configure the backend in your .tf configurations (see the sketch after this list). Some backends (happily, including the GCS one) support locking of the remote state, which addresses the concurrency issue. gcs_backend is an example of this approach.
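As a minimal sketch of the GCS backend approach, a block like the following goes in your Terraform configuration. The bucket name and prefix are placeholders; create the bucket yourself before running terraform init.

terraform {
  backend "gcs" {
    # Placeholder values; point these at a bucket your builder service account can read and write.
    bucket = "my-terraform-state-bucket"
    prefix = "terraform/state"
  }
}

With this in place, terraform init stores the state in the bucket, and the backend's state locking keeps concurrent runs from clobbering each other.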
To use this builder, your builder service account will need IAM permissions sufficient for the operations you want to perform. Adding the 'Kubernetes Engine Service Agent' role is sufficient for running the examples. Refer to the Google Cloud Platform IAM integration page for more info.
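For example, assuming you are using the default Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com), a binding for that role ('Kubernetes Engine Service Agent' is roles/container.serviceAgent) can be added as follows, with the project ID and number replaced by your own:

$ gcloud projects add-iam-policy-binding my-project-id \
    --member=serviceAccount:123456789@cloudbuild.gserviceaccount.com \
    --role=roles/container.serviceAgent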
The article Managing GCP projects with terraform gives a good strategy for administering projects in GCP with Terraform. If you intend to go beyond the examples, strongly consider an approach that isolates service accounts by function. A service account that can do 'all the things' is risky.
This image can be run on any Docker host, without Cloud Build. Why would you want to do this? It lets you run Terraform locally, with no environment dependencies other than a Docker installation. You can use the local Cloud Build for this; but running the image directly is an option if you're curious or have unusual or advanced requirements (for example, you want to run Terraform as a build step on another platform like Travis or CircleCI and don't want to use Cloud Build).
You'll need to:
- Provide a service account key file
- Mount your project directory at '/workspace' when you run docker
docker run -it --rm -e GCLOUD_SERVICE_KEY=${GCLOUD_SERVICE_KEY} \
--mount type=bind,source=$PWD,target=/workspace \
--workdir="/workspace" \
ekgf/terraform <command>