A no-commitments Terraform/Tofu wrapper that provides build caching. It also makes working with workspaces a breeze.
- Run `terraform init`, `terraform providers lock`, `terraform plan`, `conftest` and `terraform apply` with a single command and with caching.
- Automatically inject init backend tfvars and plan tfvars based on config values.
- Automatically load tfvar files based on workspace name.
- Allow you to have multiple profiles in the config file so that you can have different configs for multiple workspaces. For example, one profile for dev and another for prod, or one profile for North America and another for China. Each profile uses a separate `TF_DATA_DIR`, which allows you to work with multiple profiles pointing to different backends under the same directory without conflicts.
- Runs pre-apply checks for you. If you define, for example, `conftest` checks, it will run them before trying to apply.
- Track the archs you want to use when running `terraform providers lock`.
- Allow you to work on different workspaces in different terminals. It automatically uses the `TF_WORKSPACE` environment variable rather than selecting the workspace.
- Allows you to define stacks of components and their dependencies and orchestrate their deployment (and destruction in the reverse order).
- Allows you to define retries for stack components that fail due to race conditions.
- TODO: Allow option to skip a component during destroy if the state shows no resources: use `terraform state list` and count the output lines.
- TODO: Collect component exit codes and set the overall return code based on whether there were any changes in the stack or not.
- TODO: Add post-apply tasks with the ability to run a different task on success or error. For example, to check for the last change in the failing component and notify the owner, or to run a post-apply test.
- TODO: Ensure logs for parallel component runs are printed serially. This might be nice to combine with a progress indicator in the dag library.
- TODO: Figure out how to invalidate the plan cache after apply when running in target mode.
- TODO: Recursive inspection of modified local sources of a local module within a local module.
- TODO: Run `terraform providers lock` after init. Add an `init --lock` option.
- TODO: Add an option to skip a component in a stack if its workspace doesn't exist.
- TODO: Add a progress indicator (x out of y); this can be written to the Argo Workflows progress file: https://argo-workflows.readthedocs.io/en/latest/progress/
- TODO: Automatically handle the case where Terraform is unable to write the state to the backend:

  ```
  │ Error: Failed to persist state to backend
  │
  │ The error shown above has prevented Terraform from writing the updated
  │ state to the configured backend. To allow for recovery, the state has been
  │ written to the file "errored.tfstate" in the current working directory.
  │
  │ Running "terraform apply" again at this point will create a forked state,
  │ making it harder to recover.
  │
  │ To retry writing this state, use the following command:
  │     terraform state push errored.tfstate
  ```
- Install using homebrew:

  ```bash
  brew tap DavidGamba/dgtools https://github.com/DavidGamba/dgtools
  brew install DavidGamba/dgtools/bt
  ```

  Note: Completion is set up automatically for bash.

  For `zsh` completions, an additional step is required; add the following to your `.zshrc`:

  ```bash
  export ZSHELL="true"
  source "$(brew --prefix)/share/zsh/site-functions/dgtools.bt.zsh"
  ```

  Upgrade with:

  ```bash
  brew update
  brew upgrade bt
  ```
- Install using go:

  Install the binary into your `~/go/bin`:

  ```bash
  go install github.com/DavidGamba/dgtools/bt@latest
  ```

  Then set up the completion.

  For bash:

  ```bash
  complete -o default -C bt bt
  ```

  For zsh:

  ```bash
  export ZSHELL="true"
  autoload -U +X compinit && compinit
  autoload -U +X bashcompinit && bashcompinit
  complete -o default -C bt bt
  ```
The config file must be saved in a file named `.bt.cue`.
It will be searched for from the current dir upwards.

Example:
```cue
package bt

config: {
	default_terraform_profile: "default"
	terraform_profile_env_var: "BT_TERRAFORM_PROFILE"
}

terraform_profile: {
	default: {
		binary_name: "terraform"
		init: {
			backend_config: ["backend.tfvars"]
		}
		plan: {
			var_file: ["~/auth.tfvars"]
		}
		workspaces: {
			enabled: true
			dir:     "envs"
		}
		pre_apply_checks: {
			enabled: true
			commands: [
				{name: "conftest", command: ["conftest", "test", "$TERRAFORM_JSON_PLAN"]},
			]
		}
		platforms: ["darwin_amd64", "darwin_arm64", "linux_amd64", "linux_arm64"]
	}
}
```
See the schema for extra details.
- (optional) Run `bt terraform init` to initialize your config.
- Run `bt terraform build` to generate a plan. If `init` wasn't run, it will run `init` once and cache the run so further calls won't run `init` again.
- Run `bt terraform build --lock` to ensure that `terraform providers lock` has run after `init` with the list of archs provided in the config file.
- Run `bt terraform build --ic` to generate a plan again even when it detects there are no file changes.
- Run `bt terraform build --show` to view the generated plan.
- Run `bt terraform build --apply` to apply the generated plan.
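Putting the above together, a typical loop looks like this (only commands already described above):

```bash
bt terraform build          # runs init once if needed, then plans (cached)
bt terraform build --show   # review the cached plan
bt terraform build --apply  # apply the cached plan
```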
After running `bt terraform init` it will save a `.tf.init` file.
After running `bt terraform build` it will save a `.tf.plan` or `.tf.plan-<workspace>` file.

It will check the timestamp of the `.tf.init` file, and if it is newer than the `.tf.plan` file, a new plan needs to be generated.
It will also compare the `.tf.plan` file against any file changes in the current dir or any of the module dirs to determine if a new plan needs to be generated.
If `pre_apply_checks` are enabled, it will run the specified checks, passing the rendered JSON plan to each command.
For example, `conftest` policy checks.
After running `terraform apply` it will save a `.tf.apply` or `.tf.apply-<workspace>` file.
It compares that file's timestamp against the `.tf.plan` timestamp to determine whether the plan has already been applied.
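This marker-file logic can be pictured with a short shell sketch (an approximation of the behaviour described above, not bt's actual implementation):

```bash
# -nt is bash's "newer than" file timestamp test
if [ .tf.init -nt .tf.plan ]; then
	echo "re-plan needed: init ran after the last plan"
elif [ .tf.apply -nt .tf.plan ]; then
	echo "skip apply: this plan was already applied"
fi
```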
Given the config settings for `backend_config` for init and `var_file` for plan, it will automatically add those files to the command.

For example, running `bt terraform init` with the example config file is the same as running:

```bash
terraform init -backend-config backend.tfvars
```

In the same way, running `bt terraform build` with the example config file is the same as running:

```bash
terraform plan -out .tf.plan -var-file ~/auth.tfvars
```

Finally, running `bt terraform build --apply` with the example config file is the same as running:

```bash
terraform apply -input .tf.plan
```
Setting workspaces to `enabled: true` in the config file will enable the workspace helpers.
The helpers assume any `.tfvars` or `.tfvars.json` file in the `dir` folder is a workspace.

If a workspace has been selected, bt will automatically add the `<dir>/<workspace>.tfvars` or `<dir>/<workspace>.tfvars.json` file to the command.
If a workspace hasn't been selected, passing the `--ws` option will select the workspace by exporting the `TF_WORKSPACE` environment variable and will add the corresponding `<dir>/<workspace>.tfvars` or `<dir>/<workspace>.tfvars.json` file to the command.
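With the example config's `dir: "envs"`, a layout like the following (hypothetical workspace names) defines three workspaces:

```bash
$ ls envs/
dev.tfvars  prod.tfvars  staging.tfvars
```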
For example, running `bt terraform build --ws=dev` with the example config file is the same as running:

```bash
export TF_WORKSPACE=dev
terraform plan -out .tf.plan -var-file ~/auth.tfvars -var-file envs/dev.tfvars
```

And then running `bt terraform build --ws=dev --apply`:

```bash
export TF_WORKSPACE=dev
terraform apply -input .tf.plan
```
Important: Because bt uses the `TF_WORKSPACE` environment variable rather than selecting the workspace, it is possible to work with multiple workspaces at the same time in different terminals.
When using `bt terraform workspace-select default`, bt will automatically delete the `.terraform/environment` file to ensure the `TF_WORKSPACE` environment variable can be used safely.
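The equivalent manual steps would roughly be (a sketch of the behaviour just described):

```bash
rm -f .terraform/environment   # forget any workspace chosen via `terraform workspace select`
export TF_WORKSPACE=default
```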
When using `bt terraform build`, pre-apply checks run automatically after a plan if they are enabled.

Pre-apply check commands get the following env vars exported:

- `CONFIG_ROOT`: The dir of the config file.
- `TERRAFORM_JSON_PLAN`: The path to the rendered JSON plan.
- `TERRAFORM_TXT_PLAN`: The path to the rendered txt plan.
- `TF_WORKSPACE`: The current workspace or "default".
- `BT_COMPONENT`: The current component name if running in stack mode, or the basename of the current directory.
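Any executable can be wired in as a check and read these variables. For example, a custom check script could inspect the JSON plan directly; this is a hypothetical check using `jq`, not something bundled with bt:

```bash
#!/usr/bin/env bash
# Hypothetical pre-apply check: reject plans that delete resources.
set -euo pipefail

# TERRAFORM_JSON_PLAN is exported by bt before the check runs.
deletes=$(jq '[.resource_changes[]? | select(.change.actions | index("delete"))] | length' "$TERRAFORM_JSON_PLAN")

if [ "$deletes" -gt 0 ]; then
	echo "plan would delete $deletes resource(s); refusing to continue" >&2
	exit 1
fi
```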
If pre-apply checks are enabled in the config file, they can be disabled for the current run using the `--no-checks` option.

To run only the checks, use `bt terraform checks`; combine it with the `--ws` option to run the checks against the last generated plan for the given workspace.
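For example:

```bash
bt terraform build --no-checks   # plan without running the checks
bt terraform checks --ws=dev     # re-run only the checks against dev's last plan
```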
Multiple terraform config profiles can be defined.
By default, the `default` profile is used.
The default profile can be overridden with `config.default_terraform_profile` in the config file.

To use a different profile, use the `--profile` option or export the `BT_TERRAFORM_PROFILE` environment variable.
The environment variable name itself can also be overridden so that an existing variable in the environment is reused.
For example, set `config.terraform_profile_env_var` to `AWS_PROFILE` and name your terraform profiles the same way you name your AWS profiles.

Each additional profile gets its own `TF_DATA_DIR`, and its terraform data is saved under `.terraform-<profile>/`.
The `config.default_terraform_profile` will still use the default `.terraform/` dir.
This allows you to work with multiple profiles pointing to different backends under the same workspace directory without conflicts.
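For example, assuming a hypothetical `prod` profile defined in the config:

```bash
bt terraform build --profile=prod   # terraform data goes to .terraform-prod/

# or for the whole shell session:
export BT_TERRAFORM_PROFILE=prod
bt terraform build
```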
Hashicorp recently introduced their solution for deploying stacks of resources.
A stack is a collection of components that need to be deployed together to form a logical unit.
Instead of having a massive state file that contains all resources, you can split them into multiple smaller components. This split provides numerous benefits that I won't get into here; however, these components require an orchestration layer to deploy them together and in the correct order.
bt provides a separate config file for defining stacks: `bt-stacks.cue`.
- The stack is composed of multiple different components.
- Each component can be deployed to a different workspace, but in general they should follow a consistent naming convention so that the workspace name can be auto-resolved from the stack name.
- A stack can have multiple instances of the same component, that is, multiple workspaces of one component.
- The stack definition allows for conditionally added components. Some regions or environments might not require certain components.
- The stack config file defines 2 different constructs. One is the component definition, where a component and its dependencies are defined. The other is the stack definition, where the workspaces that compose a given stack and their variables are defined.
- Because component dependencies are tracked, stack builds run in parallel when possible.
- Components can have variables defined in the stack config file; since these variables are passed after the workspace var files, they have higher precedence and allow for stack-specific overrides.
- Components can define retries for when they fail due to race conditions.

Example:
```cue
package bt_stacks

// Define the list of components
component: "networking": {}
component: "kubernetes": {
	depends_on: ["networking"]
}
component: "node_groups": {
	depends_on: ["kubernetes"]
}
component: "addons": {
	depends_on: ["kubernetes"]
}
component: "dns": {
	depends_on: ["kubernetes"]
	retries: 3
}
component: "dev-rbac": {
	path: "dev-rbac/terraform"
	depends_on: ["kubernetes", "addons"]
}

// Create component groupings with additional variable definitions
_standard_cluster: {
	"networking": component["networking"] & {
		variables: [
			{name: "subnet_size", value: "/28"},
		]
	}
	"kubernetes":  component["kubernetes"]
	"node_groups": component["node_groups"]
	"addons":      component["addons"]
	"dns": component["dns"] & {
		variables: [
			{name: "api_endpoint", value: "api.example.com"},
		]
	}
}

// Create a stack with a list of components
stack: "dev-us-west-2": {
	id: string
	components: [
		for k, v in _standard_cluster {
			[// switch
				if k == "networking" {
					v & {
						workspaces: [
							"\(id)-k8s",
						]
					}
				},
				if k == "node_groups" {
					v & {
						workspaces: [
							"\(id)a",
							"\(id)b",
							"\(id)c",
						]
					}
				},
				v & {
					workspaces: [id]
				},
			][0]
		},
		// Custom component that only applies to this stack
		component["dev-rbac"] & {
			workspaces: [id]
		},
	]
}

stack: "prod-us-west-2": {
	id: string
	components: [
		for k, v in _standard_cluster {
			[// switch
				if k == "networking" {
					v & {
						workspaces: [
							"\(id)-k8s",
						]
					}
				},
				if k == "node_groups" {
					v & {
						workspaces: [
							"\(id)a",
							"\(id)b",
							"\(id)c",
						]
					}
				},
				v & {
					workspaces: [id]
				},
			][0]
		},
	]
}
```
See the stack schema for extra details.
Run all plans in parallel:

```bash
bt stack build --id=dev-us-west-2
```

Run all plans in serial:

```bash
bt stack build --id=dev-us-west-2 --serial
```

Review/show the plan output for all components:

```bash
bt stack build --id=dev-us-west-2 --show --serial
```

Apply the changes:

```bash
bt stack build --id=dev-us-west-2 --apply
```

Destroy (pass both `--destroy` and `--reverse` to destroy in reverse order):

```bash
bt stack build --id=dev-us-west-2 --reverse --destroy
```

Apply the destroy:

```bash
bt stack build --id=dev-us-west-2 --reverse --destroy --apply
```