
Commit

GCP account variables in cluster_vars; prefer merge_vars over include_vars (#76)

* GCP account variables in cluster_vars; prefer merge_vars over include_vars.
+ Allow GCP account credentials to be defined as a variable (rather than as a file that needs to be retrieved); this simplifies Jenkins automation.
+ Default to preferring merge_vars rather than include_vars. Update EXAMPLE/* to illustrate usage. Fix merge_vars.py to allow directories.
+ Create deprecate_str.py, which allows the user to output a deprecation warning on the command line.
+ Fix the DNS dig regex so that a '10.' pattern no longer matches addresses beginning '100.' (illustrated below).
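
A minimal sketch of that regex pitfall (addresses illustrative; the actual pattern lives in the role's DNS checks):
```
# '.' is a regex wildcard and the match is unanchored, so '10.' also matches '100.64.0.1':
$ echo "100.64.0.1" | grep -c "10."
1
# Anchoring and escaping the dot restricts it to genuine 10.x.x.x addresses:
$ echo "100.64.0.1" | grep -c "^10\."
0
```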

* Move GCP and AWS metadata into cluster_vars. Simplify gcp json data

* gcp_credentials_file fix

* Fix to show failure on block rescue.
Fix _scheme_rmvm_rmdisk_only so that it does not assert on tidy.
Fixes for _scheme_rmvm_keepdisk_rollback to support canary properly.
Add 'deploy' and 'testsuite' Jenkinsfiles. Partially addresses #37.

Co-authored-by: Dougal Seeley <[email protected]>
dseeley and Dougal Seeley authored Jan 13, 2021
1 parent 3ce0832 commit 975ac7c
Showing 37 changed files with 924 additions and 571 deletions.
1 change: 1 addition & 0 deletions EXAMPLE/Pipfile
@@ -14,6 +14,7 @@ jmespath = "*"
dnspython = "*"
google-auth = "*"
google-api-python-client = "*"
+apache-libcloud = "*"

[dev-packages]

61 changes: 17 additions & 44 deletions EXAMPLE/README.md
@@ -11,60 +11,31 @@ Contributions are welcome and encouraged. Please see [CONTRIBUTING.md](https://
+ Python >= 2.7


## Usage
This example depends on the [clusterverse](https://github.com/sky-uk/clusterverse) role. It can be installed automatically using `ansible-galaxy`, or referenced as a git submodule. This example uses `ansible-galaxy`.

To import the [clusterverse](https://github.com/sky-uk/clusterverse) role into the current directory:
+ `ansible-galaxy install -r requirements.yml`
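
A minimal `requirements.yml` for this might look as follows (the `version` tag and role name are illustrative):
```
- src: https://github.com/sky-uk/clusterverse
  version: master          # or pin a release tag, e.g. v1.0.1
  name: clusterverse
```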


### Cluster Variables
One of the mandatory command-line variables is `clusterid`, which defines the name of the directory under `group_vars`, from which variable files will be imported.

#### group_vars/\<clusterid\>/cluster_vars.yml:
```
app_name: "nginx" # The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name.
app_class: "webserver" # The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn
cluster_vars:
region: ""
image: ""
...
<buildenv>:
hosttype_vars:
<hosttype>: {...}
...
```

Variables defined here override defaults in `roles/clusterverse/_dependencies/defaults/main.yml`, and can themselves be overridden by defining them on the command line, as shown below.
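
For example (paths and values illustrative), a role default can be overridden per-cluster, and again at invocation time; the command line always wins:
```
# roles/clusterverse/_dependencies/defaults/main.yml:  app_name: "test"    (role default)
# group_vars/<clusterid>/cluster_vars.yml:             app_name: "nginx"   (cluster override)
ansible-playbook cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 -e app_name=nginx-blue   # command-line override wins
```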

#### group_vars/\<clusterid\>/app_vars.yml:
Contains your application-specific variables

---
## Invocation examples: _deploy_, _scale_, _repair_
The `cluster.yml` sub-role immutably deploys a cluster from the config defined above. If it is run again, it will do nothing. If `cluster_vars` is changed (e.g. a host is added), rerunning will update the cluster to reflect the new variables (e.g. the new host will be added to the cluster).

### AWS:
```
-ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected]
-ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_ -e release_version=v1.0.1
-ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] -e clean=_all_
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e cloud_type=aws -e region=eu-west-1 -e clusterid=test [email protected]
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e cloud_type=aws -e region=eu-west-1 -e clusterid=test [email protected] --tags=clusterverse_clean -e clean=_all_
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected]
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_
```
### GCP:
```
-ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gcp_euw1 [email protected]
-ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gcp_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_ -e release_version=v1.0.1
-ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=vtp_gcp_euw1 [email protected] -e clean=_all_
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test -e cloud_type=gcp -e region=europe-west1 [email protected]
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test -e cloud_type=gcp -e region=europe-west1 [email protected] --tags=clusterverse_clean -e clean=_all_
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test_gcp_euw1 [email protected]
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test_gcp_euw1 [email protected] --tags=clusterverse_clean -e clean=_all_
```

### Mandatory command-line variables:
+ `-e clusterid=<vtp_aws_euw1>` - A directory named for the `clusterid` must be present in `group_vars`. Holds the parameters that define the cluster; enables a multi-tenanted repository.
-+ `-e buildenv=<sandbox>` - The environment (dev, stage, etc), which must be an attribute of `cluster_vars` defined in `group_vars/<clusterid>/cluster_vars.yml`
++ `-e buildenv=<sandbox>` - The environment (dev, stage, etc), which must be an attribute of `cluster_vars` (i.e. `cluster_vars.{{buildenv}}`)

### Optional extra variables:
-+ `-e app_name=<nginx>` - Normally defined in `group_vars/<clusterid>/cluster_vars.yml`. The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name
-+ `-e app_class=<proxy>` - Normally defined in `group_vars/<clusterid>/cluster_vars.yml`. The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn
++ `-e app_name=<nginx>` - Normally defined in `/cluster_defs/`. The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name
++ `-e app_class=<proxy>` - Normally defined in `/cluster_defs/`. The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn
+ `-e release_version=<v1.0.1>` - Identifies the application version that is being deployed.
+ `-e clean=[current|retiring|redeployfail|_all_]` - Deletes VMs in the given `lifecycle_state` (or all of them with `_all_`), as well as networking and security groups
+ `-e pkgupdate=[always|onCreate]` - Upgrade the OS packages (not good for determinism). `onCreate` only upgrades when creating the VM for the first time.
@@ -75,10 +75,11 @@ ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> cluster
+ `-e metricbeat_install=false` - Does not install metricbeat
+ `-e wait_for_dns=false` - Does not wait for DNS resolution
+ `-e create_gcp_network=true` - Create GCP network and subnetwork (probably needed if creating from scratch and using public network)
++ `-e debug_nested_log_output=true` - Show the log output from nested calls to embedded Ansible playbooks (i.e. when redeploying)

### Tags
+ `clusterverse_clean`: Deletes all VMs and security groups (also needs `-e clean=[current|retiring|redeployfail|_all_]` on command line)
-+ `clusterverse_create`: Creates only EC2 VMs, based on the hosttype_vars values in group_vars/all/cluster.yml
++ `clusterverse_create`: Creates only EC2 VMs, based on the hosttype_vars values in `/cluster_defs/`
+ `clusterverse_config`: Updates packages, sets hostname, adds hosts to DNS
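
For example, to re-run only the configuration steps on an existing cluster (assuming the tag is runnable standalone; cluster name illustrative):
```
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> cluster.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] --tags=clusterverse_config
```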


@@ -89,15 +89,16 @@ The `redeploy.yml` sub-role will completely redeploy the cluster; this is useful

### AWS:
```
-ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_aws_euw1 [email protected] -e canary=none
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] -e canary=none
+ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e cloud_type=aws -e region=eu-west-1 -e clusterid=test [email protected] -e canary=none
```
### GCP:
```
-ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=vtp_gcp_euw1 [email protected] -e canary=none
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=test_gcp_euw1 [email protected] -e canary=none
+ansible-playbook -u <username> --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=test -e cloud_type=gcp -e region=europe-west1 [email protected] -e canary=none
```

### Mandatory command-line variables:
+ `-e clusterid=<vtp_aws_euw1>` - A directory named for the `clusterid` must be present in `group_vars`. Holds the parameters that define the cluster; enables a multi-tenanted repository.
+ `-e buildenv=<sandbox>` - The environment (dev, stage, etc), which must be an attribute of `cluster_vars` defined in `group_vars/<clusterid>/cluster_vars.yml`
+ `-e canary=['start', 'finish', 'none', 'tidy']` - Specify whether to 'start' or 'finish' a canary redeploy, 'none' to redeploy without a canary phase, or 'tidy' to clean up afterwards (a two-phase example follows this list)
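
A typical two-phase canary redeploy might look like this (cluster name illustrative):
```
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] -e canary=start
# ...validate the canary VM(s), then redeploy the remainder:
ansible-playbook -u ubuntu --private-key=/home/<user>/.ssh/<rsa key> redeploy.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] -e canary=finish
```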

File renamed without changes.
31 changes: 31 additions & 0 deletions EXAMPLE/cluster_defs/aws/cluster_vars.yml
@@ -0,0 +1,31 @@
---

cluster_vars:
  dns_cloud_internal_domain: "{{region}}.compute.internal"  # The cloud-internal zone as defined by the cloud provider (e.g. GCP, AWS)
  dns_nameserver_zone: &dns_nameserver_zone ""  # The zone that dns_server will operate on. gcloud dns needs a trailing '.'. Leave blank if no external DNS (use IPs only)
  dns_server: ""  # Specify DNS server. nsupdate, route53 or clouddns. If empty string is specified, no DNS will be added.
  route53_private_zone: no  # Only used when cluster_vars.type == 'aws'. Defaults to true if not set.
  assign_public_ip: "yes"
  inventory_ip: "public"  # 'public' or 'private', (private in case we're operating in a private LAN). If public, 'assign_public_ip' must be 'yes'
  instance_profile_name: ""
  user_data: |-
    #cloud-config
    system_info:
      default_user:
        name: ansible
  ssh_whitelist: &ssh_whitelist ['10.0.0.0/8']
  secgroups_existing: []
  secgroup_new:
    - proto: "tcp"
      ports: ["22"]
      cidr_ip: "{{_ssh_whitelist}}"
      rule_desc: "SSH Access"
#    - proto: all
#      group_name: "{{cluster_name}}-sg"
#      rule_desc: "Access from all VMs attached to the {{ cluster_name }}-sg group"
#    - proto: "tcp"
#      ports: ["{{ prometheus_node_exporter_port | default(9100) }}"]
#      group_name: "{{buildenv}}-private-sg"
#      rule_desc: "Prometheus instances attached to {{buildenv}}-private-sg can access the exporter port(s)."
_ssh_whitelist: *ssh_whitelist
_dns_nameserver_zone: *dns_nameserver_zone
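
The `&`/`*` pairs above are standard YAML anchors and aliases: values nested under `cluster_vars` are re-exported as top-level variables (e.g. `_ssh_whitelist`) so that templated fields such as `cidr_ip` can reference them. A minimal standalone illustration:
```
ssh_whitelist: &ssh_whitelist ['10.0.0.0/8']   # define the value and anchor it
_ssh_whitelist: *ssh_whitelist                 # alias: resolves to the same list
```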
4 changes: 4 additions & 0 deletions EXAMPLE/cluster_defs/aws/eu-west-1/cluster_vars.yml
@@ -0,0 +1,4 @@
---

cluster_vars:
  image: "ami-055958ae2f796344b"  # eu-west-1, 20.04, amd64, hvm-ssd, 20201210. Ubuntu images can be located at https://cloud-images.ubuntu.com/locator/
14 changes: 14 additions & 0 deletions EXAMPLE/cluster_defs/aws/eu-west-1/sandbox/cluster_vars.yml
@@ -0,0 +1,14 @@
---

cluster_vars:
  sandbox:
    aws_access_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      7669080460651349243347331538721104778691266429457726036813912140404310
    aws_secret_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      7669080460651349243347331538721104778691266429457726036813912140404310
    vpc_name: "test{{buildenv}}"
    vpc_subnet_name_prefix: "{{buildenv}}-test-{{_region}}"
    key_name: "test__id_rsa"
    termination_protection: "no"
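
Vault blobs like the (truncated) ones above can be produced with `ansible-vault encrypt_string`; a sketch, assuming a vault password file at an illustrative path and a placeholder key value:
```
ansible-vault encrypt_string --vault-password-file ~/.vault-pass.txt 'AKIAEXAMPLE' --name 'aws_access_key'
```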
@@ -0,0 +1,65 @@
---

cluster_vars:
  sandbox:
    hosttype_vars:
      sys:
        auto_volumes: [ ]
        flavor: t3a.nano
        version: "{{sys_version | default('')}}"
        vms_by_az: { a: 1, b: 1, c: 0 }

      sysdisks2:
        auto_volumes:
          - { device_name: "/dev/sda1", mountpoint: "/", fstype: "ext4", "volume_type": "gp2", "volume_size": 12, encrypted: True, "delete_on_termination": true }
          - { device_name: "/dev/sdf", mountpoint: "/media/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true, perms: { owner: "root", group: "sudo", mode: "775" } }
          - { device_name: "/dev/sdg", mountpoint: "/media/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true }
        flavor: t3a.nano
        version: "{{sysdisks_version | default('')}}"
        vms_by_az: { a: 1, b: 1, c: 0 }

#      sysdisks3:
#        auto_volumes:
#          - { device_name: "/dev/sdf", mountpoint: "/media/mysvc", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true }
#          - { device_name: "/dev/sdg", mountpoint: "/media/mysvc2", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true }
#          - { device_name: "/dev/sdh", mountpoint: "/media/mysvc3", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true }
#        flavor: t3a.nano
#        version: "{{sysdisks_version | default('')}}"
#        vms_by_az: { a: 1, b: 0, c: 0 }
#
#      hostnvme-multi:
#        auto_volumes:
#          - { device_name: "/dev/sdb", mountpoint: "/media/mysvc", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral0 }
#          - { device_name: "/dev/sdc", mountpoint: "/media/mysvc2", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral1 }
#          - { device_name: "/dev/sdf", mountpoint: "/media/mysvc8", fstype: "ext4", "volume_type": "gp2", "volume_size": 1, encrypted: True, "delete_on_termination": true }
#        flavor: i3en.2xlarge
#        version: "{{sys_version | default('')}}"
#        vms_by_az: { a: 1, b: 0, c: 0 }
#
#      hostnvme-lvm:
#        auto_volumes:
#          - { device_name: "/dev/sdb", mountpoint: "/media/data", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral0 }
#          - { device_name: "/dev/sdc", mountpoint: "/media/data", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral1 }
#        lvmparams: { vg_name: "vg0", lv_name: "lv0", lv_size: "+100%FREE" }
#        flavor: i3en.2xlarge
#        version: "{{sys_version | default('')}}"
#        vms_by_az: { a: 1, b: 0, c: 0 }
#
#      hosthdd-multi:
#        auto_volumes:
#          - { device_name: "/dev/sdb", mountpoint: "/media/mysvc", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral0 }
#          - { device_name: "/dev/sdc", mountpoint: "/media/mysvc2", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral1 }
#          - { device_name: "/dev/sdd", mountpoint: "/media/mysvc3", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral2 }
#        flavor: d2.xlarge
#        version: "{{sys_version | default('')}}"
#        vms_by_az: { a: 1, b: 0, c: 0 }
#
#      hosthdd-lvm:
#        auto_volumes:
#          - { device_name: "/dev/sdb", mountpoint: "/media/data", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral0 }
#          - { device_name: "/dev/sdc", mountpoint: "/media/data", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral1 }
#          - { device_name: "/dev/sdd", mountpoint: "/media/data", fstype: "ext4", "volume_type": "ephemeral", ephemeral: ephemeral2 }
#        lvmparams: { vg_name: "vg0", lv_name: "lv0", lv_size: "+100%FREE" }
#        flavor: d2.xlarge
#        version: "{{sys_version | default('')}}"
#        vms_by_az: { a: 1, b: 0, c: 0 }
62 changes: 62 additions & 0 deletions EXAMPLE/cluster_defs/cluster_vars.yml
@@ -0,0 +1,62 @@
---

redeploy_schemes_supported: ['_scheme_addallnew_rmdisk_rollback', '_scheme_addnewvm_rmdisk_rollback', '_scheme_rmvm_rmdisk_only', '_scheme_rmvm_keepdisk_rollback']

#redeploy_scheme: _scheme_addallnew_rmdisk_rollback
#redeploy_scheme: _scheme_addnewvm_rmdisk_rollback
#redeploy_scheme: _scheme_rmvm_rmdisk_only
#redeploy_scheme: _scheme_rmvm_keepdisk_rollback

app_name: "test"  # The name of the application cluster (e.g. 'couchbase', 'nginx'); becomes part of cluster_name.
app_class: "test"  # The class of application (e.g. 'database', 'webserver'); becomes part of the fqdn

beats_config:
  filebeat:
#    output_logstash_hosts: ["localhost:5044"]  # The destination hosts for filebeat-gathered logs
#    extra_logs_paths:  # The array is optional, if you need to add more paths or files to scrape for logs
#      - /var/log/myapp/*.log
  metricbeat:
#    output_logstash_hosts: ["localhost:5044"]  # The destination hosts for metricbeat-gathered metrics
#    diskio:  # Diskio retrieves metrics for all disk partitions by default. When diskio.include_devices is defined, only look for defined partitions
#      include_devices: ["sda", "sdb", "nvme0n1", "nvme1n1", "nvme2n1"]

## Vulnerability scanners - Tenable and/or Qualys cloud agents:
cloud_agent:
#  tenable:
#    service: "nessusagent"
#    debpackage: ""
#    bin_path: "/opt/nessus_agent/sbin"
#    nessus_key_id: ""
#    nessus_group_id: ""
#    proxy: {host: "", port: ""}
#  qualys:
#    service: "qualys-cloud-agent"
#    debpackage: ""
#    bin_path: "/usr/local/qualys/cloud-agent/bin"
#    config_path: "/etc/default/qualys-cloud-agent"
#    activation_id: ""
#    customer_id: ""
#    proxy: {host: "", port: ""}

## Bind configuration and credentials, per environment
bind9:
  sandbox: {server: "", key_name: "", key_secret: ""}

cluster_name: "{{ app_name }}-{{ buildenv }}"  # Identifies the cluster within the cloud environment

cluster_vars:
  type: "{{cloud_type}}"
  region: "{{region}}"
  dns_cloud_internal_domain: ""  # The cloud-internal zone as defined by the cloud provider (e.g. GCP, AWS)
  dns_nameserver_zone: &dns_nameserver_zone ""  # The zone that dns_server will operate on. gcloud dns needs a trailing '.'. Leave blank if no external DNS (use IPs only)
  dns_user_domain: "{%- if _dns_nameserver_zone -%}{{cloud_type}}-{{region}}.{{app_class}}.{{buildenv}}.{{_dns_nameserver_zone}}{%- endif -%}"  # A user-defined _domain_ part of the FQDN, (if more prefixes are required before the dns_nameserver_zone)
  dns_server: ""  # Specify DNS server. nsupdate, route53 or clouddns. If empty string is specified, no DNS will be added.
  custom_tagslabels:
    inv_resident_id: "myresident"
    inv_proposition_id: "myproposition"
    inv_environment_id: "{{buildenv}}"
    inv_service_id: "{{app_class}}"
    inv_cluster_id: "{{cluster_name}}"
    inv_cluster_type: "{{app_name}}"
    inv_cost_centre: "1234"
_dns_nameserver_zone: *dns_nameserver_zone
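
Since `redeploy_scheme` is an ordinary variable, any of the commented-out schemes above can also be selected per invocation (cluster name illustrative):
```
ansible-playbook redeploy.yml -e buildenv=sandbox -e clusterid=test_aws_euw1 [email protected] -e redeploy_scheme=_scheme_rmvm_keepdisk_rollback -e canary=none
```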
