feat(research): Create staging-1 server with auto wiping database #1530

Merged · 2 commits · Nov 18, 2022
14 changes: 10 additions & 4 deletions deploy/Makefile
@@ -57,16 +57,22 @@ prod:
--profile si \
up --detach

prod-service:
staging:
GATEWAY=$(shell $(MAKEPATH)/scripts/gateway.sh) \
docker-compose \
-f $(MAKEPATH)/docker-compose.yml \
-f $(MAKEPATH)/docker-compose.env-static.yml \
-f $(MAKEPATH)/docker-compose.pganalyze.yml \
-f $(MAKEPATH)/docker-compose.prod.yml \
--profile si \
-f $(MAKEPATH)/docker-compose.staging.yml \
--profile si-watchtower \
up

guinea:
GATEWAY=$(shell $(MAKEPATH)/scripts/gateway.sh) \
docker-compose \
-f $(MAKEPATH)/docker-compose.yml \
--profile guinea \
up guinea

web: init
# REPOPATH=$(REPOPATH) $(MAKEPATH)/scripts/check-for-artifacts-before-mounting.sh
$(MAKEPATH)/scripts/generate-ci-yml.sh $(CI_FROM_REF) $(CI_TO_REF)
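For orientation, the two new targets above are what the systemd units further down in staging-1.yaml end up invoking; roughly (a sketch, run from the repo's deploy/ directory or /opt/deploy on the staging host):

```shell
# Bring up the full SI stack with the staging overrides and watchtower auto-updates.
make staging
# Bring up only the throwaway test container used to exercise watchtower.
make guinea
```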
7 changes: 7 additions & 0 deletions deploy/docker-compose.staging.yml
@@ -0,0 +1,7 @@
---
version: "3"

services:
pg:
environment:
- PGA_SYSTEM_ID=staging
23 changes: 21 additions & 2 deletions deploy/docker-compose.yml
@@ -36,6 +36,9 @@ services:
- "faktory:${GATEWAY:-I like my butt}"
labels:
- "com.centurylinklabs.watchtower.enable=true"
- "com.centurylinklabs.watchtower.lifecycle.pre-update='/reset-database.sh'"
volumes:
- "/opt/deploy/scripts/reset-database.sh:/reset-database.sh:ro"
depends_on:
- pg
- faktory
@@ -128,7 +131,23 @@ services:
profiles:
- watchtower
- si-watchtower
environment:
- "WATCHTOWER_LIFECYCLE_HOOKS=true"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "${DOCKER_CONFIG:-~/.docker/config}:/config.json:ro"
command: --interval 30 --label-enable
- "${DOCKER_CONFIG:-~/.docker/config.json}:/config.json:ro"
command: --interval 10 --label-enable


guinea:
image: "index.docker.io/systeminit/guinea:stable"
labels:
- "com.centurylinklabs.watchtower.enable=true"
- "com.centurylinklabs.watchtower.lifecycle.pre-update='/reset-database.sh'"
volumes:
- "/opt/deploy/scripts/reset-database.sh:/reset-database.sh:ro"
profiles:
- guinea
extra_hosts:
- "postgres:${GATEWAY:-I like my butt}"
ports:
- "8080:80"
6 changes: 6 additions & 0 deletions deploy/scripts/reset-database.sh
@@ -0,0 +1,6 @@
#!/usr/bin/env bash
apt update
apt install -y postgresql-client
Comment on lines +2 to +3

Contributor: Where are these commands run?

Contributor Author: They run inside the container that is going to be affected. Since we're running everything on Debian (Rust's default Docker images use it as a base), this is safe, but it does feel a bit hack-ish.

# TODO(victor): At some point we need to start managing the db credentials as secrets
export PGPASSWORD="bugbear"
psql -U si -d si -h postgres -c " DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
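Since the hook runs inside the labeled container itself (see the discussion above), it can also be exercised by hand to verify the wipe; a sketch, where the container name is a placeholder rather than something defined in this PR:

```shell
# Hypothetical manual check: run the mounted hook inside a labeled service
# container, then confirm the public schema is empty again.
docker exec <service-container> /reset-database.sh
docker exec -e PGPASSWORD=bugbear <service-container> \
  psql -U si -d si -h postgres -c '\dt public.*'
```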
20 changes: 13 additions & 7 deletions research/staging_host/README.md
@@ -2,23 +2,29 @@

The files in this folder allow you to deploy an EC2
instance that automatically deploys the latest versions
of SI's containers, resetting the env on every update.
of SI's containers, resetting the database on every update.

Right now, it's only bringing up a coreos instance with
SI's containers on startup, but no auto-update via watchtower.
SI's containers on startup, but no auto-update via watchtower at first boot.

It can be started by running, from the folder containing this file:

```
```shell
butane staging-1.yaml --pretty --strict --files-dir ../../ > staging-1.ign
terraform apply -auto-approve
```

For watchtower (a.k.a. auto updates) to work, you need to log in to the server and execute the following to disable
SELinux:

```shell
sudo sed -i -e 's/SELINUX=/SELINUX=disabled #/g' /etc/selinux/config && sudo systemctl reboot
```

The server will reboot and restart all services with auto updates enabled.
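As a quick sanity check after the reboot (a suggestion, not part of the committed README text), SELinux should report itself disabled and the watchtower container should be up:

```shell
getenforce                      # expected: Disabled once the config change takes effect
docker ps | grep -i watchtower  # the auto-update container should be running
```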

The way it's working right now, butane copies the deployment
docker compose files and makefile onto the server,
and executes it. The idea would be to, in the future,
execute each server via its own systemd unit, and have
watchtower setup with a pre update
[lifecycle hook](https://containrrr.dev/watchtower/lifecycle-hooks/)
that wipes all the data whenever sdf or the dal get updated
execute each server via its own systemd unit.
Binary file added research/staging_host/dockersock.pp
Binary file not shown.
12 changes: 12 additions & 0 deletions research/staging_host/dockersock.te
@@ -0,0 +1,12 @@
module dockersock 1.0;

require {
type docker_var_run_t;
type docker_t;
type svirt_lxc_net_t;
class sock_file write;
class unix_stream_socket connectto;
}

allow svirt_lxc_net_t docker_t:unix_stream_socket connectto;
allow svirt_lxc_net_t docker_var_run_t:sock_file write;
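The committed dockersock.pp binary is presumably built from this .te source with the standard SELinux policy toolchain, along these lines:

```shell
# Build the binary policy package from the type enforcement source, then load it
# (the butane unit in staging-1.yaml runs the semodule step at boot).
checkmodule -M -m -o dockersock.mod dockersock.te
semodule_package -o dockersock.pp -m dockersock.mod
sudo semodule -i dockersock.pp
```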
2 changes: 2 additions & 0 deletions research/staging_host/guinea_image/Dockerfile
@@ -0,0 +1,2 @@
FROM nginx:stable
COPY index.html /usr/share/nginx/html/
7 changes: 7 additions & 0 deletions research/staging_host/guinea_image/index.html
@@ -0,0 +1,7 @@
<html lang="en">
<head><title>SI Test Image</title></head>
<body>
<h1>System Initiative Test Image</h1>
<i>V0.0.0</i>
</body>
</html>
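The guinea service in docker-compose.yml pulls index.docker.io/systeminit/guinea:stable, so this image presumably gets built and pushed from this directory roughly as follows (a sketch; registry credentials and tagging policy are not covered by this PR):

```shell
# Build the trivial nginx-based test image and push it so watchtower has
# something new to pull on the staging host.
docker build -t systeminit/guinea:stable research/staging_host/guinea_image
docker push systeminit/guinea:stable
```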
26 changes: 26 additions & 0 deletions research/staging_host/selinux.conf
@@ -0,0 +1,26 @@
# This file controls the state of SELinux on the system.
# SELINUX=disabled # can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
# See also:
# https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-selinux/#getting-started-with-selinux-selinux-states-and-modes
#
# NOTE: In earlier Fedora kernel builds, SELINUX=disabled #disabled would also
# fully disable SELinux during boot. If you need a system with SELinux
# fully disabled instead of SELinux running with no policy loaded, you
# need to pass selinux=0 to the kernel command line. You can use grubby
# to persistently set the bootloader to boot with selinux=0:
#
# grubby --update-kernel ALL --args selinux=0
#
# To revert back to SELinux enabled:
#
# grubby --update-kernel ALL --remove-args selinux
#
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
84 changes: 56 additions & 28 deletions research/staging_host/staging-1.yaml
@@ -6,6 +6,15 @@ storage:
- path: /etc/hostname
contents:
inline: staging-1
- path: /opt/dockersock.pp
mode: 0755
contents:
local: research/staging_host/dockersock.pp
# We need to disable SELINUX (or make a new policy) so that watchtower can get credentials to dockerhub
# - path: /etc/selinux/config
# mode: 0644
# contents:
# local: research/staging_host/selinux.conf
- path: /usr/local/bin/docker-auth.sh
mode: 0755
contents:
@@ -22,6 +31,10 @@ storage:
mode: 0755
contents:
local: deploy/scripts/gateway.sh
- path: /opt/deploy/scripts/reset-database.sh
mode: 0755
contents:
local: deploy/scripts/reset-database.sh
- path: /opt/deploy/docker-compose.yml
contents:
local: deploy/docker-compose.yml
@@ -31,12 +44,27 @@
- path: /opt/deploy/docker-compose.pganalyze.yml
contents:
local: deploy/docker-compose.pganalyze.yml
- path: /opt/deploy/docker-compose.prod.yml
- path: /opt/deploy/docker-compose.staging.yml
contents:
local: deploy/docker-compose.prod.yml
local: deploy/docker-compose.staging.yml
systemd:
units:
# installing aws-cli as a layered package with rpm-ostree
- name: install-selinux-dockersock-policy.service
enabled: true
contents: |
[Unit]
Description=Install SELINUX Docker sock policy
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt
ExecStart=semodule -i dockersock.pp

[Install]
WantedBy=multi-user.target
- name: layer-awscli.service
enabled: true
contents: |
@@ -55,6 +83,28 @@ systemd:
RemainAfterExit=yes
ExecStart=/usr/bin/rpm-ostree install --apply-live --allow-inactive --idempotent awscli

[Install]
WantedBy=multi-user.target
# Note(victor): This is not vital but is necessary. I will not be taking questions at this time.
- name: layer-vim.service
enabled: true
contents: |
[Unit]
Description=Install Vim
Wants=network-online.target
After=network-online.target

# We run before `zincati.service` to avoid conflicting rpm-ostree
# transactions. - https://docs.fedoraproject.org/en-US/fedora-coreos/os-extensions/
After=layer-awscli.service
Before=zincati.service


[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rpm-ostree install --apply-live --allow-inactive --idempotent vim
Comment on lines +88 to +106

Contributor: Jokes aside, there may be a "system packages" step that evolves from the groundwork here. For instance, we will likely eventually want toolbox on here so that we don't have to do this on the host.

Contributor Author: This is very cool! I also feel like butane could make this simpler, but the task that tracks this isn't very active, sadly.


[Install]
WantedBy=multi-user.target
- name: layer-make.service
@@ -65,7 +115,7 @@
Wants=network-online.target
After=network-online.target

After=layer-awscli.service
After=layer-vim.service
Before=zincati.service


@@ -148,30 +198,8 @@ systemd:
[Service]
TimeoutStartSec=60s
WorkingDirectory=/opt/deploy
ExecStart=make prod-service
ExecStartPre=-/usr/bin/docker-compose down
ExecStart=make staging

[Install]
WantedBy=multi-user.target
# - name: watchtower.service
# enabled: true
# contents: |
# [Unit]
# After=network-online.target
# Wants=network-online.target
#
# After=deployment.service
# Requires=deployment.service
#
#
# [Service]
# ExecStartPre=-/usr/bin/docker kill whiskers1
# ExecStartPre=-/usr/bin/docker rm whiskers1
# ExecStart=/usr/bin/docker run --name watchtower \
# -v /var/run/docker.sock:/var/run/docker.sock docker.io/containrrr/watchtower \
# -v /root/.docker/config.json:/config.json \
# --interval 30 --label-enable \
# containrrr/watchtower
#
# [Install]
# WantedBy=multi-user.target

2 changes: 1 addition & 1 deletion research/staging_host/staging.tf
@@ -17,7 +17,7 @@ data "local_file" "ignition" {

resource "aws_instance" "staging-1" {
ami = "ami-0e6f4ffb61e585c76"
instance_type = "t3.medium"
instance_type = "t3.large"
subnet_id = "subnet-07d580fee7a806230"
vpc_security_group_ids = ["sg-0d0be672e4485feb4"]
key_name = "si_key"