Update: This repository is archived for the foreseeable future. We will consider reviving it if our infrastructure returns to deployment with Ansible and Terraform.
This repository includes all the files necessary to describe and deploy the technology stack of ACM @ UCSD. You will find files such as:
- Ansible roles and playbooks
- Terraform files
- Custom patches and images for certain services
ACM @ UCSD’s tech stack is complex enough to warrant documentation. This repository, together with this document, describes how to test and deploy the technology stack.
Provisioning and deployment of instances are handled by Terraform and Ansible, respectively.
Make any required changes to the infrastructure, configuration files or environment variables, then start the instances:
terraform apply
Afterwards, make sure your SSH key grants access to each of the hosts, then run:
ansible-galaxy install -r requirements.yml
ansible-playbook -i production stack.yml
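If you want to preview changes before touching the real hosts, Ansible’s built-in check mode can do a dry run against the same inventory (a sketch; tasks that depend on earlier changes may not evaluate cleanly in check mode):
# Dry run: show what would change without applying anything
ansible-playbook -i production stack.yml --check --diff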
The tech stack has the following main services:
- Membership Portal UI & API
- ACMURL
- BreadBot
- Password Manager
Each service will be deployed on its own instance, sized appropriately for its workload.
We will begin by provisioning instances for each service, and then we’ll configure the domains for each of them.
Each of the services above likely needs at most 1 GB of RAM.[fn:current-stack-requirements]
To maintain uptime for all components of the stack, each service runs on a separate instance of appropriate size.
Additionally, each service will be provisioned with its own Ansible playbook.
We’ll begin with the standard configuration for most AWS instances: an SSH keypair and a security group allowing inbound HTTP, HTTPS and SSH traffic. I’ve included my personal public key (@Storm_FireFox1) for demonstration purposes.
provider "aws" {
region = "us-west-1"
}
resource "aws_key_pair" "stormfirefox1" {
key_name = "stormfirefox1-key"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCj25h71sryk/5epo3yObh8HspR6ODMyaF/TSSC1SpIB2f9mPUKeCA8k32mrNpRCtVVwAhCn9OoVLfbfs4KzPNrxVByexqH6FORXY75mRUVdqF2qv9WDk7lANiTXV12+b9sMhtbGsU7r+CXGMCN+QZvnvLFb9xadKxNhJIyy+yQFu931uKl+GcGgePs4yj8uRLQ+o1yNU1WxzZgGHbBRAv/B98RCWoJTd+8RHfT/OxQL0PKx/LdHae/ium99xEJQ5DrbtMI+9BEM4vKN0FMCuVjPQ8sC8tIxzyqWZYyCIkiZmoH7TKGvD1Yg9jI9RxcuXB5793seiORAw9pH/U5w/8JKie5eGWXgrJkHMqqrvNGrw6TEk2gfaBllm8QSfS8O97xs6/BC1ENXUZ9R4rEGKpW9+pzHx/FTyD6FMMRstRcPjPCtU6F15g6XYAAjpbvxn4RiLT1Rk3mbJtM2xTAIUjYSfukGKzAs5gRp3TSNgdN+nsmcbJRMsA8FXubogDtYUO3VbpjsALA/bBHtjKziBVhbnoQ6PrXFrFaojeSq4Sf8iOOXK29Xfv/TSc+potmfw4wQMwSXJgOBFFhv9Xa/LwGf+1KpHpsfHO4xJ2yLpTUoiryvsbDsNk0+JbkMUgfNrJ+W01Lwf4qMKXbBOtNFf8gkTnj2Af3N6OCFa8XdgRO+Q== STORM_GPG_2FB5275E"
}
resource "aws_security_group" "allow_https_ssh" {
name = "allow_https_ssh"
description = "Allow HTTPS/SSH traffic"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
We’ll continue with the easiest instances to plan, those for the small services: BreadBot and the Password Manager.
# Ubuntu 20.04 LTS instance, for good measure
# Find other AMIs at https://cloud-images.ubuntu.com/locator/ec2/
resource "aws_instance" "breadbot" {
ami = "ami-021809d9177640a20"
instance_type = "t3a.nano"
key_name = aws_key_pair.stormfirefox1.key_name
security_groups = [aws_security_group.allow_https_ssh.name]
tags = {
Name = "BreadBot"
}
}
resource "aws_instance" "pass" {
ami = "ami-021809d9177640a20"
instance_type = "t3a.nano"
key_name = aws_key_pair.stormfirefox1.key_name
security_groups = [aws_security_group.allow_https_ssh.name]
tags = {
Name = "Password Manager"
}
}
Additionally, we may move Minecraft to AWS in the future, in which case we’ll need to provision an instance for it as well. We’ll keep the configuration here for posterity.
# RETIRED TEMPORARILY
# resource "aws_instance" "minecraft" {
# ami = "ami-021809d9177640a20"
# instance_type = "t3a.medium"
# key_name = aws_key_pair.stormfirefox1.key_name
# security_groups = [aws_security_group.allow_https_ssh.name]
# tags = {
# Name = "Minecraft Server"
# }
# }
We will also include instances for the membership portal; we will need one for the API, and we’ll use AWS RDS to host the PostgreSQL database:
# ON STANDBY ON HEROKU, WIP
# resource "aws_instance" "membership-portal" {
# ami = "ami-0a63cd87767e10ed4"
# instance_type = "t3a.micro"
# tags = {
# Name = "Membership Portal API"
# }
# }
#
# resource "aws_db_instance" "membership-portal-db" {
# allocated_storage = 10
# engine = "postgres"
# instance_class = "db.t3.micro"
# name = var.dbName
# username = var.dbUser
# password = var.dbPass
# }
Lastly, we’ll include the S3 buckets used by the Membership Portal:
resource "aws_s3_bucket" "acmucsd" {
bucket = "acmucsd"
acl = "private"
}
resource "aws_s3_bucket" "acmucsd-membership-portal" {
bucket = "acmucsd-membership-portal"
acl = "private"
}
Note that the variables used above originate from the provided .env file. Edit its contents with values of your choice. The playbooks will also use them to properly deploy .env files.
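As a minimal sketch of one way to wire this up (assuming the .env file defines hypothetical DB_NAME, DB_USER and DB_PASS entries, and that matching Terraform input variables are declared), the values can be exported as TF_VAR_ environment variables before running Terraform:
# Hypothetical example: load .env into the shell, then expose the values
# to Terraform as TF_VAR_* input variables.
set -a; source .env; set +a
export TF_VAR_dbName="$DB_NAME" TF_VAR_dbUser="$DB_USER" TF_VAR_dbPass="$DB_PASS"
terraform plan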
Using the AWS Calculator, we obtain the following costs for all the instances:
| Instance | Instance Type | Cost Per Month (USD) |
|---|---|---|
| BreadBot | t3a.nano | 5.29 |
| Pass | t3a.nano | 5.29 |
| Portal API | t3a.micro | 9.38 |
| Portal Database | db.t3.micro | 16.55 |
| Total Cost / Month | | 36.51 |
| Total Cost / Year | | 438.12 |
Assuming all goes well, running Terraform will deploy the stack.
terraform init && terraform apply
We will now begin deploying the software for each service. We’ll document the SSH commands used in tandem with the respective Ansible task and role.
[fn:current-stack-requirements] Funnily enough, this is hard to quantify properly. Pass and BreadBot both occupied a GCP f1-micro, which has 0.6 GB of RAM, so a bit more headroom is useful. The API, however, is up for discussion, considering Heroku is less transparent about system requirements.
We’ll need to configure the domain for each service, along with a few other manually managed DNS records. We will use the acmucsd.com domain, as that is where most of our tech stack lives:
resource "aws_route53_zone" "acmucsd-com-public" {
name = "acmucsd.com"
comment = "ACM's main domain for stuff."
tags = {
}
}
resource "aws_route53_zone" "acmurl-com-public" {
name = "acmurl.com"
comment = "ACM's domain used for URL shortening."
tags = {
}
}
We’ll add the standard records for domains in the mix, including the root A record, NS records and SOA records:
resource "aws_route53_record" "acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "acmucsd.com"
type = "A"
records = ["104.198.14.52"]
ttl = "3600"
}
resource "aws_route53_record" "www-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "www.acmucsd.com"
type = "A"
records = ["104.198.14.52"]
ttl = "3600"
}
resource "aws_route53_record" "acmucsd-com-NS" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "acmucsd.com"
type = "NS"
records = ["ns-2024.awsdns-61.co.uk.", "ns-278.awsdns-34.com.", "ns-1200.awsdns-22.org.", "ns-591.awsdns-09.net."]
ttl = "172800"
}
resource "aws_route53_record" "acmucsd-com-SOA" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "acmucsd.com"
type = "SOA"
records = ["ns-2024.awsdns-61.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"]
ttl = "900"
}
For the root domain, we’ll also add the necessary CNAME records pointing to SendGrid’s DKIM records:
resource "aws_route53_record" "s1-_domainkey-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "s1._domainkey.acmucsd.com"
type = "CNAME"
records = ["s1.domainkey.u17821998.wl249.sendgrid.net"]
ttl = "3600"
}
resource "aws_route53_record" "s2-_domainkey-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "s2._domainkey.acmucsd.com"
type = "CNAME"
records = ["s2.domainkey.u17821998.wl249.sendgrid.net"]
ttl = "3600"
}
The records below are, in reality, for ACM AI’s domains and their own infrastructure. We’ll leave them here for ACM AI to explain:
resource "aws_route53_record" "ai-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "ai.acmucsd.com"
type = "CNAME"
records = ["acmucsd-ai.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "sendgrid-ai-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "17821998.ai.acmucsd.com"
type = "CNAME"
records = ["sendgrid.net"]
ttl = "3600"
}
resource "aws_route53_record" "api-ai-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "api.ai.acmucsd.com"
type = "A"
records = ["104.155.168.98"]
ttl = "3600"
}
resource "aws_route53_record" "open-ai-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "open.ai.acmucsd.com"
type = "CNAME"
records = ["openai-acm-ai.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "compete-ai-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "compete.ai.acmucsd.com"
type = "A"
records = ["34.120.177.157"]
ttl = "3600"
}
resource "aws_route53_record" "apitest-ai-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "apitest.ai.acmucsd.com"
type = "A"
records = ["35.208.23.117"]
ttl = "3600"
}
resource "aws_route53_record" "em4616-ai-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "em4616.ai.acmucsd.com"
type = "CNAME"
records = ["u17821998.wl249.sendgrid.net"]
ttl = "3600"
}
resource "aws_route53_record" "url4522-ai-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "url4522.ai.acmucsd.com"
type = "CNAME"
records = ["sendgrid.net"]
ttl = "3600"
}
The Membership Portal continues to be hosted on Heroku, so we’ll need CNAMEs pointing to Heroku’s DNS services for the API:
resource "aws_route53_record" "api-test-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "api-test.acmucsd.com"
type = "CNAME"
records = ["acmucsd-portal-testing.herokuapp.com"]
ttl = "3600"
}
resource "aws_route53_record" "api-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "api.acmucsd.com"
type = "CNAME"
records = ["shallow-koi-v9n1nho6ee48b480cn08m1hr.herokudns.com"]
ttl = "3600"
}
Up next comes an influx of CNAMEs pointing to Netlify apps, where most of ACM’s React sites are hosted:
resource "aws_route53_record" "design-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "design.acmucsd.com"
type = "CNAME"
records = ["acmdesign.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "hack-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "hack.acmucsd.com"
type = "CNAME"
records = ["acmhack-site.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "members-test-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "members-test.acmucsd.com"
type = "CNAME"
records = ["members-nightly.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "members-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "members.acmucsd.com"
type = "CNAME"
records = ["acmucsd.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "space2020-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "space2020.acmucsd.com"
type = "CNAME"
records = ["space2020.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "splash-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "splash.acmucsd.com"
type = "CNAME"
records = ["acmucsd-purple.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "static-template-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "static-template.acmucsd.com"
type = "CNAME"
records = ["acm-static.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "tree-acmucsd-com-CNAME" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "tree.acmucsd.com"
type = "CNAME"
records = ["acmucsd-tree.netlify.app"]
ttl = "3600"
}
resource "aws_route53_record" "vote-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "vote.acmucsd.com"
type = "A"
records = ["217.156.97.70"]
ttl = "3600"
}
Each of the instances we provisioned has its own domain, so let’s assign A records for all of them. We’ll set two domains for ACMURL: acmurl.com and url.acmucsd.com:
resource "aws_route53_record" "bot-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "bot.acmucsd.com"
type = "A"
records = [aws_instance.breadbot.public_ip]
ttl = "3600"
}
resource "aws_route53_record" "acmurl-com-A" {
zone_id = aws_route53_zone.acmurl-com-public.zone_id
name = "acmurl.com"
type = "A"
records = [aws_instance.breadbot.public_ip]
ttl = "3600"
}
resource "aws_route53_record" "url-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "url.acmucsd.com"
type = "A"
records = [aws_instance.breadbot.public_ip]
ttl = "3600"
}
resource "aws_route53_record" "pass-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "pass.acmucsd.com"
type = "A"
records = [aws_instance.pass.public_ip]
ttl = "3600"
}
resource "aws_route53_record" "mc-acmucsd-com-A" {
zone_id = aws_route53_zone.acmucsd-com-public.zone_id
name = "mc.acmucsd.com"
type = "A"
records = ["51.81.26.152"]
ttl = "3600"
}
For debugging purposes, the tech-stack repo provides a Vagrantfile to deploy test instances locally. You can boot each of them by installing Vagrant and VirtualBox; afterwards, run this command in the terminal:
vagrant up
To make this documentation easier to follow, we’ll assume that you are configuring instances provisioned with either the provided Terraform file or the provided Vagrantfile.
tech-stack provides two Ansible inventory files to ease its use: testing and production. testing points to the local Vagrant boxes, whereas production points to the Terraform-provisioned instances; you may use either one by including it in each Ansible playbook command:
ansible-playbook -i <inventory>
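To verify that an inventory is reachable before running any playbooks, a quick ad-hoc ping can be used (assuming SSH access to the hosts in that inventory is already set up):
# Check connectivity to every host in the chosen inventory
ansible all -i production -m ping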
It is also recommended to set all of the environment variables for each service you wish to configure. There are .env.example files provided for each service; all that is necessary is to copy each file and fill in the secrets:
cp .env.example .env
A self-hosted Vault instance would fix the problem of hosting secrets like these, but that’s too complex for this setup.
The password manager is, in essence, a bitwarden_rs instance deployed from its Docker image. We will use the pass Ansible host for all the following commands once we start building the playbook.
The easiest way, by far, to install bitwarden_rs is to use the Docker Compose tutorial provided by the bitwarden_rs wiki.
First, configure the remote machine so that you can connect to it using your SSH keys. Either create a new key or provide your public key. If developing using a Vagrant box, use this command to import the pass host settings into your SSH configuration:
vagrant ssh-config --host pass >> $HOME/.ssh/config
You can also set these parameters manually in your SSH configuration file for the Terraform-provisioned instances.[fn:ssh-terraform]
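For reference, a manual entry might look something like the following sketch; the host alias matches the inventory name, the IP is the instance’s public address from Terraform or the AWS console (a placeholder is shown here), and ubuntu is assumed as the default user for Ubuntu AMIs:
cat <<'EOF' >> $HOME/.ssh/config
# Hypothetical entry for the Terraform-provisioned pass instance
Host pass
  HostName 203.0.113.10   # placeholder; use the instance's public IP
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
EOF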
First, we’ll want to install Docker by following the installation guide for Ubuntu:
sudo apt-get update
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Similarly for Docker Compose, using this installation guide:
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
Second, we’ll need to copy over the Docker Compose file for the pass instance, provided in the repo, along with the environment variables for it:
scp ./roles/pass/files/docker-compose.yml pass:~/
scp ./roles/pass/files/.env pass:~/
scp ./roles/pass/files/Caddyfile pass:~/
Afterwards, we’ll want to log into the pass instance and create the data directory used by bitwarden_rs in the same directory as the Compose file:
mkdir ~/bw-data
Then, all we have to do is spin up the Docker Compose service:
sudo docker-compose up -d
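To confirm the containers came up correctly, the Compose status and recent logs can be checked (the service names come from the docker-compose.yml provided in the repo):
# List the containers managed by this Compose file and tail their logs
sudo docker-compose ps
sudo docker-compose logs --tail=20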
The Ansible role pass covers the installation process of the password manager for a brand new Ubuntu instance. You will also need to deploy the geerlingguy.docker role for the instance.
Run the pass.yml playbook to configure the password manager:
ansible-playbook -i production pass.yml
Up next, we will configure BreadBot.
[fn:ssh-terraform] In reality, this is probably not necessary; AWS/Azure will likely configure SSH keys for the default accounts on the instances, but that would require some Terraform configuration; a good task for later.
BreadBot is ACM’s Discord bot: a Node.js application running on a small instance. Installation, while time-consuming, is not difficult at all.
We will use the bot Ansible host for all the following commands once we start building the playbook.
Configure the remote machine so that you can connect to it using your SSH keys. Either create a new key or provide your public key. If developing using a Vagrant box, use this command to import the bot instance settings into your SSH configuration:
vagrant ssh-config --host bot >> $HOME/.ssh/config
First, we will install all the required dependencies: git, npm and Node:
sudo apt-get install -y curl git
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs gcc g++ make
Second, we will clone the Git repo for BreadBot into /opt. We’ll also assume the main user that runs BreadBot is saved in the environment variable BREADBOT_USER on the remote shell:
sudo mkdir -p /opt
sudo git clone https://github.com/acmucsd/discord-bot /opt/discord-bot
sudo chown -R $BREADBOT_USER:$BREADBOT_USER /opt/discord-bot
Afterwards, we will copy over the set-up environment variables using the provided .env file:
scp ./files/breadbot/.env bot:/opt/discord-bot/
Finally, we will install all the required Node dependencies:
cd /opt/discord-bot
npm install
Additionally, we would like to ensure the Discord bot remains running even when our shell dies. The easiest way to manage this (considering Ansible will also manipulate our instances) is a Systemd service unit. These are not that hard to configure, and we really just need a basic service that imports the environment variables and runs the npm start task for BreadBot. If you want to peruse the file, you can find it at files/breadbot/service.conf, but in essence the important part is:
[Service]
EnvironmentFile=/opt/discord-bot/.env
ExecStart=/usr/bin/npm run start
ExecStop=/usr/bin/pkill -f npm
WorkingDirectory=/opt/discord-bot
We’ll copy this service unit file over to the server and install it under the system services. Run this on your machine:
scp ./files/breadbot/breadbot.service bot:/opt/discord-bot
Afterwards, run this on the bot instance:
sudo mv /opt/discord-bot/breadbot.service /etc/systemd/system
sudo chown root:root /etc/systemd/system/breadbot.service
Now, we simply have to reload the Systemd daemon and start the service. From here on out, Systemd is responsible for managing the BreadBot service:
sudo systemctl daemon-reload
sudo systemctl start breadbot.service
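It’s also worth enabling the unit so BreadBot starts on boot, and checking that the service is healthy (a quick sanity check, not part of the original steps):
# Start BreadBot automatically on boot and verify it is running
sudo systemctl enable breadbot.service
systemctl status breadbot.service
sudo journalctl -u breadbot.service --no-pager -n 20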
The Ansible role breadbot covers the installation process of BreadBot for a brand new Ubuntu instance. You will also need to deploy the geerlingguy.nodejs role for the instance.
Run the bot.yml playbook to configure BreadBot:
ansible-playbook -i production bot.yml
Minecraft is a tricky instance to configure, primarily because of the backup functionality, which lives in Google Drive. rclone makes this functionality easier to configure, though.
Remember to configure the remote machine so that you can connect to it using your SSH keys. Either create a new key or provide your public key. If developing using a Vagrant box, use this command to import the mc instance settings into your SSH configuration:
vagrant ssh-config --host mc >> $HOME/.ssh/config
Fortunately, the bulk of the configuration for the Minecraft server has already been written by Gideon Tong over at his GitHub repo. Although not 100% complete, it’s a good enough starting point and is already imported into this repository. In addition to the provided GitHub repo, however, we have included:
- the Gdrive Rclone configuration to allow mounting the backup location
- the service files for running the server
- the scripts to load the Overviewer world rendering on the browser
- the world restore functionality using an additional Ansible playbook
The first, most important thing is to obtain a Google Drive service account for the address [email protected], which is where the Minecraft server hosts its backups. While there are many tutorials out there on how to do that, obtaining the service account credentials JSON is up to you. These credentials allow easy and safe mounting of the Drive on the backup instance, which is where we’ll do backups.
Once you obtain the Google service account JSON file, add it to the Minecraft files directory as files/mc/credentials.json so that the Ansible playbook can properly extract it (DO NOT CHECK IT INTO VERSION CONTROL).
As for the Rclone configuration file included in the mc directory: the Google console side is documented well enough by the rclone config command’s interactive prompts on the server, so it’s best to run that remotely and generate a new configuration file should the default credentials or other items change. Otherwise, if nothing has changed, you may use the checked-in version of the configuration file.
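If you do regenerate the configuration, rclone config walks you through the prompts interactively, and a quick listing confirms the remote works (the remote name gdrive below is an assumption; use whatever name the checked-in rclone.conf defines):
# Interactively (re)create the Google Drive remote, then sanity-check it
rclone config
rclone lsd gdrive:   # 'gdrive' is a hypothetical remote name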
We will first create the user minecraft:
sudo useradd -m minecraft
Then, we’ll create the configuration directory on the server:
sudo mkdir -p /opt/backup
sudo chown -R minecraft:minecraft /opt/backup
Afterwards, we’ll transfer the configuration files there. Run this on your local machine:
scp ./roles/mc/files/rclone.conf mc:~/
scp ./roles/mc/files/gdrive.service mc:~/
scp ./roles/mc/files/credentials.json mc:~/
scp ./roles/mc/files/minecraft.service mc:~/
Then run this on the server instance:
sudo mkdir -p /opt/backup
sudo chown -R minecraft:minecraft /opt/backup
sudo mv ~/credentials.json /opt/backup
sudo mv ~/rclone.conf /opt/backup
sudo mv ~/gdrive.service /etc/systemd/system
sudo mv ~/minecraft.service /etc/systemd/system
Now, we will need to install two programs, rclone and borg, used for the backup functionality. We’ll also need to install Java to run the Minecraft server, as well as Minecraft Overviewer, necessary for rendering the browser map of the Minecraft world; this requires adding an APT repository to the machine. Lastly, we also want Caddy, a webserver to serve the Minecraft browser map we’ll generate later:
echo 'deb https://overviewer.org/debian ./' \
| sudo tee -a /etc/apt/sources.list.d/overviewer.list
echo "deb [trusted=yes] https://apt.fury.io/caddy/ /" \
| sudo tee -a /etc/apt/sources.list.d/caddy-fury.list
wget -O - https://overviewer.org/debian/overviewer.gpg.asc | sudo apt-key add - # key for Overviewer
sudo apt-get update
curl https://rclone.org/install.sh | sudo bash
sudo apt-get install -y default-jdk minecraft-overviewer caddy
The latest borg binary has a more involved installation process, since the version in the Ubuntu repositories is older. We need a newer version so we can easily extract the latest backup from the Borg backup repository whenever we initialize a new Minecraft instance:
curl -s https://api.github.com/repos/borgbackup/borg/releases/latest \
| grep browser_download_url \
| grep linux64 \
| cut -d '"' -f 4 | head -n 1 \
| wget -qi - -O borg-linux64
sudo cp borg-linux64 /usr/local/bin/borg
sudo chown root:root /usr/local/bin/borg
sudo chmod 755 /usr/local/bin/borg
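A quick version check confirms the binary was installed correctly:
# Verify the standalone borg binary works
borg --version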
Now we will need to construct the Rclone mount for the backup functionality. Fortunately, the service unit file required for the mount was already written by Gideon, so we’ll just have to copy the service file to the remote machine and start and enable the service:
sudo mkdir -p /mnt/gdrive
sudo chown -R minecraft:minecraft /mnt/gdrive
sudo systemctl daemon-reload
sudo systemctl enable gdrive
sudo systemctl start gdrive
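Before extracting anything, it’s worth confirming that the Rclone mount actually came up (a sanity check; the backup path is the one used by the extraction step below):
# Verify the Google Drive mount is active and the backup repo is visible
systemctl status gdrive
ls /mnt/gdrive/backup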
We will now extract the latest backup from the Borg backup repository. This will take a while, so sit back:
cd /
export BORG_PASSPHRASE='jyPr5QToT&Wca6hfrvtZA5'
export LAST_BORG_BACKUP=$(borg list --last 1 /mnt/gdrive/backup | cut -d ' ' -f1)
borg extract /mnt/gdrive/backup::$LAST_BORG_BACKUP
Now that we’re done restoring the world, as well as the other necessary plugins and configuration files, we’ll install the Paper Minecraft server JAR directly from the Paper API. We can download the file into the respective minecraft directory:
wget https://papermc.io/api/v1/paper/1.15.2/latest/download -O /opt/minecraft/paper.jar
sudo chown minecraft:minecraft /opt/minecraft/paper.jar
Next, we’ll render the Minecraft map using Overviewer and also provide the proper Caddyfile directives to Caddy, in order to serve the HTML map generated by Overviewer when done:
overviewer.py /opt/minecraft/world /opt/minecraft/map
cat <<EOF | sudo tee -a /etc/caddy/Caddyfile
mc.acmucsd.com {
root * /opt/minecraft/map
file_server
}
EOF
sudo systemctl restart caddy
We’ll add the crontab entry to run the nightly maintenance script for the Minecraft server:
cat <<EOF | crontab -
0 5 * * * /opt/minecraft/nightly.sh
EOF
And finally, after many commands, you may now start the Minecraft server:
sudo systemctl start minecraft
The Ansible role mc covers the installation process of the Minecraft server for a brand new Ubuntu instance. I recommend you run the playbook, though, as this server requires additional roles.
Run the mc.yml playbook to configure the Minecraft server:
ansible-playbook -i production mc.yml
Up next, we will configure the ACM AI API.
ACM AI has a suborg-specific API used to fulfill their own needs for member outreach, such as weekly newsletters and more. Primarily, it’s a Docker image served behind an HTTPS reverse proxy. This setup is very similar to the Password Manager’s.
First, configure the remote machine so that you can connect to it using your SSH keys. Either create a new key or provide your public key. If developing using a Vagrant box, use this command to import the acm-ai-api settings into your SSH configuration:
vagrant ssh-config --host acm-ai-api >> $HOME/.ssh/config
First, we’ll want to install Docker by following the installation guide for Ubuntu:
sudo apt-get update
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
Similarly for Docker Compose, using this installation guide:
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
Fortunately, we are provided a Docker image on Docker Hub that tracks the latest version, which allows for an easy run:
sudo docker pull acmucsd/acm-ai-api:latest
sudo docker run -d --name acm-ai-api -p 9000:9000 acmucsd/acm-ai-api:latest
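To check that the container is running and the port responds locally before putting Caddy in front of it (the exact response depends on the API, so this is just a reachability check):
# Confirm the container is up and port 9000 answers locally
sudo docker ps --filter name=acm-ai-api
curl -I http://localhost:9000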
We’ll also need a Caddy instance in front of this Docker container to provide SSL. We’ll install Caddy first:
echo "deb [trusted=yes] https://apt.fury.io/caddy/ /" \
| sudo tee -a /etc/apt/sources.list.d/caddy-fury.list
sudo apt-get update
sudo apt-get install -y caddy
We’ll save the Caddyfile and then run the service:
cat <<EOF | sudo tee -a /etc/caddy/Caddyfile
api.ai.acmucsd.com {
reverse_proxy 127.0.0.1:9000
}
EOF
sudo systemctl restart caddy
Remember to configure the remote machine so that you can connect to it using your SSH keys. Either create a new key or provide your public key. If developing using a Vagrant box, use this command to import the live SSH settings into your configuration:
vagrant ssh-config --host live >> $HOME/.ssh/config
Most, if not all, command blocks following this line are run by root, so become root before continuing.
We will begin by updating the Ubuntu instance entirely:
apt update && apt upgrade -y
Afterwards, we’ll install BigBlueButton using their install script. We’ll also remove the demo. This will take a while:
wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh \
| bash -s -- -v xenial-220 -s live.acmucsd.com -e [email protected] -g
apt-get purge bbb-demo
Now that BBB is installed, we’ll need to update the welcome message for ACM Live streams. Add these properties to the BBB properties file:
cat <<EOF >> /usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties
defaultWelcomeMessage=Welcome to <b>%%CONFNAME%%</b>!<br><br>If you haven't already, please sign in at ACM's <a href="https://members.acmucsd.com/"><u>Membership Portal</u></a>.
defaultWelcomeMessageFooter=Thank you for using <strong>ACM Live</strong>.
EOF
We will also need to configure HTTPS, along with an HTTP-to-HTTPS redirect, in the Nginx configuration file provided by BBB:
# Change the existing listeners to HTTPS
sudo sed -i 's/80/443 ssl/g' /etc/nginx/sites-available/bigbluebutton
# Prepend an HTTP-to-HTTPS redirect server block to the file
cat <<'EOF' > /tmp/bbb-redirect.conf
server {
  listen 80;
  listen [::]:80;
  server_name live.acmucsd.com;
  return 301 https://$server_name$request_uri;
}
EOF
sudo sh -c 'cat /tmp/bbb-redirect.conf /etc/nginx/sites-available/bigbluebutton > /tmp/bigbluebutton.new && mv /tmp/bigbluebutton.new /etc/nginx/sites-available/bigbluebutton'
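After editing the Nginx configuration, validate it and reload Nginx so the redirect takes effect:
# Validate the updated configuration and apply it
sudo nginx -t
sudo systemctl reload nginx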
Backup functionality for ACM Live also needs to be set up, which again requires rclone, as well as another Google service account credentials JSON. Follow the steps from the “Minecraft” section to obtain another set of credentials.
Now that BBB is fully configured, we can move on to its frontend, Greenlight. We’ll begin by creating an admin account; you may set the user’s password using an environment variable:
export PASSWORD= # insert password for admin account here
docker exec greenlight-v2 bundle exec rake user:create["ACM Live","[email protected]","$PASSWORD","admin"]
Once the admin account is created, log in with it at live.acmucsd.com and head to the admin panel.
There you should:
- Change “regular color” to #fc3b7d
- Set the branding image to the file located at files/live/acmlive.png
- Set the registration policy to “Join by Invitation”
You may now invite users at your leisure using the admin account, or create accounts using the docker exec command above. Using the administrator panel is recommended, however.