**Table of Contents** *generated with DocToc*

- [Configuration Management of librenet.gr](#configuration-management-of-librenetgr)
- [Testing](#testing)
- [Deploying new Diaspora versions](#deploying-new-diaspora-versions)
- [Vagrant](#vagrant)
# Configuration Management of librenet.gr

This repo contains various Ansible scripts that help with maintenance of the [librenet.gr](https://librenet.gr) Diaspora pod.
We try to be as abstract as possible, but this mainly targets our own infrastructure.

Distributions supported:

- CentOS 7
For testing purposes, a Vagrantfile is provided which emulates the deployment on a VM. Check the `vagrant/` directory.
## Check mode

It is always a good idea to check before you deploy. Just add the `--check` flag when running a playbook. You can also add `--diff` to see the resulting changes.

More info: http://docs.ansible.com/playbooks_checkmode.html
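For example, a dry run of the main playbook that also prints what would change:

```
ansible-playbook -i hosts deploy.yml --check --diff
```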
## Vault

There is an encrypted private vars file which can be decrypted with the `vault-passwd.txt.gpg` file. Read below on how to use it.

Some sensitive files like `private.yml` are encrypted with ansible-vault. `vault-passwd.txt.gpg` is automatically decrypted during a playbook run. That saves us from running `--vault-password-file vault-passwd.txt` each time. The password is decrypted on the fly, without the need to decrypt it to a plain-text file. It is needed when no tags at all are given, or when one of the following is used: `private`, `diaspora`, `config`.

Read more in Using PGP To Encrypt The Ansible Vault and How do I store private data in git for Ansible?.

If you need to add more variables to the encrypted file, you can edit it with:

```
ansible-vault edit roles/diaspora/vars/private.yml
```

You can't point this command at the file where the password is stored, so a manual copy-paste is needed. Once done, don't forget to push back any changes.

Note: Every time you edit the encrypted file, it appears changed even if you don't make any changes.
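The on-the-fly decryption is typically wired up by pointing `--vault-password-file` at an executable script rather than a plain-text file; Ansible runs the script and uses its stdout as the vault password. A minimal sketch of such a wrapper (the filename `vault-passwd.sh` is hypothetical, not part of this repo):

```shell
# Write a small wrapper script; Ansible treats an executable
# vault-password-file as a program whose stdout is the password.
cat > vault-passwd.sh <<'EOF'
#!/bin/sh
# Decrypt the GPG-encrypted vault password and print it to stdout.
exec gpg --batch --quiet --decrypt vault-passwd.txt.gpg
EOF
chmod +x vault-passwd.sh
```

It could then be used as `ansible-playbook -i hosts deploy.yml --vault-password-file vault-passwd.sh`.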
## SSH port

For extra security we do not use port 22. There are two ways to provide our custom port:

- When running the playbooks, include `-e "ansible_ssh_port=$SSH_PORT"`.
- Copy `hosts.example` to `hosts` and edit it like `hostname:port`.

The hosts file is excluded from git so that you don't give away the port.
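A minimal sketch of what the resulting `hosts` inventory could look like (the group name and port number here are made up for illustration):

```
[diaspora]
librenet.gr:2200
```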
## Sudo

The tasks are written in a way where sudo is always invoked. When running a playbook, you must ensure that your remote user has sudo privileges to run any command. If it is a passwordless sudo user, you don't need to pass any flags. If the remote user requires a password to run the sudo commands, add `--ask-sudo-pass` when calling a playbook.
## Playbooks

Inside the `playbooks/` directory, there are various playbooks for everyday administrative usage. The table below summarizes their purposes.
| playbook | description |
|---|---|
| `deploy.yml` | the main playbook which deploys diaspora with its various components |
| `check_updates.yml` | checks for system updates without updating |
| `fetch_logs.yml` | fetches logs for inspection |
| `maintenance.yml` | calls `check_updates.yml`, `system_update.yml` and `services_restart.yml` in that order |
| `services_restart.yml` | restarts various services |
| `services_status.yml` | checks status of various services |
| `system_update.yml` | updates the system by running `yum update` |
### deploy.yml

The main playbook that is responsible for all the roles. It deploys a diaspora pod with PostgreSQL on a CentOS 7 server.

Run with:

```
ansible-playbook -i hosts deploy.yml --vault-password-file vault-passwd.txt
```
Supported tags:

- `diaspora`
- `config`
- `unicorn`
- `sidekiq`
- `routes`
- `private`
- `systemd`
- `nginx`
- `yum`
- `pkg`
- `epel`
- `database`
- `backup`
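To limit a run to specific roles, tags can be combined; for example (the tag choice here is only illustrative):

```
ansible-playbook -i hosts deploy.yml --vault-password-file vault-passwd.txt -t nginx,systemd
```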
### check_updates.yml

Check if there are any software updates. It does not update the system. After running this playbook you might want to run `system_update.yml` to fully update your system.

Run with:

```
ansible-playbook -i hosts playbooks/check_updates.yml
```
### fetch_logs.yml

It fetches logs for inspection. Currently included: sidekiq, unicorn.

If you want to add more logs to fetch, edit `playbooks/fetch_logs.yml` and add another entry below the `with_items` option.

By default, logs are stored in a `downloads/` dir at the root of this repo, which is in `.gitignore`. Source and destination are defined by variables. If you want to store them in a different location, you have to edit those variables.

Run with:

```
ansible-playbook -i hosts playbooks/fetch_logs.yml
```
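The `with_items` entry mentioned above might look roughly like this sketch, using Ansible's `fetch` module (the variable names `src_dir`/`dest_dir` and the log filenames are assumptions, not copied from the repo):

```
- name: fetch logs for inspection
  fetch:
    src: "{{ src_dir }}/{{ item }}"
    dest: "{{ dest_dir }}"
  with_items:
    - unicorn.log
    - sidekiq.log
    # add new log files here
```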
### services_restart.yml

Restart the main services.

Supported tags:

- `unicorn`
- `sidekiq`
- `mariadb`
- `diaspora`
- `ssh`

For example:

```
ansible-playbook -i hosts playbooks/services_restart.yml --tags=diaspora
```

will restart the `unicorn` and `sidekiq` services, as those two tasks have the `diaspora` tag.
### services_status.yml

Check the status of various services.

Supported services: `nginx`, `unicorn`, `sidekiq`, `postgres`, `redis`, `prosody`, `camo`.

Supported tags:

- `nginx`
- `diaspora`
- `unicorn`
- `sidekiq`
- `redis`
- `postgres`
- `database`
- `prosody`
- `camo`

Run with:

```
ansible-playbook -i hosts playbooks/services_status.yml
```
### system_update.yml

Perform a system update.

Run with:

```
ansible-playbook -i hosts playbooks/system_update.yml
```

You will be prompted whether to continue or not. Possible answers are `yes` and `no`, with `no` being the default. To be sure, first run `check_updates.yml` to see what updates are available.
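Such a confirmation prompt is usually implemented with Ansible's `vars_prompt`; a sketch of how it might look (the variable and task names are assumptions, not copied from this repo):

```
- hosts: all
  become: yes
  vars_prompt:
    - name: confirm
      prompt: "Proceed with the system update? (yes/no)"
      default: "no"
      private: no
  tasks:
    - name: abort unless confirmed
      fail:
        msg: "Aborted by user"
      when: confirm != "yes"

    - name: update all packages
      yum:
        name: '*'
        state: latest
```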
### maintenance.yml

It currently has the following playbooks included:

- `check_updates.yml`
- `system_update.yml`
- `services_restart.yml`

Run with:

```
ansible-playbook -i hosts playbooks/maintenance.yml
```
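Chaining playbooks like this is typically done with playbook-level `include` statements; a sketch of what `maintenance.yml` might contain (an assumption based on the described behaviour, not copied from the repo):

```
- include: check_updates.yml
- include: system_update.yml
- include: services_restart.yml
```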
Run all roles:

```
ansible-playbook -i hosts deploy.yml --vault-password-file vault-passwd.txt
```

Configuration files are located in `roles/nginx/templates/`. Once changed, run:

```
ansible-playbook -i hosts deploy.yml --tags=nginx
```

If you changed something in `diaspora.yml`, run the playbook with:

```
ansible-playbook -i hosts deploy.yml --vault-password-file vault-passwd.txt -t config
```

If you change/add anything under `app/assets`, run the playbook with:

```
ansible-playbook -i hosts deploy.yml -t assets
```

It will fetch the new code and run the rake task to precompile the assets.
# Testing

There is some minimal testing done. You can run `./bin/ansible-test` locally and it will test the syntax of `deploy.yml` and all its tasks. Travis is set up to also test the syntax.
# Deploying new Diaspora versions

In order to deploy a new version, our Diaspora fork should first be updated. Locally, clone our fork and set a remote to Diaspora upstream:

```
git clone git@github.com:libreops/librenet-ansible.git
git remote add upstream https://github.com/diaspora/diaspora.git
```

Diaspora's `master` branch always points to the latest stable release, so we'll use that to update our fork.

Our base branch, which contains all custom changes, is `librenet`. The workflow to update `librenet` with the latest `master` from upstream is:
1. Checkout the `librenet` branch: `git checkout librenet`
2. Fetch the upstream master branch: `git fetch upstream master`
3. Merge `upstream/master` in `librenet`: `git merge upstream/master`
4. Resolve any conflicts, usually in `Gemfile.lock`, in which we have added the new_relic gem.
5. Add unstaged files: `git add file1 file2`
6. Run `git commit` and let Git do its magic.
7. Push back to our fork: `git push origin librenet`
Now it's time to read Diaspora's changelog and check whether any yaml files need changes or any tasks need to run manually.

To update librenet.gr with the new version, run the playbook:

```
ansible-playbook deploy.yml -t diaspora -l librenet.gr
```
# Vagrant

For testing purposes, a Vagrantfile is provided, so that a staging environment can be set up.

You must have the following software installed on your local machine:

Note: Before running vagrant you might want to change some of its configuration. See configuration for available settings.
```
# Clone this repo
git clone https://github.com/librenet/ansible diaspora_ansible

# Change to the root directory
cd diaspora_ansible/

# Edit the Vagrantfile to match your setup and start vagrant
vagrant up
```

The first run will take some time, as it will first download the base CentOS 7 image and then run ansible.
For consecutive runs, run:

```
vagrant provision
```

See useful links for more vagrant commands.

The current Vagrantfile assumes certain things. You might want to change them to match your own setup.
- `ansible.playbook` (string): path to the playbook to run (relative/absolute).
- `ansible.vault_password_file` (string): path to the `vault-passwd.txt` (relative/absolute).
- `ansible.extra_vars = { sitename: "staging.librenet.local" }` (dictionary): Diaspora sets the FQDN in the database via `diaspora.yml`. By setting this extra variable in the Vagrantfile, you can bypass the one set in `roles/diaspora/defaults/main.yml` and `roles/nginx/vars/main.yml`, so that you have a separate FQDN for staging. After successful provisioning, you can set in your `/etc/hosts`:

  ```
  192.168.33.10 staging.librenet.local
  ```
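Put together, the provisioner part of a Vagrantfile might look like this sketch, based on Vagrant's Ansible provisioner options (the box name, IP and paths are illustrative, not copied from this repo):

```
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.33.10"

  # Run the deploy playbook against the VM on `vagrant up` / `vagrant provision`.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "deploy.yml"
    ansible.vault_password_file = "vault-passwd.txt"
    ansible.extra_vars = { sitename: "staging.librenet.local" }
  end
end
```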
Point your web browser to `staging.librenet.local` and test your deployment.
Some useful vagrant links:
- https://docs.vagrantup.com/v2/provisioning/ansible.html
- https://docs.vagrantup.com/v2/cli/index.html
Sometimes things don't go as expected.

Captcha relies on ImageMagick, which some time ago had a serious CVE (CVE-2016-3714). Updates might change the policies defined in `/etc/ImageMagick/policy.xml`, so we patch it with our own.

Fix it with:

```
ansible-playbook deploy.yml -t imagemagick -l librenet.gr
```

You can follow the Diaspora issue.