SwarmCI, in its current stage, is a CI/CD extension. You can extend your existing build system (Jenkins, Bamboo, TeamCity) with parallel, distributed, isolated build tasks by leveraging Docker Swarm.
This project was inspired by problems I've faced with conventional CI/CD platforms like Jenkins, Bamboo, and TeamCity:
- Agents were Java or other applications running on a VM with no isolation between the build and the agent, sometimes causing hard-to-reproduce issues.
- Agent machines needed to be customized with differing capabilities (SDKs, versions, libraries, resources like CPU/memory, etc.). This is complex, usually requiring your ops team to set up and maintain them.
- Binding builds to specific agents which have the required capabilities is wasteful, as you must wait for an idle agent with your requirements before your build would run.
- An agent is no longer "untouched" after the first build it runs. State changes between builds can cause unexpected failures or, worse, false successes (unless the machine is reprovisioned after every build, which is not easy or cheap to do).
- Build agents require licensing which can be very expensive (in addition to the hardware).
- Build agents are often underutilized: either idle, or running builds that have many steps and can take a while without fully utilizing all CPU/memory/IO/network resources.
SwarmCI is a CI extension as well as (in the future) a stand-alone CI/CD platform. You can use SwarmCI to extend an existing CI system (Bamboo, TeamCity, Jenkins, etc.) with a few steps:
- Set up a Docker Swarm.
- Convert existing build tasks, stages, and jobs to a single `.swarmci` file.
- Configure a single task in your build to run a single command that delegates work to your Docker Swarm:

```shell
python -m swarmci
```
A `.swarmci` file consists of several layers.

- **Stages** are run sequentially (identified with unique names). Subsequent stages only start if all jobs from a prior stage complete successfully.
- **Jobs** run in parallel (identified with unique names). Each job consists of one or more tasks and various bits of metadata.
- **Tasks** run sequentially within a job on a common container.
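Concretely, the nesting can be sketched as follows (a minimal skeleton; all names are placeholders, not part of the format):

```yaml
stages:
  - stage_name:            # stages run one after another
    - name: job_name       # jobs within a stage run in parallel
      image: some-image
      tasks:               # tasks within a job run sequentially
        - first command
        - second command
```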
Each job consists of several pieces of information:
- `image(s)` (required): the image to be used for all tasks within this job. This image should be on an available registry for the swarm to pull from (or be built using the `build` task). It should not have an entrypoint, as we'll want to execute an infinite-sleep shell command so that it does not exit, because all tasks will run on this container; SwarmCI expects to be able to launch the container, leave it running, and exec tasks on the running container. This can be either a string or a list. When in list form, this job will be converted to a job matrix.
- `env` (optional): environment variables to be made available for `tasks`, `after_failure`, and `finally_task`. This can be a dictionary or a list of dictionaries. When in list form, this job will be converted to a job matrix.
- `build` (optional): similar to Docker Compose build. The SwarmCI agent can build and run the Docker image locally before running tasks. The name of the built image will be that of the `image` key within the job.
- `task(s)` (required): this can be either a string or a list. If any task in the list fails, subsequent tasks will not be run; however, `after_failure` and `finally_task` will run if defined.
- `after_failure` (optional): this runs if any task fails. This can be either a string or a list.
- `finally_task` (optional): this runs regardless of the result of prior tasks. This can be either a string or a list.
Full Example:
```yaml
stages:
  - my_stage:
    - name: my_job
      image: python:alpine
      env:
        say_something: hello from
      tasks:
        - /bin/sh -c 'echo "$say_something $HOSTNAME"'
        - /bin/sh -c 'echo "second task within my_job in $HOSTNAME"'
      after_failure: /bin/echo "this runs if any script task fails"
      finally_task: /bin/echo "this runs regardless of the result of the script tasks"
    - name: another_job
      image: python:alpine
      tasks:
        - /bin/sh -c 'echo "another_job says hello from $HOSTNAME"'
  - second_stage:
    - name: default_job_in_second_stage
      image: python:alpine
      tasks:
        - /bin/sh -c 'echo "look ma, second stage job running in $HOSTNAME"'
```
When a job is converted to a job matrix, you get all possible combinations of `image` and `env` variables. Here is an example job matrix that expands to 6 individual (3 * 2) jobs:
```yaml
bar-job:
  image:
    - my-ci-python:2.7
    - my-ci-python:3.2
    - my-ci-python:3.5
  env:
    - db: mysql
      foo: v1
    - db: mysql
      foo: v2
  # note: all tasks will run for each expanded job instance
```
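The expansion is a Cartesian product of the `image` list and the `env` list. A minimal sketch of the idea in Python (an illustration only, not SwarmCI's actual implementation; the job-naming scheme here is hypothetical):

```python
from itertools import product

# The matrix axes from the bar-job example above.
images = ["my-ci-python:2.7", "my-ci-python:3.2", "my-ci-python:3.5"]
envs = [
    {"db": "mysql", "foo": "v1"},
    {"db": "mysql", "foo": "v2"},
]

# Every (image, env) pair becomes its own job instance,
# and all tasks run once per instance.
expanded_jobs = [
    {"name": f"bar-job [{image}] [{env['foo']}]", "image": image, "env": env}
    for image, env in product(images, envs)
]

print(len(expanded_jobs))  # 3 images * 2 env sets = 6 jobs
```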
To run the demo:

```shell
vagrant up
vagrant ssh manager
git clone https://github.com/ghostsquad/swarmci.git
cd swarmci
python3 setup.py install --force
python3 -m swarmci --demo
```
To run the tests:

```shell
python3.5 runtox.py -e linting,py35,py36
```

Or, using Docker:

```shell
docker build -t swarmci .
docker build -f Dockerfile.test -t swarmci:test .
docker run -it swarmci:test
```