Cloud-Control becomes Mission_Control!
  * New install menu.
  * Rundeck can now be installed by Mission_Control.
  * Docker support. (docker-control project)
  * A chef-server container can now be installed. Accessible via HTTPS/4443.
  * All the variables can be edited in the "vars" file.
  * See CHANGELOG for more info.
c-buisson committed Nov 13, 2014
1 parent a544ace commit 8e71b16
Showing 33 changed files with 867 additions and 283 deletions.
38 changes: 31 additions & 7 deletions CHANGELOG
@@ -1,6 +1,31 @@
CHANGELOG
=========

2.0: 2014-11
---------------
Cloud-Control becomes Mission_Control!

New Features:
- New install menu.
- Rundeck can now be installed by Mission_Control.
- Docker support. (docker-control project)
- A chef-server container can now be installed. Accessible via HTTPS/4443.
- All the variables can be edited in the "vars" file.

Updates:
- The Rundeck project "cloud-control" got renamed to "kvm-control".
- Added new DB field "chef_installed".
- Ditched "file" backend.
- Floating ips for KVM guests can now be added separately.
- Every project will be separated from the others in #{data_folder}.
- Smarter detection of whether Rundeck is running while installing Mission_Control.
- All gems will be installed with `bundle install`.
- MySQL database name can now be changed via a variable.
- The KVM table name can now be changed via a variable.
- Installer won't re-download a source image that has already been downloaded.
- The ssh_keys variable can now be edited in the "vars" file.
- The userdata templates can now be generated with more than one ssh-key.

1.4: 2014-10-11
---------------
Updates:
@@ -45,19 +70,18 @@ New feature:
- Upload "ready to use" Rundeck Jobs automatically.

Updates:
- Ditched RVM, using ruby package instead.
- Rename "rundeck_scripts" directory to "lib".
- Ditched RVM, using ruby1.9 package instead.
- Renamed "rundeck_scripts" directory to "lib".
- The setup script will now create /srv/cloud-control and generate/copy all the files and directories needed.
- Moved all the Ruby script to actual Rundeck Jobs.
- Moved all the Ruby scripts to actual Rundeck Jobs.
- Added "data_folder" variable to the setup script.
- Removed unneeded HOME variable from ENV file.
- Removed unneeded curb gem from Gemfile.
- get_images.rb is now using the curb gem instead of a wget system call.
- RUNDECK_SCRIPTS variable got renamed to LIB.
- Generate Rundeck Jobs XMLs based on user environment.
- build-essential added to packages list.
- Add rundeck user to libvirts and kvm Unix groups.
- The setup script will now fetch the current Trusty 14.04 LTS version from: https://cloud-images.ubuntu.com
- Installer will add the rundeck user to libvirtd and kvm Unix groups.
- The install script will now wget the current cloud-image of Ubuntu Trusty 14.04 LTS from: https://cloud-images.ubuntu.com
- Added .gitignore file.
- Added CHANGELOG file.

@@ -68,7 +92,7 @@ New features:
- New vnc_port field added to backend DBs.

Updates:
- bzr package no needed anymore.
- bzr package not needed anymore.
- cloud-utils package added.
- Removed CLOUD_UTILS variable.
- source.include? "origin" to "img".
79 changes: 47 additions & 32 deletions README.md
@@ -1,48 +1,43 @@
Description
-----------
#Mission_Control

Cloud-Control is a Rundeck/KVM project that lets you create/start/shutdown/destroy virtual machines. You can choose to start a new virtual machine with an ISO or an Ubuntu Cloud image.
##Description

Requirements
============
Mission_Control is a set of Rundeck projects that lets you `create / start / shutdown / destroy` virtual machines and containers.

Rundeck should be installed.
##Installation

The hypervisor should have `Virtual Technology` enabled. You can test this prior the installation by running:
You will need to execute the `install` script to install all the required packages and gems, and lay down the configuration files.

ubuntu@cbuisson:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
#Anything but 0 is good.
./install

And after the installation:
**Variables:** You can edit the `vars` file to reflect your current environment or what you want. You can paste your SSH public keys here.

ubuntu@cbuisson:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
The install process will display a menu where you can choose to install any feature that you want.
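The README never lists the file's contents; as a minimal sketch of what a `vars` file might look like (all names besides `data_folder` and `ssh_keys`, which the changelog mentions, are hypothetical):

```shell
# Hypothetical sketch of a vars file -- the variable names are
# illustrative; only data_folder and ssh_keys appear in the changelog.
data_folder="/srv/mission_control"    # per-project data lives under here
ssh_keys="ssh-rsa AAAA... user@host"  # public keys injected into userdata templates
start_ip="192.168.0.1"                # first floating IP handed to guests
end_ip="192.168.0.100"                # last floating IP
gateway_ip="192.168.0.254"            # gateway the guests route out through
```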

##Environment

Installation
============
Mission_Control has been developed for **Ubuntu Trusty 14.04 LTS**.

You will need to execute the `install` script to install all the required packages and gems.
Cloud-Control is using RVM.
#*kvm-control*

./install
You can choose to start a new virtual machine with an ISO or an Ubuntu Cloud image.

Environment
-----------
When an Ubuntu Cloud image is used to launch a new instance, the VM will get a static IP. ISOs, on the other hand, will get a DHCP IP.

Cloud-Control has been developed for **Ubuntu Trusty 14.04 LTS**.
###Network types:
*Netmask*: Mission_Control is set up to work with Class C IPs, therefore the netmask is hard-coded to: 255.255.255.0

**Floating IPs**:
####Floating IPs

You will need to edit the `install` file and add:
You will need to edit the `vars` file and add:

- A backend type (*flat file, MySQL or PostgreSQL*)
- The interface out (**Must be br0** if using floating static IPs!)
- A backend type (*MySQL or PostgreSQL*)
- Start IP (*e.g. 192.168.0.1*)
- End IP (*e.g. 192.168.0.100*)
- Gateway IP (*e.g. 192.168.0.254*)

Cloud-Control will assign floating IPs to the KVM guests. Those floating IPs should be able to reach the hypervisor's IP and the gateway. You need to specify a floating IP range for the guests and a gateway to route out.
Mission_Control will assign floating IPs to the KVM guests. Those floating IPs should be able to reach the hypervisor's IP and the gateway. You need to specify a floating IP range for the guests and a gateway to route out.
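The README doesn't show how an IP is picked from that range; here is a minimal sketch of one way to do it, assuming a hypothetical backend dump with one used IP per line (the hard-coded /24 netmask above makes last-octet arithmetic sufficient):

```shell
#!/bin/bash
# Sketch: pick the next free floating IP in a Class C range.
# The newline-separated "used" list is a hypothetical backend format,
# not Mission_Control's actual schema.
next_free_ip() {
  local start_ip=$1 end_ip=$2 used=$3
  local prefix=${start_ip%.*}
  local first=${start_ip##*.} last=${end_ip##*.}
  for ((i=first; i<=last; i++)); do
    # -x: match the whole line, so 192.168.0.1 does not match 192.168.0.10
    if ! grep -qx "$prefix.$i" <<< "$used"; then
      echo "$prefix.$i"
      return 0
    fi
  done
  return 1   # range exhausted
}

next_free_ip 192.168.0.1 192.168.0.100 $'192.168.0.1\n192.168.0.2'  # -> 192.168.0.3
```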

**NAT IPs**:

@@ -54,19 +49,39 @@ You can choose to assign a floating IP or a NAT IP when launching a new guest in

Deleting the KVM guest will release both the IP (floating or NAT) and the VNC port.

Templates
---------
#*chef_server-control*

- **ssh-key**: You can add your public key to *templates/ssh_key*.
- **netmask**: Cloud-Control is setup to work on with Class C IPs, therefore the netmask is hard coded to: 255.255.255.0
This is a Docker container that comes with Chef Server 11 already installed. Mission_Control will download and launch this container if you choose to install it. It will also grab the Knife admin keys and configure the Rundeck user to use Knife.

When an Ubuntu Cloud image is used to launch a new instance, the vm will get a static IP.
#*docker-control*

ISO's on the other hand, will get a DHCP ip.
You can manage Docker containers and images with this project.

Assumptions
-----------

- VMs will reach the internet trough the hypervisor via `br0`, edit the KVM guests' XML template to reflect your environment.
###kvm-control:

- VMs will reach the internet through the hypervisor via `br0` if a floating IP is selected. When using NAT, `virbr0` will be used.

- If a guest is launched with the NAT option, `virbr0` (192.168.122.1) will be used to route out.

###chef_server-control:

- The Docker Chef_Server will be accessible via HTTPS:4443

Requirements
-----------

###KVM

The hypervisor should have `Virtualization Technology` enabled. You can test this prior to the installation by running:

ubuntu@cbuisson:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
#Anything but 0 is good.

And after the installation:

ubuntu@cbuisson:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
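The same `vmx`/`svm` check can be wrapped so it also works against a saved cpuinfo dump; a sketch (the wrapper function and its messages are not part of Mission_Control):

```shell
#!/bin/bash
# Sketch: count vmx/svm flags in a cpuinfo file (defaults to the live
# /proc/cpuinfo). cpuinfo has one "flags" line per logical CPU, so the
# match count approximates the logical CPU count.
has_virt() {
  local cpuinfo=${1:-/proc/cpuinfo}
  local count
  count=$(grep -Ec '(vmx|svm)' "$cpuinfo")
  if [ "$count" -gt 0 ]; then
    echo "virtualization supported ($count logical CPUs)"
  else
    echo "no vmx/svm flags found"
    return 1
  fi
}
```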
86 changes: 86 additions & 0 deletions docker/rundeck_jobs.xml
@@ -0,0 +1,86 @@
<joblist>
<job>
<id>1804c8af-d5b1-42d8-be98-6107e8c1506d</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<exec>docker rmi $(docker images -q)</exec>
</command>
</sequence>
<description>Delete all Docker images.</description>
<name>Delete all images</name>
<context>
<project>docker-control</project>
</context>
<uuid>1804c8af-d5b1-42d8-be98-6107e8c1506d</uuid>
</job>
<job>
<id>862b23c4-b269-49d7-b47b-784102a83997</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<exec>docker rm -f $(docker ps -q)</exec>
</command>
</sequence>
<description>Delete all stopped containers!</description>
<name>Delete all non-running containers</name>
<context>
<project>docker-control</project>
</context>
<uuid>862b23c4-b269-49d7-b47b-784102a83997</uuid>
</job>
<job>
<id>cd820882-15bf-4f30-b7b3-ec4e60dd548a</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash
docker ps -a]]></script>
</command>
</sequence>
<description>List them all!</description>
<name>List all containers</name>
<context>
<project>docker-control</project>
</context>
<uuid>cd820882-15bf-4f30-b7b3-ec4e60dd548a</uuid>
</job>
<job>
<id>bbf54c33-fc96-48eb-a61e-f887917e6abe</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash
docker ps]]></script>
</command>
</sequence>
<description>List only the running containers</description>
<name>List all running containers</name>
<context>
<project>docker-control</project>
</context>
<uuid>bbf54c33-fc96-48eb-a61e-f887917e6abe</uuid>
</job>
<job>
<id>0ad015fb-9883-4f11-850e-effd25517d37</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash
docker images]]></script>
</command>
</sequence>
<description>List all local images</description>
<name>List all images</name>
<context>
<project>docker-control</project>
</context>
<uuid>0ad015fb-9883-4f11-850e-effd25517d37</uuid>
</job>
</joblist>
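For a quick overview of what a joblist file like the one above defines, the job names can be pulled out with a line-oriented extraction; a sketch (not part of the project, and a real XML parser would be more robust than `sed` on anything less regular than these files):

```shell
#!/bin/bash
# Sketch: print the <name> of every job in a Rundeck joblist XML file.
# Assumes one <name>...</name> element per line, as in the file above.
list_job_names() {
  sed -n 's:.*<name>\(.*\)</name>.*:\1:p' "$1"
}
```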
96 changes: 96 additions & 0 deletions docker/template/rundeck_jobs-chef.xml.erb
@@ -0,0 +1,96 @@
<joblist>
<job>
<id>a37ea70a-b691-4fad-a78e-f049000850eb</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash

image_id=`docker images |grep cbuisson/chef-server |awk '{print $3}'`
docker rmi -f $image_id

echo "cbuisson/chef-server image has been deleted!"]]></script>
</command>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash

container_id=`docker ps -a |grep <%= CHEF_SERVER_CONTAINER_NAME %> |awk '{print $1}'`
docker rm -f $container_id

echo "<%= CHEF_SERVER_CONTAINER_NAME %> container has been deleted!"]]></script>
</command>
</sequence>
<description>Find <%= CHEF_SERVER_CONTAINER_NAME %> image and container ID then delete all the files.</description>
<name>Remove <%= CHEF_SERVER_CONTAINER_NAME %> container</name>
<context>
<project>chef_server-control</project>
</context>
<uuid>a37ea70a-b691-4fad-a78e-f049000850eb</uuid>
</job>
<job>
<id>a82e287f-e142-40d8-ad29-f065ac2e2893</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash

container_id=`docker ps -a |grep <%= CHEF_SERVER_CONTAINER_NAME %> |awk '{print $1}'`
docker start $container_id
sleep 5
echo "<%= CHEF_SERVER_CONTAINER_NAME %> container has been started!"

container_ip=`docker inspect $container_id | grep IPAddress | cut -d '"' -f 4`
full_line="chef_server_url 'https://$container_ip:4443'"
old_ip=`cat /var/lib/rundeck/.chef/knife.rb |grep https`

if [[ $full_line != $old_ip ]];then
cat > /var/lib/rundeck/.chef/knife.rb << EOL
log_level :info
log_location STDOUT
cache_type 'BasicFile'
node_name 'admin'
client_key '~/.chef/admin.pem'
validation_client_name 'chef-validator'
validation_key '/var/lib/rundeck/.chef/chef-validator.pem'
chef_server_url 'https://$container_ip:4443'
EOL

echo -e "Updated rundeck knife.rb with new container IP!\n"
echo -e "The IP has changed for that container!\nPlease run the command below and update your ~/.chef/knife.rb to match the current Chef-Server IP!\n"
echo -e "\e[1;36m sudo mission_control/scripts/update_chef_ip.rb\e[0m"
fi
]]></script>
</command>
</sequence>
<description>Start container: <%= CHEF_SERVER_CONTAINER_NAME %></description>
<name>Start <%= CHEF_SERVER_CONTAINER_NAME %></name>
<context>
<project>chef_server-control</project>
</context>
<uuid>a82e287f-e142-40d8-ad29-f065ac2e2893</uuid>
</job>
<job>
<id>e6246fc8-9101-413c-a8e2-c2300b01aca8</id>
<loglevel>INFO</loglevel>
<sequence keepgoing='false' strategy='node-first'>
<command>
<scriptargs />
<script><![CDATA[#!/bin/bash

container_id=`docker ps -a |grep <%= CHEF_SERVER_CONTAINER_NAME %> |awk '{print $1}'`
docker stop $container_id

echo "<%= CHEF_SERVER_CONTAINER_NAME %> container has been stopped!"]]></script>
</command>
</sequence>
<description>Shutdown container: <%= CHEF_SERVER_CONTAINER_NAME %></description>
<name>Stop <%= CHEF_SERVER_CONTAINER_NAME %></name>
<context>
<project>chef_server-control</project>
</context>
<uuid>e6246fc8-9101-413c-a8e2-c2300b01aca8</uuid>
</job>
</joblist>
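The IP-change check in the "Start" job above can be isolated into a small testable function; a sketch (the function name and the path parameter are hypothetical, and it rewrites only the `chef_server_url` line rather than regenerating the whole knife.rb as the job does):

```shell
#!/bin/bash
# Sketch: update knife.rb only when the container IP actually changed.
# Takes the knife.rb path as a parameter so it can run outside Rundeck.
update_chef_url() {
  local container_ip=$1 knife_rb=$2
  local new_line="chef_server_url 'https://$container_ip:4443'"
  local old_line
  old_line=$(grep https "$knife_rb")   # the chef_server_url line
  if [ "$new_line" != "$old_line" ]; then
    sed -i "s|^chef_server_url .*|$new_line|" "$knife_rb"
    echo "updated"
  else
    echo "unchanged"
  fi
}
```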
