# SDN solutions for OpenShift

Software to get an overlay network up and running for a Docker cluster. This is still a work in progress; do not use it in production.

## Build and Install

```
$ git clone https://github.com/openshift/openshift-sdn
$ cd openshift-sdn
$ make clean        # optional
$ make              # build
$ make install      # installs in /usr/bin
```

## Try it out

### Use vagrant, pre-define a cluster, and bring it up

Create an OpenShift cluster on your desktop using Vagrant:

```
$ git clone https://github.com/openshift/origin
$ cd origin
$ make clean
$ export OPENSHIFT_DEV_CLUSTER=1
$ export OPENSHIFT_NUM_MINIONS=2
$ export OPENSHIFT_SDN=ovs-simple
$ vagrant up
```
### Manually add minions to a master

Steps to manually create an OpenShift cluster with openshift-sdn. This requires that each machine (master and minions) has already compiled openshift and openshift-sdn. Check here for OpenShift instructions. Ensure 'openvswitch' is installed and running (`yum install -y openvswitch && systemctl enable openvswitch && systemctl start openvswitch`). Also verify that the DOCKER_OPTIONS variable is unset in your environment, or set to a known-working value (e.g. `DOCKER_OPTIONS='-b=lbr0 --mtu=1450 --selinux-enabled'`). If you don't know what to put there, it's probably best to leave it unset. :)
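The `--mtu=1450` in that example DOCKER_OPTIONS value is not arbitrary: VXLAN encapsulation adds roughly 50 bytes of headers to every frame, so the container-side MTU must sit below the physical 1500 to avoid fragmentation. A quick sketch of the arithmetic:

```shell
# Typical VXLAN overhead on top of the inner frame:
#   outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8) = 50 bytes
PHYS_MTU=1500
VXLAN_OVERHEAD=50
echo $((PHYS_MTU - VXLAN_OVERHEAD))   # prints 1450
```

If your physical network uses jumbo frames, the same subtraction applies to the larger MTU.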

On the OpenShift master:

```
$ openshift start master [--nodes=node1]  # start the master openshift server (also starts the etcd server by default) with an optional list of nodes
$ openshift-sdn                           # assumes etcd is running at localhost:4001
```

To add a node to the cluster, do the following on the node:

```
$ openshift-sdn -etcd-endpoints=http://openshift-master:4001 -minion -public-ip=<10.10....> -hostname <hostname>
```

where:

- `-etcd-endpoints` : reach the etcd db here
- `-minion` : run it in minion mode (will watch etcd servers for new minion subnets)
- `-public-ip` : the publicly reachable IP address of this minion
- `-hostname` : the name that will be used to register the minion with openshift-master

Then start the node:

```
$ openshift start node --master=https://openshift-master:8443
```

Back on the master, to finally register the node:

Create a JSON file for the new minion resource:

```
$ cat <<EOF > minion-1.json
{
    "kind": "Minion",
    "id": "openshift-minion-1",
    "apiVersion": "v1beta1"
}
EOF
```

where `openshift-minion-1` is a hostname that is resolvable from the master (or create an entry in /etc/hosts and point it to the public-ip of the minion). Then:

```
$ openshift cli create -f minion-1.json
```
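Hand-written heredocs are an easy place to introduce a stray comma or quote, so it can save a round-trip to validate the file before handing it to `openshift cli create`. A minimal sketch, assuming `python3` is available on the master:

```shell
# Write the minion definition (same content as above).
cat <<EOF > minion-1.json
{
    "kind": "Minion",
    "id": "openshift-minion-1",
    "apiVersion": "v1beta1"
}
EOF

# json.tool exits non-zero with a parse error if the file is malformed.
python3 -m json.tool minion-1.json > /dev/null && echo "minion-1.json is valid JSON"
```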

Done. Repeat the last two steps to add more nodes. Create new pods from the master (or just Docker containers on the minions), and verify that the pods are reachable from each other.

## OpenShift? PaaS? Can I have a 'plain setup' just for Docker?

Something needs to register new nodes as they join the cluster. Instead of relying on OpenShift/Kubernetes for that, openshift-sdn itself can do it: use the '-sync' flag.

Steps:

1. Run etcd somewhere, and run the openshift-sdn master to watch it in sync mode.

   ```
   $ systemctl start etcd
   $ openshift-sdn -master -sync  # use -etcd-endpoints=http://target:4001 if etcd is not running locally
   ```

2. To add a node, make sure its hostname/DNS is reachable from the machine running the openshift-sdn master. Then start openshift-sdn in minion mode with the sync flag.

   ```
   $ openshift-sdn -minion -sync -etcd-endpoints=http://master-host:4001 -hostname=minion-1-dns -public-ip=<public ip that the hostname resolves to>
   ```

Done. Add more nodes by repeating step 2. All nodes should have a docker bridge (lbr0) that is part of the overlay network.
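One way to spot-check the overlay is to start a plain container on each of two minions and ping across. A hedged sketch (the `busybox` image and the 10.x address are placeholders; the actual address comes from the subnet each minion was assigned on lbr0):

```shell
# On minion-1: start a container and print its overlay address.
docker run -d --name sdn-test busybox sleep 300
docker inspect -f '{{.NetworkSettings.IPAddress}}' sdn-test

# On minion-2: ping the address printed above (example value; yours will differ).
docker run --rm busybox ping -c 3 10.1.0.2
```

If the pings fail, see the gotchas below before suspecting openshift-sdn itself.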

## Gotchas

Some requirements, some silly errors.

- **openshift-sdn fails with errors around ovs-vsctl**: `yum -y install openvswitch && systemctl enable openvswitch && systemctl start openvswitch`
- **openshift-sdn fails to start with errors about the network not being up**: `systemctl stop NetworkManager` (that fella is nosy and does not like mint-new bridges)
- **openshift-sdn fails to start saying it cannot reach the etcd endpoints**: Is etcd really running, and listening on a public interface? Is that machine reachable? Did you pass `-etcd-endpoints=https://...` without supplying SSL credentials? Try removing a trailing '/' from the URL.
- **openshift-sdn is up and the subnet is assigned, but pings do not work**: It may take a while for pings to start working (blame the Docker Linux bridge; optimizations are coming). Check that all nodes' hostnames are resolvable on the master, and to the correct IP addresses. Last but not least: firewalld (switch it off to check, and then punch a hole for VXLAN, please).
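On the last point, punching a hole for VXLAN is preferable to leaving firewalld off. A sketch with `firewall-cmd`, assuming the IANA-assigned VXLAN UDP port 4789 (some older kernel/OVS setups used 8472 instead; check which port your vxlan interface is actually bound to):

```shell
# Allow VXLAN traffic through firewalld on every node, then reload the rules.
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```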

## Performance Note

The current design has a long path for packets directed to the overlay network: two veth pairs, a Linux bridge, and then Open vSwitch, which together cause a performance drop of about 40%.

An optimized solution that shortens this path to a single veth pair brings the performance close to wire speed. Performance was measured using sockperf.

|         | openshift-sdn | openshift-sdn (optimized) | without overlay |
| ------- | ------------- | ------------------------- | --------------- |
| Latency | 112us         | 84us                      | 82us            |

The optimized solution is available for use with OpenShift/Kubernetes only. Use the '-kube' option with openshift-sdn on all hosts, and set the OpenShift/Kubernetes network_plugin to 'redhat/openshift-ovs-subnet'.

## TODO

- Network isolation between groups of containers