Experimental Setup

![Experimental Setup](https://docs.google.com/drawings/d/1aO6wxbl6jv7nfOHl8kKbb7I3vJYeWRar4Yv_6uYGeSE/pub?w=808&h=579)

The setup consists of three participants (participating ASes), A, B, and C, which have the following routers:

Router A1, Router B1, Router C1, and Router C2

These routers are running the zebra and bgpd daemons, part of the Quagga routing engine. We've used the MiniNext emulation tool to create this topology.
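
Each router loads its Quagga configuration from the quaggacfgs/ directory referenced in the MiniNext script further down this page. As a rough sketch, a minimal bgpd.conf for router a1 could look like the following; the AS numbers and router-id are hypothetical (the demo ships its own configs), while 172.0.255.254 is the address the route server interface is given later on this page, and 100.0.0.0/24 and 110.0.0.0/24 match the loopback prefixes configured on a1 below:

! Illustrative sketch only -- not the config shipped with the demo.
! AS numbers and router-id are hypothetical.
hostname bgpd
password bgpd
!
router bgp 100
 bgp router-id 172.0.0.1
 neighbor 172.0.255.254 remote-as 65000
 network 100.0.0.0/24
 network 110.0.0.0/24
!
log stdout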

Visit the Mininet, MiniNext, and Quagga project sites to learn more about these tools.

Configuring the Setup

The experiment needs two types of configuration: the control plane (SDX controller) and the data plane (Mininet topology).

  • Control Plane Configurations

The control plane configuration involves defining each participant's policies, configuring bgp.conf for the SDX route server (based on ExaBGP), and configuring sdx_global.cfg to provide each participant's information to the SDX controller.
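
For orientation, sdx_global.cfg carries per-participant information such as which fabric ports each participant attaches to and who its peers are. The snippet below is purely hypothetical and only illustrates the kind of data involved; it is not the actual syntax of sdx_global.cfg:

# Hypothetical illustration only -- not the real sdx_global.cfg syntax.
# Each participant entry names its fabric port(s) and its peers.
[participant A]
ports = 1
peers = B, C

[participant C]
ports = 3, 4
peers = A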

In this example, participant A has outbound policies defined in /examples/gec20_demo/control_plane/participant_policies/participant_A.py as:

prefixes_announced = bgp_get_announced_routes(sdx, 'A')
final_policy = (
    (match(dstport=80)   >> sdx.fwd(participant.peers['B'])) +
    (match(dstport=4321) >> sdx.fwd(participant.peers['C'])) +
    (match(dstport=4322) >> sdx.fwd(participant.peers['C'])) +
    (match_prefixes_set(set(prefixes_announced)) >> sdx.fwd(participant.phys_ports[0]))
)
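
The policy above uses Pyretic's composition operators: >> sequentially composes a predicate with an action, and + combines policies in parallel. A standalone sketch of the same pattern, using plain Pyretic primitives rather than the SDX wrappers and with hypothetical output ports, looks like this:

# Standalone Pyretic sketch; output port numbers are hypothetical.
from pyretic.lib.corelib import match, fwd

web_to_b  = match(dstport=80)   >> fwd(2)   # HTTP traffic toward B's port
app1_to_c = match(dstport=4321) >> fwd(3)   # application traffic toward C's port
app2_to_c = match(dstport=4322) >> fwd(3)
outbound  = web_to_b + app1_to_c + app2_to_c  # '+' composes the policies in parallel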

Participant C has inbound policies defined in /examples/gec20_demo/control_plane/participant_policies/participant_C.py. Since C attaches two routers (C1 and C2) to the SDX fabric, phys_ports[0] and phys_ports[1] refer to those two ports:

prefixes_announced = bgp_get_announced_routes(sdx, 'C')
final_policy = (
    (match(dstport=4321) >> sdx.fwd(participant.phys_ports[0])) +
    (match(dstport=4322) >> sdx.fwd(participant.phys_ports[1]))
)

Participant B has no policy.

  • Data Plane Configurations

In our experimental setup, the edge routers need to run a routing engine to exchange BGP routes. Running a routing engine requires a private filesystem for each node, which plain Mininet hosts do not provide. MiniNext gives each of its nodes filesystem (and PID/mount namespace) isolation, which enables emulation of legacy switches, routers, etc.

For our example, the MiniNext script is /examples/gec20_demo/data_plane/sdx_mininext.py. In MiniNext, each host is set up as follows:

"Set Quagga service configuration for this node"
            quaggaSvcConfig = \
            { 'quaggaConfigPath' : scriptdir + '/quaggacfgs/' + host.name }

            "Add services to the list for handling by service helper"
            services = {}
            services[quaggaSvc] = quaggaSvcConfig
            "Create an instance of a host, called a quaggaContainer"
            quaggaContainer = self.addHost( name=host.name,
                                            ip=host.ip,
					    mac=host.mac,
                                            services=services,
                                            privateLogDir=True,
                                            privateRunDir=True,
                                            inMountNamespace=True,
                                            inPIDNamespace=True)
            "Attach the quaggaContainer to the IXP Fabric Switch"
            self.addLink( quaggaContainer, ixpfabric , port2=host.port)
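
The snippet above iterates over host descriptors that expose name, ip, mac, and port attributes. A minimal sketch of such a structure is shown below; the addresses and values are placeholders, and the actual sdx_mininext.py may build this list differently:

# Hypothetical host descriptors; the real script may define these differently.
from collections import namedtuple

QuaggaHost = namedtuple('QuaggaHost', ['name', 'ip', 'mac', 'port'])

hosts = [
    QuaggaHost(name='a1', ip='172.0.0.1/16', mac='00:00:00:00:00:01', port=1),
    QuaggaHost(name='b1', ip='172.0.0.2/16', mac='00:00:00:00:00:02', port=2),
    QuaggaHost(name='c1', ip='172.0.0.3/16', mac='00:00:00:00:00:03', port=3),
    QuaggaHost(name='c2', ip='172.0.0.4/16', mac='00:00:00:00:00:04', port=4),
]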

Interfaces for each of the participating routers are configured as follows:

print "Configuring participating ASs\n\n"
    for host in hosts:
	if host.name=='a1':
		host.cmdPrint('sudo ifconfig lo:1 100.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig lo:2 100.0.0.2 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig lo:110 110.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig -a')  
	if host.name=='b1':
		host.cmdPrint('sudo ifconfig lo:140 140.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig lo:150 150.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig -a')  
	if host.name=='c1':
		host.cmdPrint('sudo ifconfig lo:140 140.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig lo:150 150.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig -a')  
	if host.name=='c2':
		host.cmdPrint('sudo ifconfig lo:140 140.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig lo:150 150.0.0.1 netmask 255.255.255.0 up')
		host.cmdPrint('sudo ifconfig -a') 
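
The ifconfig alias syntax above is deprecated on newer distributions. An equivalent configuration for a1 using iproute2, with the same addresses, would be:

sudo ip addr add 100.0.0.1/24 dev lo label lo:1
sudo ip addr add 100.0.0.2/24 dev lo label lo:2
sudo ip addr add 110.0.0.1/24 dev lo label lo:110
ip addr show dev lo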

The SDX route server (which is based on ExaBGP) runs in the root namespace, so we create an interface in the root namespace itself and connect it to the SDX switch:

connectToRootNS( net, switch,'172.0.255.254/16', [ '172.0.0.0/16' ] )
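
A sketch of what such a helper typically does, assuming Mininet's standard Node and Link APIs; the actual connectToRootNS defined in sdx_mininext.py may differ in detail:

# Sketch only; the real connectToRootNS in sdx_mininext.py may differ.
from mininet.node import Node
from mininet.link import Link

def connect_to_root_ns(net, switch, ip, routes):
    "Attach the root namespace to 'switch' via a veth pair and add routes."
    root = Node('root', inNamespace=False)   # node living in the root namespace
    intf = Link(root, switch).intf1          # root-namespace end of the veth pair
    root.setIP(ip, intf=intf)                # e.g. '172.0.255.254/16'
    for route in routes:                     # e.g. ['172.0.0.0/16']
        root.cmd('route add -net ' + route + ' dev ' + str(intf))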

Setup Steps

You'll need the sdx-setup.sh script to run the experiment; it is located in ~/pyretic/pyretic/sdx/scripts.

Start three separate consoles on the controller VM: one for the SDX controller, one for the Mininet CLI, and one for ExaBGP.

  • Console 1

Initialize the controller configurations.

mininet@mininet-vm:~/pyretic/pyretic/sdx/scripts$ ./sdx-setup.sh init gec20_demo

Here gec20_demo is the name of the application we are running. It is defined in the examples directory, /home/mininet/pyretic/pyretic/sdx/examples/.
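
Based on the paths referenced earlier, the layout of this application under the examples directory is roughly:

examples/
└── gec20_demo/
    ├── control_plane/
    │   └── participant_policies/
    │       ├── participant_A.py
    │       └── participant_C.py
    └── data_plane/
        └── sdx_mininext.py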

Clear the existing RIB files for all the participants.

mininet@mininet-vm:~/pyretic/pyretic/sdx/scripts$ ./sdx-setup.sh clearrib

Start the SDX controller.

mininet@mininet-vm:~/pyretic/pyretic/sdx/scripts$ ./sdx-setup.sh pyretic
  • Console 2

Launch the Mininet script.

mininet@mininet-vm:~/pyretic/pyretic/sdx/scripts$ ./sdx-setup.sh mininet gec20_demo

This command launches an emulation of the three SDX participants connected to the SDX fabric. Note that participant “C” has two ports connected at the SDX switch.

Start an xterm for each participant router:

mininet> xterm a1 b1 c1 c2 
  • Console 3

Launch the SDX’s route server.

mininet@mininet-vm:~/pyretic/pyretic/sdx/scripts$ ./sdx-setup.sh exabgp

Sanity Checks

Check whether the participants have received routes from the route server (shown here for a1):

mininet> a1 route -n
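
If vtysh is installed on the participant nodes, you can also query bgpd's RIB directly; this is an extra check beyond the original demo steps:

mininet> a1 vtysh -c "show ip bgp"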

Test SDX Policies

On hosts “b1”, “c1”, and “c2”, run nc to listen for incoming TCP connections. We'll test port numbers 80, 4321, and 4322.

root@mininet-vm:~# sudo nc -l 140.0.0.1 <port#>
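
The listen syntax above assumes the OpenBSD variant of netcat; if the VM ships the traditional variant instead, the equivalent invocation is:

root@mininet-vm:~# sudo nc -l -p <port#> -s 140.0.0.1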

On host “a1”, run telnet to test connectivity, again for ports 80, 4321, and 4322.

root@mininet-vm:~# sudo telnet -b 100.0.0.1 140.0.0.1 <port#>

You should receive connections for port 80 on “b1”, 4321 on “c1”, and 4322 on “c2”, respectively.

Similarly, you can use iperf to test this setup. On hosts “b1”, “c1”, and “c2”, start the iperf server, using the same set of ports as above.

root@mininet-vm:~# iperf -s -B 140.0.0.1 -p <port#>

On host “a1”, run the iperf client to test bidirectional connectivity.

root@mininet-vm:~# iperf -c 140.0.0.1 -B 100.0.0.1 -p <port#> -t 2

You should observe the same connectivity pattern as above.