Home
This documentation is aimed at developers needing to expose a testbed using the open-multinet (aka. SFA) Aggregate Manager API, as defined in https://github.com/open-multinet/federation-am-api.
We will first discuss general considerations, to help developers evaluate whether the approach explained here is well suited to their use-case.
We will then guide developers through creating a working development environment for the code available here, including information about where to look for support.
In the open-multinet (aka. SFA) world, user and testbed management are decoupled activities. It is possible to implement the testbed-specific part of the API, the aggregate manager API, without bothering with user management. This is what this documentation is about.
Of course, at some point, user management APIs and aggregate managers must cooperate. In open-multinet, this happens through x509 certificates issued to users through the user management API. Therefore, aggregate managers must be configured to recognize some x509 certificates as valid.
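To make that trust relationship concrete: deciding whether a given user certificate is acceptable boils down to a standard x509 chain verification against the authorities the aggregate manager trusts. A minimal sketch with openssl, using hypothetical file names (usercert.pem for the user's certificate, ca.pem for a trusted authority):

# hypothetical file names: a user certificate and the certificate of an authority we trust
openssl verify -CAfile ca.pem usercert.pem
# prints "usercert.pem: OK" when the certificate was issued by that authority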
This tutorial will guide you through the steps required to build an aggregate manager (AM) using the reference implementation written in the context of the GENI project and available at https://github.com/GENI-NSF/geni-tools. It will do so by explaining how to use the delegate mechanism built into that code base to write an AM essentially as a plugin of the reference code base. The work to do is therefore twofold:
- write the code of the plugin,
- configure the reference code base for your plugin and the x509 certificates it expects
The alternatives are:
- use a software stack to run your testbed that already implements the aggregate manager API, such as FITeagle or Emulab.
- write a plugin for SFAWrap (Python) or for AMsoil.
- (not recommended) write your own modified XML-RPC stack that handles client authentication over HTTPS in such a way that the client certificate passed to the called functions is the one verified during the HTTPS handshake, and that handles GENI certificate verification.
The aim of this approach is to be able to benefit from functionalities and updates to geni-tools (start-up scripts, configuration options, changes and bug fixes in the low-level parts of the open-multinet protocol), while keeping the codebase for the testbed-specific parts of your aggregate manager separate.
The geni-tools code base also includes code to handle one implementation of the user management part of the API (the clearinghouse, or slice authority). If you wish to also implement those at your testbed, the instructions provided here are probably not the best fit, in particular those about configuring certificates and certificate authorities.
Moreover, we have chosen in this tutorial to run all code inside a VM, and more specifically a Vagrant VM running on VirtualBox. This has the benefit of letting you run development versions of your aggregate manager code in a setting close to that of a deployed service, without adding too many software dependencies to your workstation. It has the drawback of requiring access to your testbed from a VM on your development machine. This might not always be simple, and will be discussed later.
Finally, little effort has been directed towards providing means to update your implementation if the reference code of this tutorial is updated. You could fork this repo, clone it or copy the main files. It is up to you to devise a way of keeping track of updates in the bootstrap code while still being able to commit your work in a possibly private git repository. The tutorial will suggest cloning this repo, and working in a branch that is pushed to another repo, but this should not be understood as a canonical or even a recommended way of working.
In this tutorial, we will suppose you have a working installation of Vagrant and VirtualBox. It is perfectly possible to benefit from the instructions here if you prefer working from a Python installation on your workstation that you will customize to support the geni-tools code. Here are the two main arguments for the Vagrant approach.
- It limits the amount of changes required on developer workstations. This is very useful if you have more than one development project, as different projects might have conflicting requirements.
- It eases the transition from the development version of your work to a version running on your testbed, as the development VM can be made very close to the VM used to deploy. Of course, this only applies if you expect to deploy the result of your work on a VM.
In this tutorial, we will suppose you created a copy of the bootstrap code using the clone-branch model:
git clone https://github.com/dmargery/bootstrap-geni-am.git
cd bootstrap-geni-am
git branch <my_testbed> # create a branch for your testbed
git checkout <my_testbed> # move into that branch
git remote add downstream <repo_for_your_code> #link this repo to the repo where you'll be pushing the code of your Aggregate Manager
git branch -u downstream/master # link the branch we checked out to our repo
git mv testbed-delegate <your testbed's name>-delegate
You'll also need a copy of the reference code. This tutorial relies on geni-tools (previously gcf) version 2.10, which is not yet released. Make sure you are using the develop branch:
cd .. #work out of the bootstrap code directory
git clone https://github.com/GENI-NSF/geni-tools.git
cd geni-tools
git branch #should be develop
git checkout develop #if previous call gave an unexpected result
cd ../bootstrap-geni-am
If you are going to host the code of your Aggregate Manager on GitHub, you'll probably be better off forking the repo and setting https://github.com/dmargery/bootstrap-geni-am.git as upstream. A word of warning, though, about hosting on GitHub: your aggregate manager will probably need to carry some secrets (certificates, ssh keys or login/password) to be able to interact with your testbed. Avoid pushing those to GitHub.
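One simple precaution is to keep that secret material out of version control altogether. A minimal sketch, with mostly hypothetical file names (the user key and certificate are the debug credentials copied into the working directory later in this tutorial; adapt the rest to whatever secrets your AM actually needs):

# list here the secret files your AM needs at runtime
cat >> .gitignore <<'EOF'
*.key
userkey.pem
usercert.pem
testbed-credentials.conf
EOF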
This first part of the tutorial will guide you to the point where you have a working dummy AM delegate that runs code under your control and is configured with names and certificates for your testbed. The main steps required here are:
- Create a certificate authority for your testbed
- Create a certificate for your aggregate manager
- Create a user certificate to ease debugging
- Configure geni-tools to
  - use your delegate
  - trust your certificate authority
The quick way of doing this is to customize the file named Vagrantfile with facts for your testbed, and to run
vagrant up --provision
The more complex way is also documented in the Generating CA, AM certificate and user certificate using openssl page of this wiki. Parameters to configure geni-tools are well documented if you edit /etc/geni-tools-delegate/gcf_config.
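For reference, the heart of the manual route is plain openssl. The commands below are only a rough sketch of the idea, with illustrative subjects and file names; the wiki page and the provisioning scripts also take care of details this sketch omits (such as the GENI-specific certificate extensions):

# create a key and a self-signed certificate for your testbed's certificate authority
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=ca.my-testbed.example.org" -out ca.pem

# create a key and a certificate signing request for the aggregate manager
openssl genrsa -out am-key.pem 2048
openssl req -new -key am-key.pem -subj "/CN=am.my-testbed.example.org" -out am.csr

# sign the aggregate manager certificate with the testbed CA
openssl x509 -req -in am.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 365 -out am-cert.pem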
You can now connect to the vagrant box to check a few key files:
vagrant ssh
In /etc/geni-tools/ you'll find
- the configuration file for geni-tools
- your aggregate manager cert (am-cert.pem), in certs
- the certificate authorities your aggregate manager will trust, in certs/trusted_roots. For now you should only find a copy of your local CA certificate.
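A quick way to list them from inside the VM (the exact layout may differ slightly depending on your configuration):

ls -lR /etc/geni-tools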
If generation of the Vagrant box was successful, these files will be tailored to your choices, as defined in the Vagrantfile. You can now start the empty Aggregate Manager:
cd /opt/gcf-testbed-delegate
gcf-am.py --api-version 3 -c /etc/geni-tools/gcf_config
In a second shell, we will check that everything is properly configured by attempting an access from within the VM:
vagrant ssh
sudo apt-get install curl --yes
curl -kni https://192.168.150.51:5001 -d '<?xml version="1.0" encoding="ISO-8859-1"?><methodCall><methodName>GetVersion</methodName><params></params></methodCall>' --cert /home/vagrant/usercert.pem:SomeDebugPassForJoe --key /home/vagrant/userkey.pem
You should get some XML back as a result, such as:
<?xml version='1.0'?>
<methodResponse>
<params>
<param>
<value><struct>
<member>
<name>output</name>
<value><string></string></value>
</member>
<member>
<name>geni_api</name>
<value><int>3</int></value>
</member>
...
</struct>
</param>
</params>
</methodResponse>
We will now check from the workstation:
# first a command to extract the debug credentials from
# the VM to the current directory
vagrant ssh -- cp usercert.pem userkey.pem /opt/gcf-testbed-delegate/
curl -kni https://192.168.150.51:5001 -d '<?xml version="1.0" encoding="ISO-8859-1"?><methodCall><methodName>GetVersion</methodName><params></params></methodCall>' --cert usercert.pem:SomeDebugPassForJoe --key userkey.pem
If you have a successful result, well done: you have a working aggregate manager set up and running in the Vagrant VM, running code under your control in testbed-delegate/testbed.py.
Of course, the bootstrap delegate in testbed-delegate/testbed.py does not do anything. In fact, even its code to check credentials will not work without proper SFA credentials and some work on the code. We'll see how to change this in the [Giving access to others](#giving-access-to-others) paragraph.
But before this, you will need to make sure the code running as your bootstrap aggregate manager can access the testbed it gives access to. This of course depends on the security model for the existing APIs to access and control your testbed. This tutorial cannot help you much, because most of the work depends on your local configuration.
- For testbeds whose security model is based on network reachability (the control plane of the testbed is in a private subnet, and all machines in that subnet are trusted), you'll need to find a way to either mock the testbed for development, create a tunnel (ssh, vpn) between the development machine and the testbed it needs to control (see the sketch after this list), or develop with a VM on the trusted network.
- For testbeds whose security model is based on authenticated https access, you'll need to push the required credentials in the vagrant VM.
- For testbeds whose control plane is based on ssh access to machines, you'll need to push all required keys to the vagrant VM.
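For the network-reachability case, for instance, an SSH tunnel opened from inside the Vagrant VM through a gateway that can reach your control plane is often enough for development. A minimal sketch, with purely hypothetical host names and ports:

# run inside the vagrant VM; gateway.example.org and testbed-api.internal are hypothetical
# forward local port 8443 to the testbed's control API through the gateway
ssh -N -L 8443:testbed-api.internal:443 user@gateway.example.org &
# the aggregate manager code can then reach the testbed API at https://localhost:8443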
The bootstrap code has an example of getting parameters from a configuration file, most notably to get the endpoint(s) the aggregate manager will be interacting with. It is up to you to use this facility when writing the code that interacts with your testbed. This tutorial cannot say much more about this topic.
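Still, to fix ideas, here is a purely hypothetical illustration of the kind of parameters you might add to the configuration file for your delegate to read; the actual file, section and option names used by the bootstrap code may differ, so check the bootstrap delegate's sources for the real example:

# hypothetical section and option names, appended from inside the VM
sudo tee -a /etc/geni-tools/gcf_config <<'EOF'

[my_testbed]
api_endpoint = https://testbed-api.internal/v1
api_user = am-service
EOF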
The next step, even for development, is to configure your aggregate manager to recognize GENI credentials (these are based on x509 certificates) as valid credentials to interact with it. In this step we will configure the aggregate manager to recognize at least your credentials as valid, so you can interact with your AM in development using real-world credentials. Remember, this tutorial is built on the idea that you won't be running your own clearinghouse or slice authority, because you already have access to one. In this step we will add that authority as a root of trust for your aggregate manager. You can attempt to find the URL of your slice authority in your credentials. I do this by reading the output of openssl x509 -in ~/.ssl/geni_cert.pem -text, and looking at the Authority Information Access section. But this will only give you access to the certificate used by the server for ssl connections, which is not necessarily the certificate used to sign user credentials. You'll need to find the details of this second certificate by your own means. The one I have for Fed4FIRE is located at http://users.atlantis.ugent.be/bvermeul/wall2.pem
#get the certificate, from bootstrap working dir (mounted as /vagrant in the VM)
wget http://users.atlantis.ugent.be/bvermeul/wall2.pem
#upload it to the trusted roots
vagrant ssh -- cp /opt/gcf-testbed-delegate/wall2.pem /etc/geni-tools/certs/trusted_roots
If you now restart your aggregate manager, you will be able to check in the output that it now recognizes 2 authorities as roots of trust:
INFO:cred-verifier:Will accept credentials signed by any of 2 root certs found in /etc/geni-tools-delegate/certs/trusted_roots/: ['/etc/geni-tools-delegate/certs/trusted_roots/wall2.pem', '/etc/geni-tools-delegate/certs/trusted_roots/ca.pem']
You are now ready to configure a standard SFA client to use your aggregate manager.
In this part of the tutorial, we will be using a component of the jFed suite of tools to access our aggregate manager. We will specifically use the probe GUI. Please refer to the installation instructions on the jFed web site. For this tutorial, I have configured the Debian repo and started the probe with the jFed-Probe command.
Then you can simply request that the probe call a local aggregate manager with a self-signed certificate, as shown in the screenshot on this wiki page.
You can view the old process to register a development AM (for Grid'5000 in this demo) in this video, and then see how to use it.