Architecture
- pyFF is a SAML metadata processor.
- pyFF can be used to fetch, validate, verify, sign, transform, store, index and search SAML metadata.
- pyFF does its work by following small programs called pipelines, written in YAML syntax and consisting of steps/primitives called pipes.
Deploying pyFF typically means figuring out what SAML metadata to fetch and from where, how to transform it to suit your metadata consumers' expectations and needs, and finally how to publish the metadata. All of these steps are encoded in a YAML file called a pipeline, which pyFF executes like a program.
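As a sketch, a minimal batch pipeline covering those steps might look like the following (the metadata URL, output path, and durations are placeholder assumptions, not part of pyFF itself):

```yaml
# fetch remote metadata, select everything, and publish an aggregate
- load:
    - https://metadata.example.org/federation.xml   # hypothetical source
- select:                          # populate the active tree with everything loaded
- finalize:
    cacheDuration: PT12H           # illustrative values
    validUntil: P10D
- publish: /var/run/aggregate.xml  # hypothetical output path
- stats                            # print a summary of what was processed
```

Running this with the `pyff` command executes the pipes top to bottom once and exits.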
The following diagram illustrates the relationships between the elements that make up the pyFF execution model.
- a pipeline (consisting of pipes) transforms an initial state into an object, typically a piece of SAML metadata
- there are no loops in the pyFF pipeline language, so every pipeline is guaranteed to terminate
- the pyFF metadata store is updated using the load pipe and queried using the select pipe.
Most pyFF pipelines contain at least one load statement and one select statement. The former is used to fetch metadata (either local or remote), while the latter populates the active tree, which subsequent pipes modify to form the resulting object.
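A sketch of this load/select interplay, narrowing the active tree to IdPs only (the file names are illustrative assumptions; the `!`-prefixed XPath select is pyFF syntax for filtering entity descriptors):

```yaml
- load:
    - /etc/metadata/federation.xml            # hypothetical local file
# select only entities that have an IDPSSODescriptor, i.e. identity providers
- select: "!//md:EntityDescriptor[md:IDPSSODescriptor]"
- publish: /var/run/idps-only.xml             # hypothetical output path
```

Every pipe after the select operates on this reduced active tree rather than on the full metadata store.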
There are two ways to run pyFF:
- Use the pyff command to execute a pipeline. This is sometimes called batch or one-shot mode: pyFF runs through the steps of the pipeline and terminates when done.
- Use the pyFF server (either via pyffd or via the pyff API WSGI application). In this mode pyFF starts a service which can be accessed as a web application, a REST API, or both. The API is based on the Metadata Query Protocol (draft-young-md-query) along with a small set of Extensions to MDQ.
In the latter case, the way pyFF responds to requests via the REST API is determined by the pipeline, which usually means splitting the pipeline into sections using the when pipe. Exactly which section runs is determined by the initial state (cf. above). Normally, the update state triggers a set of load/transform steps used to populate the metadata store, while the request state is used to generate a response to an MDQ query. Because the MDQ response is completely under the control of the pipeline, it is highly customizable.
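A server-mode pipeline split along these lines might look like the following sketch (the metadata URL, key and certificate paths, and cache duration are placeholder assumptions):

```yaml
# executed in the "update" state: refresh the metadata store
- when update:
    - load:
        - https://metadata.example.org/federation.xml   # hypothetical source
    - break

# executed in the "request" state: answer an MDQ query
- when request:
    - select:                      # pick the entity/entities matching the query
    - pipe:
        - when accept application/samlmetadata+xml application/xml:
            - first
            - finalize:
                cacheDuration: PT12H        # illustrative value
            - sign:
                key: sign.key               # hypothetical signing key
                cert: sign.crt
            - emit application/samlmetadata+xml
            - break
```

Each incoming MDQ request runs the request section against the store that the update section maintains, so changing the response format or signing behavior is just a matter of editing these pipes.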
While pyFF can be deployed using a standard Python virtualenv and setup.py install, it is highly recommended (especially for beginners) to deploy pyFF using a Docker container.