Architecture
Leif Johansson edited this page Apr 19, 2019 · 16 revisions
- pyFF is a SAML metadata processor.
- pyFF can be used to fetch, validate, verify, sign, transform, store, index and search SAML metadata.
- pyFF does its work by executing small programs called pipelines, written in YAML syntax and consisting of steps (primitives) called pipes.
Deploying pyFF typically means figuring out which SAML metadata to fetch and from where, how to transform it to suit your metadata consumers' expectations and needs, and finally how to publish the result. All of these steps are encoded in a YAML file called a pipeline, which pyFF executes like a program.
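A typical pipeline of this shape might look like the following sketch. The URLs, file names and parameter values are placeholders, and the exact set of available pipes and their options should be checked against the pyFF documentation for your version:

```yaml
# Hypothetical pipeline: fetch, transform, sign and publish federation metadata
- load:
    - https://metadata.example.org/federation.xml
- select
- finalize:
    cacheDuration: PT12H
    validUntil: P10D
- sign:
    key: sign.key
    cert: sign.crt
- publish: /var/www/metadata/federation-signed.xml
- stats
```

Running this in batch mode fetches the remote aggregate, selects all entities into the active tree, stamps validity information, signs the result and writes it to disk.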
The following diagram illustrates the relationships between the elements that make up the pyFF execution model.
- a pipeline (consisting of pipes) transforms an initial state into an object, typically a piece of SAML metadata
- there are no loops in the pyFF pipeline programming language, so every pipeline will terminate
- the pyFF metadata store is updated using the `load` pipe and queried using the `select` pipe
Most pyFF pipelines contain at least one `load` statement and one `select` statement. The former is used to fetch metadata (either local or remote) into the store, and the latter is used to populate the active tree, which subsequent pipes modify to form the resulting object.
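The interplay between the two can be illustrated with a minimal fragment. This is a sketch, not a complete pipeline; the file name and entityID are placeholders, and `select` shown here with an argument to narrow the active tree to a single entity:

```yaml
# `load` populates the metadata store; `select` populates the active tree
- load:
    - local-metadata.xml
- select: "https://idp.example.org/idp/shibboleth"
```

Everything that follows the `select` operates on the active tree rather than on the whole store.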
There are two ways to deploy pyFF:
- Use the `pyff` command to execute a pipeline. This is sometimes called batch or one-shot mode: pyFF runs through the steps of the pipeline and terminates when done.
- Use the pyFF server (either via `pyffd` or via the pyFF API WSGI application). In this mode pyFF starts a service which can be accessed as a web application, a REST API, or both. The API is based on the Metadata Query Protocol (draft-young-md-query) along with a small set of [extensions](MDQ Extensions).
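The MDQ protocol lets clients request a single entity by a transformed identifier: the string `{sha1}` followed by the hex SHA-1 digest of the entityID, URL-encoded into the `/entities/` path. The helper below sketches how such a request URL is built; the function name and base URL are illustrative, not part of pyFF's API:

```python
import hashlib
import urllib.parse

def mdq_url(base_url, entity_id):
    """Build an MDQ request URL for a single entity.

    Per draft-young-md-query, an entity may be requested by the
    transformed identifier "{sha1}" + hex SHA-1 of its entityID.
    """
    digest = hashlib.sha1(entity_id.encode("utf-8")).hexdigest()
    identifier = "{sha1}" + digest
    # The identifier must be percent-encoded when placed in the path
    return base_url.rstrip("/") + "/entities/" + urllib.parse.quote(identifier, safe="")

# Example (hypothetical server and entityID):
# mdq_url("https://mdq.example.org", "https://idp.example.org/idp/shibboleth")
```

A client would issue a plain HTTP GET against the resulting URL and receive the signed metadata for that one entity.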