pyFF has two command-line tools: pyff and pyffd.
  # pyff --loglevel=INFO pipeline.fd [pipeline2.fd]
  # pyffd --loglevel=INFO pipeline.fd [pipeline2.fd]
pyff operates by setting up and running “pipelines”. Each pipeline starts with an empty “active repository” - an in-memory representation of a set of SAML metadata documents - and an empty “working document” - a subset of the EntityDescriptor elements in the active repository.
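As a sketch, a minimal pipeline might fetch a remote feed into the active repository, select all entities into the working document, and write the result out. The URL and output path below are placeholders, and load, select, publish and stats are names of pyFF built-ins:

```yaml
# Minimal pipeline sketch (URL and output path are placeholders).
- load:                          # fetch metadata into the active repository
  - http://md.example.org/feed.xml
- select                         # copy all EntityDescriptors into the working document
- publish: /tmp/metadata.xml     # write the working document to a file
- stats                          # print summary statistics
```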
The pyffd tool starts a metadata server with an HTTP-based interface for viewing and downloading metadata. The HTTP interface can produce XML, HTML and JSON output (as well as other formats with a bit of configuration) and implements the MDX specification for online SAML metadata query.
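The MDX protocol lets a client request an entity either by its full entityID or by the SHA-1 transform of that entityID. As a small sketch of how a client might build such a query URL (the entityID and server address are hypothetical, not taken from the text):

```python
import hashlib

# Hypothetical entityID and pyffd base URL -- placeholders for illustration.
entity_id = "https://idp.example.org/idp/shibboleth"
base = "http://localhost:8080"

# MDX identifies an entity via "{sha1}" followed by the hex SHA-1 of its entityID.
# (Clients typically percent-encode the braces as %7B and %7D on the wire.)
digest = hashlib.sha1(entity_id.encode("utf-8")).hexdigest()
url = "%s/entities/{sha1}%s" % (base, digest)
print(url)
```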
Pipeline files are YAML documents representing a list of processing steps:
- step1
- step2
- step3
Each step represents a processing instruction. pyFF has a library of built-in instructions to choose from that include fetching local and remote metadata, xslt transforms, signing, validation and various forms of output and statistics.
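For instance, a publishing pipeline might combine several of these built-ins, transforming and signing the metadata before output. The stylesheet, key and certificate names below are placeholder file names:

```yaml
# Sketch of a transform-and-sign pipeline (all file names are placeholders).
- load:
  - edugain.xml
- select
- xslt:
    stylesheet: tidy.xsl    # apply an XSLT transform to the working document
- sign:
    key: sign.key           # signing key (placeholder path)
    cert: sign.crt          # corresponding certificate (placeholder path)
- publish: /tmp/signed.xml
```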
Processing steps are called pipes. A pipe can have arguments and options:
- step [option]*:
    - argument1
    - argument2
    ...
- step [option]*:
    key1: value1
    key2: value2
    ...
Typically options are used to modify the behaviour of the pipe itself (think macros), while arguments provide runtime data to operate on.
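To illustrate the distinction, here is a pipe invocation in that shape. The pipe name "frobnicate", its option "fast" and the file names are entirely hypothetical, used only to show where options and arguments go:

```yaml
# Hypothetical pipe "frobnicate" with option "fast" -- illustration only.
- frobnicate fast:      # "fast" is an option: it changes how the pipe itself runs
  - input1.xml          # arguments: the runtime data the pipe operates on
  - input2.xml
```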
Documentation for each pipe is in the pyff.pipes.builtins Module. Also take a look at the Examples.