02. Kepler Directors

Directors for Kepler Workflows

This tutorial is designed to introduce the concept of Kepler directors, outline the different directors that can be used, and present example workflows.

This tutorial assumes that you have basic Kepler knowledge and can use Kepler on the Gateway machine. Please see the 01. Introduction to Kepler - basics tutorial if you need more information.


A Kepler director is a specific type of Kepler actor which controls (or directs) the execution of a workflow. As discussed in the basic training, Kepler actors provide the functionality (what processing occurs in a workflow), whereas directors orchestrate and execute actors to run the whole workflow (when something happens). The actors take their execution instructions from the director.

Every workflow must have a director that controls the execution of the workflow using a particular model, or domain, of computation. For example, workflows can be synchronous (i.e. with processing occurring one actor at a time in a pre-defined pattern) and for this the SDF Director would be ideal. Alternatively, if a workflow requires actors to be executed in parallel (i.e. one or more actors running simultaneously) then the PN Director would be used. The director used for a workflow is chosen at design time depending on the particular requirements of the workflow being created.

Kepler has a number of pre-defined directors and the underlying Ptolemy II system has more (which are generally not exposed to Kepler users but can be used if required). This tutorial focuses on three specific directors, detailed in the table below, which are required for standard scientific workflows.

Director      Description
SDF Director  Generally used to oversee fairly simple, sequential workflows.
PN Director   Designed for workflows that require the simultaneous execution of multiple actors.
DDF Director  Generally used for workflows that use looping, branching, etc. (but not simultaneous actor execution, where the PN Director should be used).

Deciding which director to use depends on what functionality is required for a given workflow. The next sections outline the specifics of the three directors we are focusing on, to give an idea of which one to choose for a given situation. Generally the choice depends on what requirements there are for control flow/conditional structures in the workflow, whether there is any requirement for running multiple actors simultaneously, and what the performance requirements of the workflow are.

SDF Director

The Synchronous Dataflow (SDF) Director executes a single actor at a time in one thread of execution, scheduling the whole workflow at the very beginning (with no scope for changing the schedule during execution). It is very efficient and consumes few system resources because it can precalculate the actor execution schedule before execution starts (which also allows a fixed memory size for the workflow execution and ensures the workflow does not deadlock). However, this depends on the flow of execution through actors being fixed and on the data consumption and production of actors being constant (i.e. actors always produce and/or consume the same amount and type of data).
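The idea of a precomputed schedule can be sketched in a few lines of Python. This is purely illustrative and is not Kepler's actual API: the actor functions (const, double, display) and the run helper are hypothetical, but they show why fixed token rates let the firing order be decided once, up front.

```python
# Minimal sketch of SDF-style scheduling (NOT Kepler's API).
# Every actor consumes and produces a fixed amount of data, so the
# complete firing order can be computed once, before execution begins.

def const(_):        # source actor: always produces one token
    return 5

def double(x):       # consumes one token, produces one token
    return x * 2

def display(x):      # sink actor: just passes the token through here
    return x

# The schedule is fixed up front and never revised at runtime.
schedule = [const, double, display]

def run(iterations=1):
    """Execute the whole schedule 'iterations' times (cf. the SDF
    director's iterations parameter)."""
    results = []
    for _ in range(iterations):
        token = None
        for actor in schedule:
            token = actor(token)   # one actor fires at a time
        results.append(token)
    return results

print(run(iterations=2))
```

Because the schedule and the token rates are fixed, the memory needed per iteration is known before the workflow starts, which is the property the SDF Director exploits.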

Furthermore, the SDF domain cannot have feedback loops (where the output of an actor feeds back into an input of the same actor), as deadlock would occur. Where such loops are required, this can be addressed by adding delay actors which provide the first input to the feedback loop. The SDF Director is therefore sufficient for simple tasks which are not conditional. It also requires all actors in the workflow to be connected (it is possible to have unconnected actors, but this requires some modifications to the director parameters).
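Why a delay actor breaks the deadlock can be seen in a small sketch. Again this is illustrative Python, not Kepler code: the add_one_with_feedback actor and the feedback queue are hypothetical names.

```python
# Sketch of a feedback loop seeded by a delay actor (NOT Kepler code).
# The actor needs a token from its own output before it can fire, so
# without an initial token the first firing could never happen.

from collections import deque

feedback = deque([0])   # the delay actor's initial token seeds the loop

def add_one_with_feedback(x):
    prev = feedback.popleft()   # without the seed token, this would deadlock
    result = x + prev
    feedback.append(result)     # output feeds back in for the next firing
    return result

outputs = [add_one_with_feedback(v) for v in [1, 2, 3]]
print(outputs)
```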

By default this director executes the workflow exactly once. This can be changed by altering the director parameters and setting the iterations parameter to the required value.

Using SDF Director

SDF Example (approx. 5 min)

This exercise presents a simple workflow which uses the SDF director. The workflow we will use is the VomsProxy workflow. For information on what this workflow does and how to use it please see the Generating VOMS proxy section in the tutorial on Grid and HPC Workflows. For the purpose of this tutorial we are only interested in the SDF director component.

To investigate the SDF director please perform the following steps.

1. Start the Kepler application by issuing:


2. Open the "vomsproxy" workflow by selecting File -> Open and navigating to:


3. You should see a fully connected actor network which includes an SDF director, as pictured below.

If you double click with your left mouse button on the SDF director you should see the parameters which can be changed for the director (as pictured below).

DDF Director

The Dynamic Dataflow (DDF) Director is often used for workflows that require looping, branching, or other control structures, but that do not require parallel processing (in which case a PN Director should be used). It schedules execution iteratively by searching for ready actors, which means that parts of the workflow can be executed conditionally, repeatedly, etc. By default it uses a set of rules that determine how to execute actors in a "basic iteration". Unlike the SDF director, which calculates the order in which actors execute and how many times each actor is executed before execution begins, this determines what actors to run at runtime, and the amount of data consumed and produced by each actor can vary in each basic iteration. This makes for very flexible workflows, and as such it is one of the most commonly used directors.

In the DDF domain each actor has a set of execution patterns and can be fired (executed) if one of them is satisfied (i.e. it is possible to create a set of different execution triggers based on different inputs being available). It is generally used if conditional or flow-control workflows are required, such as if-else or while/do-while, but where there is no requirement for actors to be executed concurrently.
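The runtime-scheduling idea can be sketched as follows. This is an illustrative Python sketch, not Kepler's implementation: the firing rule and actor function are hypothetical, but they show how while-loop behaviour emerges from repeatedly firing whichever actor is ready.

```python
# Sketch of DDF-style dynamic scheduling (NOT Kepler's API).
# Each actor has a firing rule checked at runtime; the scheduler keeps
# firing any actor whose rule is satisfied, so looping and branching
# emerge naturally instead of being fixed in a precomputed schedule.

from collections import deque

counter = deque([0])   # token queue feeding the "increment" actor
out = []

def can_fire_increment():
    # Firing rule: a token is available AND the while-condition holds.
    return bool(counter) and counter[0] < 3

def fire_increment():
    x = counter.popleft()
    out.append(x)            # record this loop iteration
    counter.append(x + 1)    # feed the result back for the next iteration

# Basic iteration: keep firing ready actors until none are ready.
while can_fire_increment():
    fire_increment()

print(out)
```

Note that how many times the actor fires is only discovered at runtime, which is exactly what the SDF Director cannot express.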

Using DDF Director

DDF Example (approx. 5 min)

This exercise uses a basic workflow which uses the DDF director. The workflow we will use is a simple example constructed to produce while and do-while behaviour in a Kepler workflow. The workflow incorporates actors and parameters that enable a user to specify the start and finish points for the while and do-while loops and then execute both loops at once. It also displays each iteration of the loops. Whilst this workflow does not do any work in the loop iterations, it would be straightforward to add actors which undertake work on each iteration.

1. Start the Kepler application by issuing:


2. Open the "while_loop" workflow by selecting File -> Open and navigating to:


3. You should see the DDF domain workflow shown in the image below:

Notice the SampleDelay actor, which is required to ensure that the workflow does not deadlock on the first iteration.

4. Run the workflow by pressing the play button and investigate the output produced.

Advanced Topic

5. Try removing the sample delay actor and re-running the workflow. To do this left click on the actor so it is highlighted in yellow and then press delete.

6. Now add the actor back again. To do this you will first need to search for the actor in the component library. Type "SampleDelay" into the search box in the top left hand corner of Kepler (see image below for further guidance).

7. Drag the actor that has been found onto the workflow by left clicking the SampleDelay text, holding the mouse button and dragging then releasing the mouse button when it is in the desired location.

8. Before the actor can be added to the network, its ports need to be correctly aligned. Right click on the actor and choose the "Customise Ports" option. Change the Direction from DEFAULT to EAST for the input port and WEST for the output port, then click Commit (see image below for further guidance).

9. The actor is now ready to re-connect to the network. Click and drag from the output port on the Is Present actor to the input port on the SampleDelay actor. Do the same from the output of SampleDelay to the input of DDF Boolean Select.

10. Finally, the initial value for the SampleDelay needs to be set. Double-click the actor with the left mouse button and change the initialOutputs parameter from {0} to {false}.

PN Director

Under a Process Network (PN) director, every actor gets its own execution thread and the director does not pre-define the execution schedule (or firing schedule) of the actors (as is done in the SDF director). The workflow is driven by when data is available: tokens are created on output ports whenever input tokens are available and output can be calculated. Execution is only finished when there are no new data token sources anywhere in the workflow.

Because PN workflows are very loosely coupled, they are natural candidates for managing workflows that require parallel processing on distributed computing systems. However, it is potentially the most difficult and error-prone director to use, as it does not guarantee any synchronisation or determinism. It is potentially one of the most powerful Kepler domains as there are few restrictions, but it can be inefficient because the director must keep checking for actors that have sufficient data to execute.

The same execution process that gives the PN Director its flexibility can also lead to some unexpected results: workflows may refuse to automatically terminate because data is always generated and available to receiving actors (i.e. a constant actor will always produce data by default). If one actor fires at a much higher rate than another, a receiving actor's memory buffer may overflow, causing the workflow execution to fail.
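The thread-per-actor, data-driven model can be sketched with standard Python threads and queues. This is illustrative only, not Kepler's implementation: the three actor functions and the STOP sentinel are hypothetical. The bounded queues also illustrate the buffering issue mentioned above, as a fast producer simply blocks here instead of overflowing.

```python
# Sketch of PN-style execution (NOT Kepler's API): each actor runs in
# its own thread and blocks on its input queue, so execution is driven
# purely by data availability, with no precomputed schedule.

import queue
import threading

q1 = queue.Queue(maxsize=4)   # bounded buffers: a fast upstream actor
q2 = queue.Queue(maxsize=4)   # blocks here rather than overflowing memory
STOP = object()               # sentinel so the workflow can terminate
results = []

def producer():
    for i in range(5):
        q1.put(i)             # blocks if the buffer is full
    q1.put(STOP)

def doubler():
    while True:
        x = q1.get()
        if x is STOP:
            q2.put(STOP)      # propagate termination downstream
            return
        q2.put(x * 2)

def consumer():
    while True:
        x = q2.get()
        if x is STOP:
            return
        results.append(x)

threads = [threading.Thread(target=f) for f in (producer, doubler, consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```

Note that termination has to be arranged explicitly (the STOP sentinel): as described above, a PN workflow only finishes when no actor can produce any more tokens, so a source that never stops producing keeps the workflow alive indefinitely.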

The PN director is generally not necessary for scientific workflows so we have not provided an example of using this director. However, if you are interested in using PN or think you have a requirement for it please contact EUFORIA support for more information.
