A single data pipeline and schedule might not be sufficient for all use cases. Therefore, an application can define more than one state, each associated with its own STM schedule.
Different schedules can vary in the set of nodes they include (besides other schedule properties), which effectively changes the data pipeline since parts of it are enabled / disabled. The set of instantiated nodes and channels, however, does not change.
The request to change the state at runtime comes from the System State Manager (SSM). If an application has more than one state, an additional process is invoked by the launcher: the schedule manager process connects to SSM and waits for state change requests.
The extended system context - highlighting the new entities - is depicted in the following diagram:
When a state change request is received, the schedule manager can respond in two different ways: it either accepts the request and performs the schedule switch, or it rejects the request and the current schedule keeps running.
Once a state change is accepted, the schedule manager provides two ways to perform the schedule switch (this is configurable): stopping the current schedule entirely before starting the new one, as described next, or rolling the hyperepochs over one by one (see the rolling schedule switch further below).
The switching happens in three sequential steps:
1. The currently running schedule is stopped.
2. All nodes are notified about the new state name (see dw::framework::Node::setState()). Nodes shouldn't perform any long running task in this function to make the schedule switch as fast as possible.
3. The schedule associated with the new state is started.
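To make step 2 concrete, the following is a minimal self-contained sketch of a node reacting to a state change. The base class, the state names, and the exact signature of setState() are stand-ins assumed for illustration, not copied from the framework headers.

```cpp
#include <cstring>

// Stand-ins so the sketch compiles on its own; in a real application these
// come from the DriveWorks framework headers (assumed, not verbatim).
enum dwStatus { DW_SUCCESS = 0, DW_FAILURE = 1 };
struct ExampleNodeBase
{
    virtual ~ExampleNodeBase() = default;
    // Assumed shape of dw::framework::Node::setState() for illustration.
    virtual dwStatus setState(const char* state) = 0;
};

class ExampleNode : public ExampleNodeBase
{
public:
    // Invoked during a schedule switch; keep it fast: only flip flags or
    // record the state name, never perform blocking or long running work.
    dwStatus setState(const char* state) override
    {
        m_useFallbackPath = (std::strcmp(state, "DEGRADED") == 0);
        return DW_SUCCESS;
    }

private:
    bool m_useFallbackPath{false}; // evaluated cheaply in the node's passes
};
```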
After the schedule switch has been completed, SSM is notified about it.
The following diagram illustrates the schedule manager and how it interacts with SSM as well as with the STM master and the nodes when SSM requests a state change.
Note: at the moment the schedule manager doesn't perform the three steps asynchronously but instead synchronously within the state change callback. As a consequence, the "Accept Request" response signaled by returning true from that callback only happens at the very end, after the "State Change Completed" has been sent to SSM.
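The ordering described in the note can be sketched as follows. The callback and helper names are hypothetical stand-ins, not the actual SSM or STM API; only the synchronous call order is the point.

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-ins for the SSM/STM interactions; only the call
// ordering matters for this illustration.
static bool isKnownState(const std::string& s) { return s == "STANDARD" || s == "DEGRADED"; }
static void stopCurrentSchedule() { std::cout << "step 1: stop current schedule\n"; }
static void notifyNodesOfNewState(const std::string& s) { std::cout << "step 2: setState(" << s << ") on all nodes\n"; }
static void startScheduleFor(const std::string& s) { std::cout << "step 3: start schedule of " << s << "\n"; }
static void notifySsmStateChangeCompleted() { std::cout << "SSM <- State Change Completed\n"; }

// All three steps run synchronously inside the state change callback, so
// "State Change Completed" reaches SSM before the callback's return value
// ("Accept Request") does.
static bool onStateChangeRequested(const std::string& newState)
{
    if (!isKnownState(newState))
    {
        return false; // "Reject Request": the current schedule keeps running
    }
    stopCurrentSchedule();
    notifyNodesOfNewState(newState);
    startScheduleFor(newState);
    notifySsmStateChangeCompleted(); // sent before returning ...
    return true;                     // ... so "Accept Request" arrives last
}

int main()
{
    onStateChangeRequested("DEGRADED");
    return 0;
}
```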
In a rolling schedule switch, the schedule manager requests a schedule change from STM using the per-hyperepoch/rolling schedule switch API.
This API stops hyperepochs in the current schedule as they finish while starting the corresponding hyperepochs in the new schedule, so the dead time is minimized.
This implies that two hyperepochs can be running in two different schedules/states at the same time until the STM API has finished rolling all hyperepochs over to the new schedule.
As the schedules are switched, STM informs the clients which hyperepochs are about to start the next schedule, and CGF notifies all nodes in those hyperepochs about the new state name (see dw::framework::Node::setState()).
After all hyperepochs are switched, the STM API returns success and the schedule manager notifies SSM that the state change is complete.
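The rolling behavior can be summarized with the sketch below. The Hyperepoch type and the function names are hypothetical; the point is that each hyperepoch rolls over independently, so the old and the new schedule run side by side until the last hyperepoch has switched.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical model of a rolling switch, for illustration only.
struct Hyperepoch
{
    std::string name;
    std::string activeSchedule;
};

static void notifyNodesInHyperepoch(const Hyperepoch& h, const std::string& state)
{
    std::cout << h.name << ": setState(" << state << ") on its nodes\n";
}

static void rollingScheduleSwitch(std::vector<Hyperepoch>& hyperepochs,
                                  const std::string& newSchedule,
                                  const std::string& newState)
{
    for (Hyperepoch& h : hyperepochs)
    {
        // While this loop runs, already switched hyperepochs execute the new
        // schedule and the remaining ones still execute the old one.
        notifyNodesInHyperepoch(h, newState); // cf. dw::framework::Node::setState()
        h.activeSchedule = newSchedule;       // this hyperepoch rolls over
    }
    // Only after the last hyperepoch has rolled over:
    std::cout << "schedule manager -> SSM: state change complete\n";
}

int main()
{
    std::vector<Hyperepoch> hyperepochs{{"cameraHyperepoch", "standardSchedule"},
                                        {"radarHyperepoch", "standardSchedule"}};
    rollingScheduleSwitch(hyperepochs, "degradedSchedule", "DEGRADED");
    return 0;
}
```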
The schema for .app.json files captures the states, and the schedule assigned to each state, under states and stmSchedules respectively.
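As an illustrative excerpt only: the state and schedule names below are made up, the stmScheduleKey and default keys are assumptions about the schema rather than quotes from it, and the schedule bodies are elided. Two states, each referencing its own schedule, might look like this:

```json
{
    "states": {
        "STANDARD": { "stmScheduleKey": "standardSchedule", "default": true },
        "DEGRADED": { "stmScheduleKey": "degradedSchedule" }
    },
    "stmSchedules": {
        "standardSchedule": { },
        "degradedSchedule": { }
    }
}
```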