A single data pipeline and schedule might not be sufficient for all use cases. Therefore, an application can define more than one state, each associated with its own STM schedule.
Different schedules can vary in the set of nodes they include (besides other schedule properties), which effectively changes the data pipeline, since parts of it are enabled or disabled. The set of instantiated nodes and channels, however, does not change.
The request to change the state during runtime comes from the System State Manager (SSM). If an application has more than one state, the launcher invokes an additional process: the schedule manager, which connects to SSM and waits for state change requests.
The extended system context, highlighting the new entities, is depicted in the following diagram:
When a state change request is received, the schedule manager can respond in one of two ways:
The switching happens in three sequential steps:
`dw::framework::Node::setState()`). Nodes shouldn't perform any long-running tasks in this function, to keep the schedule switch as fast as possible.
After the schedule switch has been completed, SSM is notified about it.
The following diagram illustrates how the schedule manager interacts with SSM, the STM master, and the nodes when SSM requests a state change.
Note: at the moment the schedule manager doesn't perform the three steps asynchronously but synchronously within the state change callback. As a consequence, the "Accept Request" response, signaled by returning `true` from that callback, only happens at the very end, after "State Change Completed" has been sent to SSM.
The schema for `.app.json` files captures the states, and the schedule assigned to each state, under the `states` and `stmSchedules` keys.
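A minimal illustrative fragment might look like the following. Only the `states` and `stmSchedules` keys come from the schema described above; all state names, schedule names, and nested fields here are hypothetical placeholders, and the elided schedule bodies are marked with `...`:

```json
{
    "states": {
        "STANDARD": { "stmScheduleKey": "standardSchedule" },
        "DEGRADED": { "stmScheduleKey": "degradedSchedule" }
    },
    "stmSchedules": {
        "standardSchedule": { ... },
        "degradedSchedule": { ... }
    }
}
```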