    Compute Graph Framework SDK Reference 5.22
    Schedule Switching

    A single data pipeline and schedule might not be sufficient for all use cases. Therefore, an application can define more than one state, each associated with its own STM schedule.

    Different schedules can vary in the set of nodes they include (besides other schedule properties), which effectively changes the data pipeline, since parts of it are enabled or disabled. The instantiated nodes and channels themselves don't change, though.

    The request to change the state during runtime comes from the System State Manager (SSM). If an application has more than one state, an additional process is invoked by the launcher: the schedule manager process connects to SSM and waits for state change requests.

    Extended System Context

    The extended system context - highlighting the new entities - is depicted in the following diagram:

    System Context with SSM

    State Change Request

    When a state change request is received, the schedule manager can respond in two different ways:

    • reject the state change request (e.g. because the requested schedule is not ready since not all nodes that are part of that schedule have been loaded yet), or
    • accept the state change request and start the switching.
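    The accept/reject decision can be sketched as follows. This is a hypothetical simplification, not the actual SDK API: the class, method, and member names are illustrative, and the only idea taken from the text is that a request is rejected while nodes of the requested schedule are still being loaded.

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical sketch: a schedule is "ready" once every node it
// requires has been loaded. The real schedule manager tracks this
// internally; all names here are illustrative.
struct Schedule {
    std::set<std::string> requiredNodes;
};

class ScheduleManagerSketch {
public:
    void onNodeLoaded(const std::string& node) { m_loaded.insert(node); }

    // Mirrors the state change callback: return true to accept the
    // request and start switching, false to reject it.
    bool onStateChangeRequest(const Schedule& schedule) const {
        for (const auto& node : schedule.requiredNodes) {
            if (m_loaded.count(node) == 0) {
                return false; // reject: schedule not ready yet
            }
        }
        return true; // accept: all nodes of the schedule are loaded
    }

private:
    std::set<std::string> m_loaded;
};
```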

    The schedule manager provides two configurable ways to perform the schedule switch once a state change is accepted:

    • Synchronized Schedule Switch
    • Rolling Schedule Switch

    Synchronized Schedule Switch

    The switching happens in three sequential steps:

    1. The current STM schedule is stopped. Currently, STM can only stop a schedule at hyperepoch boundaries. This implies that stopping an ongoing schedule might take up to the length of the longest hyperepoch of the current schedule.
    2. All nodes are notified of the new state name (see dw::framework::Node::setState()).
      Nodes shouldn't perform any long-running task in this function, to keep the schedule switch as fast as possible.
      
    3. The new (or potentially the same) STM schedule is started.

    After the schedule switch has been completed, SSM is notified about it.
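    The guidance in step 2 can be sketched as follows. The base class here is a simplified stand-in for dw::framework::Node (the real one lives in the SDK), and MyNode is hypothetical: the callback only records the new state name so it stays cheap, deferring any heavy reconfiguration to the node's regular passes.

```cpp
#include <cassert>
#include <string>

// Simplified stand-in for dw::framework::Node; the real base class
// is provided by the SDK.
class NodeStub {
public:
    virtual ~NodeStub() = default;
    virtual void setState(const char* state) = 0;
};

class MyNode : public NodeStub {
public:
    // Keep this callback cheap: just remember the requested state so
    // the schedule switch completes as fast as possible.
    void setState(const char* state) override {
        m_pendingState = state;
        m_stateDirty = true;
    }

    // Regular per-epoch work; state-dependent reconfiguration is
    // applied lazily here instead of inside setState().
    void process() {
        if (m_stateDirty) {
            // ... apply state-dependent reconfiguration ...
            m_stateDirty = false;
        }
        // ... regular processing ...
    }

    const std::string& pendingState() const { return m_pendingState; }

private:
    std::string m_pendingState;
    bool m_stateDirty{false};
};
```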

    The following diagram illustrates the schedule manager and how it interacts with SSM as well as with the STM master and the nodes when SSM requests a state change.

    Synchronized Schedule Switching in 3 Steps

    Note: currently the schedule manager doesn't perform the three steps asynchronously but synchronously within the state change callback. As a consequence, the "Accept Request" response, signaled by returning true from that callback, only happens at the very end, after "State Change Completed" has already been sent to SSM.

    Rolling Schedule Switch

    In a rolling schedule switch, the schedule manager requests a schedule change from STM using the per-hyperepoch (rolling) schedule switch API.

    This STM API stops hyperepochs in the current schedule as they finish while starting hyperepochs in the new schedule, so the dead time is minimized.

    This implies that two hyperepochs can be running two different schedules/states at the same time while the STM API finishes rolling all hyperepochs over to the new schedule.

    As the schedules are switched, STM informs the clients which hyperepochs are about to start the next schedule, and CGF notifies all nodes in the hyperepoch about the new state name (see dw::framework::Node::setState()).

    After all hyperepochs have switched, the STM API returns success, and the schedule manager notifies SSM that the state change is complete.
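    The rolling hand-over described above can be sketched as the following loop. All types and names are illustrative stand-ins; the real per-hyperepoch switching logic lives inside the STM API, and the only behavior taken from the text is that each hyperepoch's nodes are notified just before that hyperepoch starts the new schedule.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch of the rolling hand-over: each hyperepoch is
// moved to the new schedule as soon as its current iteration ends,
// while other hyperepochs may still be running the old schedule.
struct HyperepochSketch {
    std::string name;
    std::string activeState;
};

std::vector<std::string> rollingSwitch(std::vector<HyperepochSketch>& hyperepochs,
                                       const std::string& newState) {
    std::vector<std::string> notificationOrder;
    for (auto& he : hyperepochs) {
        // STM stops this hyperepoch's current schedule at its boundary,
        // CGF notifies the hyperepoch's nodes (Node::setState()), then
        // the hyperepoch continues under the new schedule.
        notificationOrder.push_back(he.name);
        he.activeState = newState;
    }
    // All hyperepochs switched: the caller can now report success.
    return notificationOrder;
}
```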

    Rolling Schedule Switching

    References

    The schema for .app.json files captures the states, and the schedule assigned to each state, under the keys states and stmSchedules.
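    A minimal sketch of such a fragment is shown below. Only the top-level keys states and stmSchedules come from the text above; the state names, the stmScheduleKey property, and the elided schedule bodies are illustrative assumptions, not the authoritative schema.

```json
{
    "states": {
        "STANDARD": {"stmScheduleKey": "standardSchedule", "default": true},
        "DEGRADED": {"stmScheduleKey": "degradedSchedule"}
    },
    "stmSchedules": {
        "standardSchedule": "…",
        "degradedSchedule": "…"
    }
}
```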
