Less automated than the Simple Flow type, a Legacy Flow involves establishing the Collection Information
(noted below) and configuring data Delivery for that Collection within a designated timeframe. A Legacy
Flow requires that you start each crawl run manually; during this process, Snapshots (extracted data)
are captured. While currently the default Flow type, Legacy Flows are slated for deprecation.
From the left navigation pane, click Flows.
From the top right of the Flows page, click the Add a Flow icon or plus (+) symbol.
From the New Flow page, enter text in the Name field.
The name you enter autofills the Slug/ID field. Slug/IDs are self-defined identifiers that you can
associate with many DOC platform objects; they can be useful when you reference APIs or create
variable names. You cannot change a Slug/ID after creation, so ensure the ones you define
are meaningful.
Use the drop-down arrow to select a Type.
You may choose from the Legacy, Simple, and Chained Flow types.
Ignore or clear the Active checkbox.
When selected, this checkbox ensures Deliveries run as scheduled, per the Cron Expression that
appears after you adjust the Do you want to add scheduling? toggle switch. The Active checkbox is
selected by default. If you clear it, no Deliveries run, even if a schedule is specified in the
Cron Expression. Clearing the Active checkbox lets you maintain the Flow without running scheduled
Deliveries; in this case, you can run the Flow manually (as needed) by clicking the Run Flow button
located at the top right of the page specific to that configured Flow.
Use the drop-down arrow to select a Collection.
A Collection is a group of Sources (or Extractors) that adheres to a particular Schema; it
represents the method by which to group Sources. Choose the Collection that contains the
Extractors whose data you want to send to the customer.
Adjust, as needed, the toggle switch: Do you want to add scheduling?
Sliding the toggle switch to the right or ON position triggers display of the Minutes, Hours,
Day (Month), Month, and Day (Week) fields, which are visible in the Cron Expression section.
Each field has specific entry criteria, which become evident as you access it.
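The five Cron Expression fields follow the standard cron field order. As an illustration, a hypothetical schedule that runs at 06:00 every Monday could be sketched as follows (the sample expression and the pairing logic are illustrative, not taken from the DOC UI):

```python
# The five Cron Expression fields, in standard cron order.
cron_fields = ["Minutes", "Hours", "Day (Month)", "Month", "Day (Week)"]

# Hypothetical schedule: minute 0, hour 6, any day of month,
# any month, day-of-week 1 (Monday).
expression = "0 6 * * 1"

# Pair each field name with its value for readability.
schedule = dict(zip(cron_fields, expression.split()))
print(schedule)
# {'Minutes': '0', 'Hours': '6', 'Day (Month)': '*', 'Month': '*', 'Day (Week)': '1'}
```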
The content you enter in this section is used primarily for Service Level Agreement (SLA) purposes.
As such, if the targets associated with the values you enter in the fields below are not met,
there is no significant impact. For example, if you enter “1” in the Hours to Collect Data field
and the crawl run does not complete within this time period, the miss may be noted on the
Delivery Snapshot page; however, there is no other consequence. In general, certain timelines
and metrics are tracked in the DOC environment; this information appears on Dashboards and is
evaluated by internal staffers.
Enter a value in the Hours to Collect Data field.
This value represents the time period (from start to finish) required to retrieve the extracted data.
This value is the crawl run completion timeframe and serves as the entire collection window for all
Snapshots in a Delivery.
Enter a value in the Hours from Start to Destination Push field.
This value represents the timeframe from crawl run start to the push to the Destination. When a
Snapshot transitions from Passed_QA to Pushed or Completed, it triggers a Destination push, which
moves customer files to the specified Destination. On the Delivery Snapshot page, a selected
Pushed in Window checkbox indicates that this timeframe was satisfied; in addition, the
Collected in Window column indicates the percentage of data retrieved during this period.
Enter a value in the Hours to Finish field.
This value represents the total Delivery window, from extraction start to Destination push, for all
Snapshots in the Delivery. Not every Snapshot begins at the same time; they are normally staggered
to reduce potential strain on the system. After this timeframe elapses, the Delivery is moved
to a Closed state; no in-progress (in-flight) action is canceled.
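Taken together, the three values above define nested deadlines measured from crawl run start. A minimal sketch, assuming illustrative values and variable names (the actual values are entered in the Flow form):

```python
from datetime import datetime, timedelta

# Illustrative values for the three Flow fields.
hours_to_collect = 1   # Hours to Collect Data
hours_to_push = 2      # Hours from Start to Destination Push
hours_to_finish = 4    # Hours to Finish (total Delivery window)

# Hypothetical crawl run start; all three deadlines are measured from it.
start = datetime(2024, 1, 1, 6, 0)
collect_by = start + timedelta(hours=hours_to_collect)   # collection window ends
push_by = start + timedelta(hours=hours_to_push)         # Destination push deadline
deliver_by = start + timedelta(hours=hours_to_finish)    # Delivery moves to Closed

print(collect_by, push_by, deliver_by)
# 2024-01-01 07:00:00 2024-01-01 08:00:00 2024-01-01 10:00:00
```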
To share helpful information about the Flow with team members, enter text in the README section.
This section, which supports Markdown syntax, allows you to provide additional context and insight about the Flow.
To save your entries, click Save; to discard them, click Cancel.
A Snapshot's time limit is calculated only when the Snapshot has a collectBy value. The time limit is the difference between the current time and the Snapshot's collectBy time (in minutes), and it is added to the crawl run object associated with the Snapshot. The msPerInput value (how long each input is expected to take in order to complete within the expected timeframe) is calculated for a crawl run only when a stopBy value is present on the Snapshot. It is derived by taking the difference between the current time and the Snapshot's stopBy time, then dividing that value by the number of inputs. If the current time is after the stopBy time (that is, the Delivery is running late), msPerInput is calculated from the deliverBy value instead.
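The calculation described above can be sketched as follows. The function names and the millisecond unit for msPerInput are assumptions for illustration; only the arithmetic follows the description:

```python
from datetime import datetime, timedelta

def time_limit_minutes(now, collect_by):
    """Time limit added to the crawl run: the difference between the
    current time and the Snapshot's collectBy time, in minutes."""
    return (collect_by - now).total_seconds() / 60

def ms_per_input(now, stop_by, deliver_by, num_inputs):
    """Expected milliseconds per input for a crawl run. Uses stopBy
    normally; falls back to deliverBy when the Delivery is running late."""
    deadline = stop_by if now < stop_by else deliver_by
    remaining_ms = (deadline - now).total_seconds() * 1000
    return remaining_ms / num_inputs

# Hypothetical Snapshot deadlines.
now = datetime(2024, 1, 1, 6, 0)
stop_by = now + timedelta(hours=1)
deliver_by = now + timedelta(hours=3)

print(time_limit_minutes(now, now + timedelta(minutes=90)))  # 90.0
print(ms_per_input(now, stop_by, deliver_by, 1000))          # 3600.0 ms per input
```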
Note: You cannot delete Flows.