Flow Control Nodes

Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: Basic understanding of Data Flow concepts

1. Overview

What are Flow Control Nodes?

Flow Control nodes manipulate and control the execution path of your data flows. They enable parallel processing, branching, timing control, error handling, and resource locking - providing fine-grained solutions to address even the most complex scenarios without custom code.

Why are they important?

Enterprise integrations rarely follow a simple linear path. You may need to query multiple systems simultaneously, wait for external processes, prevent concurrent execution conflicts, or gracefully handle failures. Flow Control nodes provide the orchestration logic that makes complex workflows reliable and maintainable.

The 10 Flow Control Node Types

Node Type | Primary Purpose | Best For
Broadcast (DEPRECATED) | Stream individual messages from an array/object (For-Each) | Do not use; remove from existing flows
Fork | Execute multiple branches in parallel with the same payload | Querying multiple systems simultaneously
Combine | Recombine messages from Fork or Broadcast | Merging parallel execution results
Collect | Accumulate messages by time or count | Batching events for bulk processing
Schedule | Trigger a flow on defined frequencies | Scheduled jobs, polling, recurring reports
Delay | Pause execution for a specified time | Rate limiting, waiting for external processes
Mutex Lock | Acquire an exclusive lock on a resource | Preventing concurrent processing conflicts
Mutex Unlock | Release a lock acquired by Mutex Lock | Completing locked sections
Try Catch | Catch and handle errors from downstream nodes | Graceful error handling, logging failures
Throw Error | Explicitly throw an error with a custom message | Business rule violations, custom error formatting

2. Broadcast Node (DEPRECATED)

Purpose & Use Cases

The Broadcast node outputs a stream of individual messages based on an input array or object. It implements "For-Each" or "Map" style logic, with subsequent nodes operating on each individual element rather than the collection as a whole. Non-object values are wrapped into a single-element array.

⚠️ DEPRECATION WARNING: The Broadcast node is deprecated and should NOT be used. If any of your existing flows use the Broadcast node, remove it and redesign those flows as soon as possible. Broadcast produces flows that scale poorly, perform poorly, and are failure-prone.

Why Broadcast Performs Poorly:

The number of requests made grows multiplicatively with the number of records in the input payload, following (Number of Parts) × (Number of Nodes) = Total Requests:

  • 10 Parts × 3 Nodes = 30 requests (acceptable)
  • 1,000 Parts × 3 Nodes = 3,000 requests (problematic)
  • Using Script Node batching: 3 requests total (optimal)
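The request arithmetic above can be checked with a one-line helper; `totalRequests` is purely illustrative, not a platform API:

```javascript
// Each part emitted by Broadcast flows through every downstream node,
// so request volume multiplies: (parts) × (nodes) = total requests.
function totalRequests(parts, downstreamNodes) {
  return parts * downstreamNodes;
}

totalRequests(10, 3);   // 30   - acceptable
totalRequests(1000, 3); // 3000 - problematic
```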

Operating Modes

For-Each Mode (Object Input): If provided an object, Broadcast sends each property as a single node execution.

Map Mode (Array Input): If provided an array of values, Broadcast sends each index of the array as a single node execution.

Required Alternative: Use a Script Node to format data for batch operations. Instead of broadcasting 1,000 items through 3 nodes (3,000 requests), format all data in one Script Node and make bulk API calls (3 requests total). All existing Broadcast nodes must be replaced with this pattern.
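A minimal sketch of the batching pattern: instead of emitting one message per item, a Script Node can group the whole payload into a few bulk request bodies. The `toBulkRequests` helper and the batch size are illustrative assumptions, not platform functions:

```javascript
// Group all items into bulk request bodies so only a handful of API
// calls are made, regardless of how many records are in the payload.
function toBulkRequests(items, batchSize = 500) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push({ records: items.slice(i, i + batchSize) });
  }
  return batches; // e.g. 1,000 items with batchSize 500 → 2 request bodies
}
```

Each entry in the returned array would then be sent to the target system's bulk endpoint in a single request.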

3. Fork Node

Purpose & Use Cases

The Fork node creates batches of messages based on branches in the flow. Each branch executes in parallel with the full payload passed to the Fork node. This allows flows to run different flow paths on the same payload simultaneously.

The forked branches can be recombined into a single array using a Combine node. When combined, the data from each path is merged into a single Object with the branch name as the property name.

Configuration Parameters

Parameter | Description
Branches | Add one or more branches linked to downstream nodes. Name branches to reflect downstream actions.

Fork/Combine Example

Fork 1 Output:

{"data": {"Parts": [{"Part_No": "P123", "Revision": "A"}]}}

Fork 2 Output:

{"data": {"Customer": {"name": "Acme Industries"}}}

After Combine:

{
  "Fork 1": {"data": {"Parts": [{"Part_No": "P123", "Revision": "A"}]}},
  "Fork 2": {"data": {"Customer": {"name": "Acme Industries"}}}
}

Best Practice: Name your branches descriptively (e.g., "Get Parts", "Get Customer") rather than using default names. This makes the combined output self-documenting.
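The merge behavior above can be simulated in plain JavaScript; `combineBranches` is an illustrative helper that mimics keying each branch's output by its branch name, not a platform function:

```javascript
// Combine keys each branch's payload by the branch name.
function combineBranches(results) {
  const combined = {};
  for (const { branch, payload } of results) {
    combined[branch] = payload;
  }
  return combined;
}

const merged = combineBranches([
  { branch: "Fork 1", payload: { data: { Parts: [{ Part_No: "P123", Revision: "A" }] } } },
  { branch: "Fork 2", payload: { data: { Customer: { name: "Acme Industries" } } } },
]);
// merged now matches the "After Combine" shape shown above
```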

4. Combine Node

Purpose & Use Cases

The Combine node combines batches of messages created through Fork or Broadcast nodes back into a single message. The default configuration covers most use cases.

Configuration Parameters

Parameter | Default | Description
Timeout (s) | 10 | How long, in seconds, to wait for all messages before erroring
Payload Combination Strategy | Index | Index (array), Last (single payload), or Merge (combined object)
Context Combination Strategy | Last | Index (array), Last (single context), or Merge (combined context)
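The three payload strategies behave roughly as follows; this is an illustrative sketch of the semantics, not the platform's implementation:

```javascript
// Index: collect payloads into an array, in order.
// Last:  keep only the last payload received.
// Merge: shallow-merge all payloads into one object.
function combinePayloads(payloads, strategy) {
  switch (strategy) {
    case "Index": return payloads;
    case "Last":  return payloads[payloads.length - 1];
    case "Merge": return Object.assign({}, ...payloads);
    default: throw new Error(`Unknown strategy: ${strategy}`);
  }
}
```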

5. Collect Node

Purpose & Use Cases

The Collect node collects messages by time or count, emitting an array of collected messages when the threshold is reached. Useful for batching incoming events for bulk processing.

Configuration Parameters

Parameter | Description
Batch Count | Number of messages to collect before emitting
Batch Time (ms) | Time in milliseconds to wait before emitting stored messages
Payload Combination Strategy | Index (array), Last (single payload), or Merge (combined object)
Context Combination Strategy | Index (array), Last (single context), or Merge (combined context)

Best Practice: Always use Batch Count in combination with Batch Time. If you set Batch Count to 5 but only receive 3 messages, the node will not emit until it receives the other 2 messages. Setting both ensures messages are emitted on the timeout as a fallback.
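A deterministic sketch of why both thresholds matter, using an explicit logical clock instead of real timers; the `Collector` class is illustrative only:

```javascript
// Emits a batch when either batchCount messages arrive
// or batchTimeMs has elapsed since the first buffered message.
class Collector {
  constructor(batchCount, batchTimeMs) {
    this.batchCount = batchCount;
    this.batchTimeMs = batchTimeMs;
    this.buffer = [];
    this.startedAt = null;
  }
  push(msg, now) {
    if (this.buffer.length === 0) this.startedAt = now;
    this.buffer.push(msg);
    if (this.buffer.length >= this.batchCount) return this.flush();
    return null; // still collecting
  }
  tick(now) {
    if (this.buffer.length && now - this.startedAt >= this.batchTimeMs) {
      return this.flush(); // timeout fallback emits a partial batch
    }
    return null;
  }
  flush() {
    const batch = this.buffer;
    this.buffer = [];
    return batch;
  }
}
```

With only a Batch Count of 5, three buffered messages would sit forever; the time-based `tick` is what releases the partial batch.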

6. Schedule Node

Purpose & Use Cases

The Schedule node is a source node that emits a payload on one or more frequencies defined by Data Flow Schedules. It provides flexible cron-based scheduling with timezone support.

Configuration

Parameter | Description
Configure | Link that opens the Data Flow Schedule Editor in a new browser tab

The Schedule Editor allows you to configure multiple frequencies with different cron schedules, timezones, and payloads. Key schedule properties include:

  • Name: Descriptive name for the schedule
  • Active: Enable/disable the schedule
  • Input Schema: JSON Schema defining inputs rendered on frequency dialogs
  • Frequencies: Multiple cron-based schedules with individual payloads

Note: The older Schedule Node V1 is deprecated. Use the new Schedule node, which links to the Data Flow Schedule Editor for more flexible configuration.
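As an illustration of how the properties above fit together, a schedule definition might take roughly this shape. The property names here are hypothetical; the actual fields in the Data Flow Schedule Editor may differ:

```json
{
  "name": "Nightly PO Sync",
  "active": true,
  "inputSchema": {
    "type": "object",
    "properties": { "region": { "type": "string" } }
  },
  "frequencies": [
    { "cron": "0 2 * * *", "timezone": "America/Detroit", "payload": { "region": "NA" } },
    { "cron": "0 2 * * *", "timezone": "Europe/Berlin", "payload": { "region": "EU" } }
  ]
}
```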

7. Delay Node

Purpose & Use Cases

The Delay node pauses execution of the next nodes by a specified time. Useful for rate limiting API calls, waiting for external processes to complete, or implementing polling patterns.

Configuration Parameters

Parameter | Description
Delay (ms) | Time in milliseconds the node should wait before executing the next nodes
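For rate limiting, the delay value can be derived from the target request rate; a small illustrative calculation (not a platform function):

```javascript
// To stay under maxCallsPerSecond, space calls by at least this many ms.
function delayMsForRate(maxCallsPerSecond) {
  return Math.ceil(1000 / maxCallsPerSecond);
}

delayMsForRate(4);  // 250 ms between calls
delayMsForRate(10); // 100 ms between calls
```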

8. Mutex Lock / Unlock Nodes

Purpose & Use Cases

The Mutex Lock node blocks execution until an exclusive lock is acquired on a resource. This is useful when you need to ensure a portion of a flow or the entire flow can only run once for a specific resource.

Example: You have a flow that runs on a schedule every minute, and you don't want it to process the same data twice. Use a Mutex Lock node to block the next execution from running while the prior execution is still running.

Mutex Lock Configuration

Parameter | Description
Resource ID | Transformable field indicating the lock name. For flow-level locking, use a hard-coded string.
Lock TTL (ms) | How long the lock can be held. Auto-unlocks after this time. Min: 30,000 ms; Max: 300,000 ms
Lock Retries | Retry attempts. Set -1 for infinite retries, or a value greater than 0 for a specific retry count.
Throw Error When Lock Not Acquired | If enabled, throws an error when the lock cannot be acquired. Otherwise, the message is discarded.

Critical: You MUST add a Mutex Unlock on all terminating routes of the flow, or you risk the lock persisting into the next execution until the Lock TTL expires.
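The lock semantics (exclusive acquire, TTL auto-expiry) can be sketched with an in-memory map and an explicit clock; this is an illustration of the behavior, not the platform's implementation:

```javascript
// In-memory mutex with TTL auto-unlock, driven by a logical clock.
const locks = new Map(); // resourceId → expiry timestamp (ms)

function tryLock(resourceId, ttlMs, now) {
  const expiry = locks.get(resourceId);
  if (expiry !== undefined && expiry > now) return false; // still held
  locks.set(resourceId, now + ttlMs); // acquire (or take over an expired lock)
  return true;
}

function unlock(resourceId) {
  locks.delete(resourceId);
}
```

If `unlock` is never called, the resource only becomes available again once `now` passes the stored expiry, which is exactly why a missing Mutex Unlock stalls the next scheduled execution until the TTL elapses.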

Mutex Unlock

The Mutex Unlock node releases the last lock placed by a Mutex Lock node. It has no parameters - it automatically releases the most recent lock acquired in the flow.

9. Try Catch Node

Purpose & Use Cases

The Try Catch node catches errors that occur in downstream nodes. When an error occurs on the Try route, nodes linked to the Catch port are executed instead.

When no nodes are linked to the Catch port, the error is thrown and stops flow execution. You can also use this deliberately: placing a Try Catch with an empty Catch port after a prior Try Catch clears the earlier handler, so subsequent errors stop execution again.

Node Outputs

Output | Description
Try (port) | Normal execution path; link to nodes that may throw errors
Catch (port) | Error handling path; link to error-handling nodes
$state.lastError | State property containing the caught error during Catch-branch execution

Best Practice: Never loop back into the flow from the Catch port. Without an iterator or exit condition, this can create an infinite loop inside the Data Flow.
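The node's behavior maps closely onto a plain try/catch; here `$state.lastError` is modeled as a property on a state object, and `runWithTryCatch` is an illustrative helper only:

```javascript
// Mirrors the Try Catch node: run the Try path; on failure, expose the
// error as state.lastError and run the Catch path instead.
function runWithTryCatch(state, tryBranch, catchBranch) {
  try {
    return tryBranch(state);
  } catch (err) {
    state.lastError = err;
    if (!catchBranch) throw err; // no Catch port linked → error propagates
    return catchBranch(state);
  }
}
```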

10. Throw Error Node

Purpose & Use Cases

The Throw Error node throws a ThrowBindingError which halts execution. The error uses the provided Message, and the data includes an object with an 'info' property holding the Error Info parameter value.

Example: A Data Flow integrates a Purchase Order with an external system. When creation fails, the error needs to be rethrown with custom formatting after being caught by a Try Catch node.

Configuration Parameters

Parameter | Description
Message | String or transform for a dynamic message, e.g. "PO " & $state.context.purchaseOrder.number & " failed creation"
Error Info | Transformable field accepting any valid JSON, pushed to the error's info property

Example Output

Message Transform: "PO " & $state.context.purchaseOrder.number & " failed creation"

Error Info Transform: {"purchaseOrder": $state.context.purchaseOrder}

Resulting Error:

{
  "message": "PO 145 failed creation",
  "info": {
    "purchaseOrder": {
      "number": "145",
      "supplier": {"name": "Acme Industries"}
    }
  }
}

11. Best Practices

Error Handling Patterns

Pattern 1: Log and Continue

Try Catch → (Catch) Log Node → Continue Processing

Pattern 2: Rethrow with Context

Try Catch → (Catch) Throw Error (formatted message with $state.lastError)

Pattern 3: Notify on Failure

Try Catch → (Catch) Send Notification → Log Error
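Pattern 2 builds a new error from the caught one; in JavaScript terms, the Throw Error node's Message and Error Info transforms behave roughly like this illustrative helper (the `cause` field is an assumption added for context, not part of the platform's output):

```javascript
// Builds the rethrown error's message and info from the caught error
// and the purchase order held in flow context.
function rethrowWithContext(lastError, purchaseOrder) {
  return {
    message: `PO ${purchaseOrder.number} failed creation`,
    info: { purchaseOrder, cause: lastError.message },
  };
}
```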

Concurrency Patterns

Pattern: Prevent Overlapping Scheduled Executions

Schedule → Mutex Lock (hard-coded Resource ID) → Process → Mutex Unlock

Pattern: Resource-Level Locking

Mutex Lock (Resource ID = $.orderId) → Update Order → Mutex Unlock

Performance Guidelines

  • Remove Broadcast: Broadcast is deprecated - redesign flows using Script Node batching
  • Use Fork for Parallelism: Query multiple systems simultaneously, then Combine results
  • Set Reasonable Timeouts: Combine and lock timeouts should account for expected processing time
  • Always Unlock: Ensure Mutex Unlock is on ALL terminating paths, including error paths

12. Troubleshooting

Symptom | Likely Cause | Resolution
Combine node times out | One Fork branch failed or takes too long | Add Try Catch to each branch; increase timeout
Collect node never emits | Batch Count set but Batch Time not configured | Always set both Batch Count and Batch Time
Mutex lock persists between executions | Mutex Unlock missing on an error path | Add Mutex Unlock to ALL terminating paths, including Catch
Infinite loop in error handler | Catch port loops back into flow without an iterator | Never loop from Catch; terminate, or log and exit
Broadcast flow performs poorly | Broadcast is deprecated and scales poorly | Remove the Broadcast node; redesign using Script Node batching
$state.lastError is empty in Catch | Error occurred before Try Catch was set | Place Try Catch before the nodes that may error
13. Related Resources

  • Source & Trigger Nodes Complete Guide - Initiating flows and trigger mechanisms
  • Transform Nodes Complete Guide - Data format conversion and array manipulation
  • Script Nodes Complete Guide - JSONata and JavaScript custom logic
  • Debugging Nodes Complete Guide - Echo, Log, and diagnostic tools
  • Data Flow Design Standards - Best practices for flow architecture
  • Platform Website: fuuz.com

14. Revision History

Version | Date | Author | Description
1.0 | 2025-01-01 | Craig Scott | Initial release - Complete guide covering all 10 flow control nodes with configuration parameters, examples, best practices, and troubleshooting.