Data Flow Design Standards

Article Type: Standard / Reference
Audience: Solution Architects, Application Designers, Developers
Module: Fuuz Platform - Data Flow Designer
Applies to Versions: 2025.12+

1. Overview

Data Flow Design Standards define the mandatory requirements and best practices for designing and deploying Data Flows in the Fuuz Platform. These standards ensure consistency across flow designs, provide inline documentation for future reference, make flows easier for others to support, and track all changes for visibility. All standards must be met before deploying a Data Flow to a production environment.

Depending on the platform version, many of these standards are validated automatically and violations can block deployment. Violations of standards and requirements are acceptable while working on flows during development, but all violations must be resolved before requesting a production deployment.

Purpose

These standards are established to:

  • Improve the consistency of flow designs across the organization
  • Provide inline documentation for future reference and knowledge transfer
  • Increase the efficiency of support by others including Fuuz support teams
  • Track all changes for visibility and audit compliance
  • Ensure deployment readiness and production stability
  • Enable effective troubleshooting through meaningful error log messages
Important: Violations of standards are acceptable while working on flows during development and testing. Always resolve all violations before requesting a production deployment. Automated validation checks in newer versions may prevent deployment if violations exist.

2. Architecture & Data Flow

Data Flow Structure

Data Flows in Fuuz consist of interconnected nodes that process data, integrate with external systems, and orchestrate business logic. Each flow follows a standard architecture pattern:

Event Trigger (Request/Data Change/Schedule)
    ↓
Conditional Filter (minimize executions)
    ↓
Context Setup (label all data)
    ↓
Business Logic Processing
    ↓
Integration Calls (if needed)
    ↓
Response/Completion

Flow Categories

  • Integration Flows: Bidirectional data synchronization between Fuuz and external systems (ERP, MES, etc.)
  • Event-Driven Flows: Respond to data change events, Request-Response triggers, or scheduled events
  • Business Logic Flows: Internal Fuuz data transformations and orchestration
  • IIoT Flows: Device gateway communication for reading/writing PLC tags and sensor data
  • Notification Flows: Alert generation and communication workflows

Key Principles

  • Explicit Over Implicit: All nodes must have meaningful names and descriptions—no default naming allowed
  • Context Hygiene: Maintain clean context with labeled variables; prune unnecessary data to control memory usage
  • Error Handling: Every Request flow must have at least one Response node
  • Performance Awareness: Minimize broadcasts and forks; filter data changes early to reduce executions
  • Documentation First: Flows, versions, and nodes must be documented before deployment

3. Use Cases

  • ERP Integration: Bidirectional synchronization of products, customers, locations, work orders, and inventory between Fuuz and NetSuite, Plex, SAP, or other ERP systems
  • Real-Time Data Change Processing: Automatically propagate Fuuz data changes to external systems using data change event triggers with immediate conditional filtering
  • Scheduled Data Synchronization: Daily, hourly, or custom schedule-based imports/exports of master and transactional data
  • Work Order Automation: Prepopulate work order process records from default product strategy processes when work orders are created
  • IIoT Data Collection: Read multiple PLC tags simultaneously from device gateways for production monitoring and control
  • Notification Workflows: Generate and route alerts based on production events, quality issues, or business rule violations
  • Data Transformation: Complex JSONata transformations for mapping external data structures to Fuuz data models
  • Pagination Handling: Process large datasets using topic publishing/subscribing patterns for efficient memory management
  • API Orchestration: Coordinate multiple API calls with proper error handling and retry logic
  • Badge In/Out Workflows: Time tracking and labor management flows responding to operator badge events

4. Screen Details

Naming Standards

Properly naming the flow is vital. Use the following format:

{#1 Primary Fuuz Object} {#2 direction or verb(s)} {#3 External Platform} {#4 Name of Primary External Object} {#5 Event Type}

Naming Components Explained

  • #1 Primary Fuuz Object: The primary Fuuz data model being pulled from or pushed into. Examples: Locations, Facilities, WorkOrder, Products, Customers
  • #2 Direction or Verb: From, To, or action verbs. Examples: From, To, Badge In/Out, Workcenter Mode
  • #3 External Platform: External system name; include (API) if using the API rather than the native connector. Examples: NetSuite, Plex, Plex (API), QuickBooks, RedZone
  • #4 External Object Name: Only required if the external object name differs from the Fuuz object. Examples: Bins, Location, Ledger, Processes
  • #5 Event Type: Currently only required for data change events. Example: on Data Change

Naming Examples

  • Locations From NetSuite Bins - Standard integration sending locations to Fuuz from NetSuite's list of Bins
  • Locations To NetSuite Bins on Data Change - Standard integration sending locations to NetSuite's Bins from Fuuz when updates are captured with data change events
  • Facilities From NetSuite Locations - Integration from NetSuite, highlighting a case where the Fuuz object does not line up with the NetSuite object
  • WorkOrder Process Defaults From Fuuz Default Product Strategy Processes on Data Change - Flow prepopulates WorkOrder Process records with a copy from default Product Strategy Process tables on data change event (specifically the add event)
Important Distinction: Plex and Plex (API) are very different sources. Always specify whether you are using the native connector or the API integration.

Note: Renaming a flow does not change the flow's ID. If you need to rename a flow, export its contents into an empty flow with the new name.

Documentation Standards

Flow Description

Every flow must have a meaningful description including:

  • What the flow does (overall purpose)
  • Which connections are used
  • Overall summary of the workflow

Version Description

Every version must have a description of the changes being made. When making additional changes after setting the initial description, update it to reflect all changes in that version.

Node Description

Every node must have a meaningful description.

Critical Rule: There are NO exceptions to the node description requirement. Every single node in every flow must be documented with a meaningful description.

Walkthrough

Every flow must have a walkthrough. Every walkthrough must include all nodes in a logical order. The walkthrough is used during the review and deployment process.

Important: The configured walkthrough is stored directly inside the flow. Save the flow after configuring the walkthrough to ensure the walkthrough persists.

5. Technical Details

General Standards

  • All nodes must have a unique and meaningful name (error log messages must be useful for troubleshooting)
  • Nodes cannot use the default name even if that name is unique
  • All nodes with a connection on the left side must be connected to another node
  • No orphaned or disconnected nodes allowed in deployed flows

Debug Nodes

  • Descriptions are important; use them to highlight specific test cases
  • Test cases are required as proof of testing prior to deployment
  • Document expected values and validation criteria in debug node descriptions
  • Remove or disable debug nodes before production deployment if they impact performance

Conditionals

  • Be descriptive in the conditional expressions
  • Avoid double negatives (example: $not(false)) in conditional logic
  • Use If/Else in most situations, even if the else condition goes nowhere
  • Exception: If the If/Else recommendation results in a double negative to continue on a true path, consider using the else path instead
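As a minimal JSONata sketch of the double-negative pitfall (the status field name is illustrative, not a platform-defined value):

// HARD TO READ - double negative on the true path
$not(status != "Open")

// CLEARER - positive logic
status = "Open"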

Context Management

Labeling Requirements

Label everything in context with a wrapping variable:

  • Never set or merge context with a transform that is just "$"
  • Use "{plexResponse:$}" instead of "$"
  • Always wrap response data with meaningful variable names
// WRONG - No context labeling
$

// CORRECT - Properly labeled context
{
  "plexResponse": $,
  "timestamp": $now(),
  "processedCount": $count(items)
}
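Wrapping the payload this way lets downstream nodes reference fields explicitly (for example, plexResponse.status), and lets a later pruning step drop the entire wrapper in one operation.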

Context Hygiene

  • If the context has unnecessary data, prune it prior to storage
  • If the context is large and no longer needed, remove it
  • Minimize context size before forks and broadcasts
Warning: Context is shared with each node downstream. Forks and broadcasts will significantly increase memory usage by a factor equal to the number of concurrent transactions. It is very important to keep a clean context with only what you need during these situations.
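As a hedged illustration, a pruning transform placed before a fork or broadcast might keep only the fields downstream nodes actually need; the plexResponse wrapper and field names below are assumptions for illustration:

// Keep only the identifiers needed downstream; drop the bulky raw response
{
  "workOrderId": plexResponse.workOrderId,
  "status": plexResponse.status
}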

Flow Control

Broadcasts are dangerous. In most cases, there is another way to solve the problem. Due to the nesting limitations of a flow, broadcasting quickly consumes resources and will cause errors. Avoid broadcasts whenever possible and use alternative patterns like topic publishing/subscribing.

  • Use topic publishing and subscribing instead of broadcasts for pagination
  • Minimize use of fork nodes when possible
  • Consider sequential processing over parallel when memory is a concern
  • Document any necessary broadcasts with justification

Events

Request-Response Pattern

All flows with a Request node must have at least one Response node. This ensures proper completion of request-response cycles and prevents hung requests.

Data Change Events

Data change events can trigger a large number of flow executions. They should be followed immediately by a conditional statement that reduces executions by filtering to only the relevant events (add, update, delete).

Data Change Event Node
    ↓
Conditional: Check event type and specific field changes
    ↓ (true path only)
Continue processing
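As a sketch, the conditional's JSONata expression might filter on the event type; eventType and record.productId are hypothetical names, since the actual payload shape depends on the data model:

// Only continue for add events on records that carry a product reference
eventType = "add" and $exists(record.productId)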

Topic Pattern

Topics, publishing, and subscribing are the preferred methods of handling pagination for large data sets. This pattern prevents memory exhaustion and supports efficient batch processing.
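The topic wiring itself is node configuration, but the chunking step can be sketched as a JSONata transform; the page size, payload.records path, and page object shape are illustrative assumptions:

// Split a large result set into pages of 100 for publishing to a topic
(
  $pageSize := 100;
  $records := payload.records;
  $pages := $ceil($count($records) / $pageSize);
  [0 .. $pages - 1] ~> $map(function($i) {{
    "page": $i,
    "items": $records[[$i * $pageSize .. ($i + 1) * $pageSize - 1]]
  }})
)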

Integration (USE INTEGRATION TENANTS ONLY)

Node Naming

The name of the integration node must indicate what it is calling:

  • Example: "Calling Plex DS - POs_Get (key:12345)"
  • Include system name, data source/endpoint, and key identifiers

Node Description

The description of the integration node must include the parameters and a description of the intent:

  • Example: "Pulling POs with status of Open"
  • Document filter criteria, expected response format, error handling approach

Integration Best Practices

  • Use of $integrate() in transforms is highly discouraged; it will most likely be deprecated
  • The intent is for all outbound calls to be pink in color for easier reference
  • It is highly recommended that a native node is used over the HTTP node
  • Example: Avoid using HTTP to call Salesforce unless the Salesforce node does not meet the need
  • Native connector nodes provide better error handling, retry logic, and debugging visibility

IIoT (USE INTEGRATION TENANTS ONLY)

Best Practice: Communication with the Device Gateway can be time-consuming. Verify that flow-triggered tag reads request multiple tags at once rather than issuing an individual request per tag. Batch tag reads significantly improve performance and reduce gateway load.
  • Group related tag reads into single requests
  • Minimize polling frequency for non-critical data
  • Use device gateway subscriptions for real-time data when available
  • Document tag addresses and data types in node descriptions
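As an illustration only (the actual gateway node configuration may differ), a batched read request might carry a list of tag addresses in a single payload; the addresses below are made up:

// One request for three related tags instead of three separate requests
{
  "tags": [
    "Line1.Press01.RunStatus",
    "Line1.Press01.PartCount",
    "Line1.Press01.CycleTime"
  ]
}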

6. Resources

  • Fuuz Industrial Operations Platform
  • Data Flow Designer Documentation
  • Data Flow Node Reference
  • JSONata Expression Language Guide
  • Integration Connector Documentation
  • Device Gateway Configuration
  • Data Flow Deployment Process

7. Troubleshooting

  • Issue: Deployment blocked due to naming violations
    Cause: Nodes using default names or non-unique names
    Fix: Review all nodes in the flow; rename any nodes with default names to meaningful descriptive names; verify all node names are unique within the flow
  • Issue: Deployment blocked due to missing descriptions
    Cause: Nodes without descriptions; flow without a description; version without a change description
    Fix: Add a meaningful description to every node explaining what it does; add a flow description with the purpose and connections used; add a version description detailing the changes made
  • Issue: Flow consuming excessive memory or crashing
    Cause: Context bloat with unnecessary data; broadcasts multiplying context across concurrent executions
    Fix: Add context pruning nodes before forks/broadcasts; remove unnecessary data from context; use topic publishing instead of broadcasts; minimize data stored in context to only what is needed
  • Issue: Error logs not helpful for troubleshooting
    Cause: Generic or default node names
    Fix: Rename all nodes with specific, meaningful names that indicate their purpose; include key identifiers in node names (system name, endpoint, action)
  • Issue: Data change flow executing too frequently
    Cause: No conditional filter immediately after the data change event
    Fix: Add a conditional node immediately after the data change event node; filter for specific event types (add/update/delete); filter for specific field changes; document the filtering logic in the conditional description
  • Issue: Request flow not completing properly
    Cause: Missing Response node
    Fix: Ensure every flow with a Request node has at least one Response node; verify all execution paths lead to a response; add error handling responses for failure scenarios
  • Issue: Context variables getting overwritten or lost
    Cause: Using "$" instead of labeled wrapper variables
    Fix: Always wrap context data with meaningful variable names like {plexResponse:$}; never set context to just "$"; use descriptive variable names that indicate data source and purpose
  • Issue: Integration calls failing with unclear errors
    Cause: Using $integrate() in transforms or generic HTTP nodes
    Fix: Replace $integrate() calls with dedicated integration nodes; use native connector nodes instead of HTTP when available; document integration parameters and expected responses in node descriptions
  • Issue: Device gateway timeouts or slow performance
    Cause: Reading tags individually instead of batching
    Fix: Consolidate multiple tag reads into single batch requests; group related tags together; reduce polling frequency for non-critical tags; use subscriptions for real-time data
  • Issue: Unable to find a flow or understand its purpose
    Cause: Poor flow naming or missing walkthrough
    Fix: Rename the flow following the standard format: {Fuuz Object} {direction/verb} {External Platform} {External Object} {Event Type}; create a comprehensive walkthrough including all nodes in logical order; save the flow after configuring the walkthrough
  • Issue: Flow walkthrough not persisting
    Cause: Flow not saved after configuring the walkthrough
    Fix: Always save the flow after configuring or modifying the walkthrough; the walkthrough configuration is stored directly in the flow JSON
  • Issue: Double-negative logic making conditionals confusing
    Cause: Using If/Else when the else path should carry the primary logic
    Fix: If If/Else creates a double negative (e.g., $not(false)), flip the logic and use the else path as the primary path instead; simplify conditional expressions to positive logic whenever possible

8. Revision History


Version | Date | Editor | Description
1.0 | 2024-12-31 | Craig Scott | Initial Release - Data Flow Design Standards
Related Articles

  • Data Flow Nodes Reference
  • Gateway System Requirements
  • Flow Control Nodes
  • Fuuz Platform Nodes
  • Debugging & Context Nodes