Fuuz Data Flow Nodes - Complete Reference

Article Type: Reference
Audience: Developers, App Admins, Solution Architects
Module: Data Flows / Data Ops
Applies to Versions: All

1. Overview

The Fuuz Industrial Operations Platform provides a comprehensive visual flow designer with 50+ specialized nodes that enable sophisticated business logic, data transformations, integrations, and device interactions without extensive coding. These nodes are the building blocks of Data Flows (backend processing), Web Flows (screen-side UI logic), and Edge Flows (gateway-based device interactions).

Nodes reduce the need for manual pro-code development through visual drag-and-drop configuration. Fuuz engineering uses these same nodes to build its commercial solutions, so customer-built flows have the full flexibility of the platform. The primary languages are JSONata (data transformations) and JavaScript (complex logic), with external integration via Open API, AWS Lambda, Python notebooks, and HTTP connectors.
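
For orientation, here is a minimal sketch, run outside the platform with the open-source jsonata npm package, of the style of expression a JSONata transform field contains. The payload shape and field names are invented for illustration, not a Fuuz contract:

```javascript
// Illustrative only: evaluating a JSONata expression with the open-source
// jsonata npm package. The payload shape and field names are invented.
const jsonata = require("jsonata");

const payload = {
  orders: [
    { id: "A1", qty: 5, status: "open" },
    { id: "A2", qty: 2, status: "closed" },
  ],
};

// A typical transform: keep open orders and reshape each one.
const expr = jsonata('orders[status="open"].{"order": id, "quantity": qty}');

expr.evaluate(payload).then((result) => {
  console.log(result); // -> { order: 'A1', quantity: 5 } (singletons unwrap)
});
```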

Data Ops Paradigm: Fuuz implements a complete iPaaS for ETL and data contextualization on data at rest (databases, files) and data in motion (real-time edge streams, API calls, events).

Key Capabilities

  • Enterprise Integrations: Pre-built connectors for SAP, Oracle, SuccessFactors, ADP, AWS, Google Drive, OpenAI, and more
  • Business Logic & ETL: Transform, filter, group, aggregate, validate, and process data across databases and real-time streams
  • Device Integration (IIoT): Connect to PLCs, weigh scales, label printers, RFID portals, and industrial controllers via Fuuz gateway
  • UI Control & Validation: Complex screen workflows including CFR Part 11 signoffs, batch release, in-process inspections
  • AI Integration (MCP): Enable any flow as an AI tool via Model Context Protocol with enterprise security
  • External Code Integration: Connect to Python notebooks, AWS Lambda, Google Colab, local Docker containers

2. Architecture & Flow Types

Flow Execution Contexts

Fuuz supports three distinct execution contexts:

| Flow Type | Execution Context | Categories | Primary Use Cases |
| --- | --- | --- | --- |
| Backend Flows | Server-side / cloud | System, Integration, Document | Data processing, API integration, scheduled jobs, document generation |
| Edge Flows | Edge Gateway (customer device) | IIoT, Device Integration | Real-time device data, PLC communication, printer control |
| Web Flows | Screen-side (browser) | UI Logic, Validation | Complex HMI workflows, form validation, CFR Part 11 signoffs |

Trigger Mechanisms

Flow execution is triggered by nodes included in the flow. A single flow can contain multiple triggers:

  • Request-Response: Synchronous API invocation (does NOT fire on import)
  • Event-Driven: Data Change Source, Topic subscriptions (DOES fire on import)
  • Scheduled: Cron-based timing via Schedule Node or Data Flow Scheduler
  • HTTP Webhook: External system triggers
  • File-Based: Gateway folder monitoring
  • MCP (AI Tools): Enabled via checkbox in Data Flows list with enterprise security
Critical Distinction: Request-Response triggers do NOT fire during data import. Data Change Source subscriptions DO fire on imports. Use Data Change Source for automated processing regardless of entry method.

External Code Integration

Fuuz flows can integrate with external code via HTTP connectors:

  • AWS Lambda: Deploy Python, Node.js, or other serverless functions
  • Python Notebooks: Connect to Jupyter, Google Colab via HTTP API
  • Local Containers: Edge flows can integrate with Docker containers on same network
  • Custom HTTP Services: Any REST API endpoint
Concurrent Execution: degreeOfParallelism setting controls concurrent executions. returnErrors includes external system errors in flow output.
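
As a concrete illustration of the Lambda option, below is a minimal Node.js handler of the kind a flow's AWS Lambda or HTTP Connector node could invoke. The event fields are assumptions for the sketch, not a Fuuz contract:

```javascript
// Hypothetical AWS Lambda handler (Node.js runtime) that an AWS Lambda or
// HTTP Connector node could invoke. The event fields are invented.
exports.handler = async (event) => {
  const readings = event.readings ?? [];

  // Compute a simple aggregate server-side and return JSON; the flow
  // receives this object as the integration node's output payload.
  const average =
    readings.length === 0
      ? null
      : readings.reduce((sum, r) => sum + r.value, 0) / readings.length;

  return { count: readings.length, average };
};
```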

3. Node Designer Interface

Nodes are located in the Toolbox, grouped by function. Each node shares common structural elements:

| Component | Location | Function |
| --- | --- | --- |
| Input Port | Left side | Receives payload from previous node |
| Output Port | Right side | Sends processed data to next node(s) |
| Node Name | Header | User-editable label (defaults to node type) |
| Node Type | Subtitle | Read-only identifier of functionality |
| Grab Handle | Appears on hover | Drag to reposition node |

Workflow: Drag nodes from Toolbox → Connect ports → Configure properties → Test with Source/Log nodes. Use Ctrl+S to save, Ctrl+C/V to copy/paste. Node Editor panel is resizable for better workspace management.

4. Comprehensive Node Reference

This section documents all node types organized by category. Each entry includes purpose, configuration, flow type compatibility, and use cases.

4.1 Source & Trigger Nodes

These nodes initiate flow execution and provide initial payload for processing.

| Node | Purpose & Configuration | Use Cases |
| --- | --- | --- |
| Source | Manual trigger for testing. Click "Execute" to run the flow with test data. No configuration required. | Development, debugging, proof-of-concept |
| Request | Synchronous request-response trigger. Must pair with a Response node. Does NOT fire on import. | API endpoints, screen button actions, calculated fields |
| Response | Returns processed data to the caller. Terminates the flow. Optional output transform. | Completing API flows, returning screen action results |
| Webhook | Receives HTTP POST from external systems. Generates a unique URL for incoming webhooks. | Third-party integrations, payment callbacks, event notifications |
| Data Change Source | Subscribes to database changes. DOES fire on import. Select collection and operation type (insert, update, delete). | Automated processing, audit trails, downstream notifications |
| File Source | Triggers when files appear in monitored folders. Configure via gateway folder monitoring. | File imports, batch processing, automated uploads |
| Device Subscription | Subscribes to device data streams. Select device and data tags. Edge flows only. | Real-time PLC monitoring, sensor alerts, production tracking |
| Topic | Subscribes to pub/sub messaging topics. Specify topic name. | Event-driven architecture, microservices communication |

4.2 Transform Nodes

Transform nodes convert data between formats and manipulate arrays/objects. All support Input/Output transforms via Advanced Configuration.

| Node | Purpose & Configuration | Key Options |
| --- | --- | --- |
| JSON to CSV | Converts a JSON array to a comma-separated values string. | Field delimiter, field wrap, end of line, prependHeader, keys, expandArrayObjects, unwindArrays |
| CSV to JSON | Parses a CSV string into a JSON array of objects. | Field delimiter, quote character, escape character, headers, skipEmptyLines |
| JSON to XML | Converts JSON to an XML string. | Root element name, attribute handling, CDATA support |
| XML to JSON | Parses an XML string into a JSON object. | Attribute prefix, text node name, preserve ordering |
| Unique Array | Removes duplicate items from an array based on a specified field. | Unique field selector (JSONata), preserve first or last occurrence |
| Filter Array | Filters array items based on a condition expression. | Filter expression (JSONata); returns items where the condition is true |
| Group Array | Groups array items by specified field(s). | Group-by field(s), aggregation functions (sum, count, avg, min, max) |
Transform Terminology: Input Transform reformats payload/context before node execution. Output Transform runs after processing, with access to original input via $$ binding.
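
To make the Filter Array and Group Array rows concrete, here is a small sketch using the open-source jsonata npm package; these nodes evaluate expressions of this style against the payload, and the field names below are invented:

```javascript
// Illustrative only: expressions in the style Filter Array and Group Array
// evaluate, run here with the jsonata npm package. Field names are invented.
const jsonata = require("jsonata");

const data = {
  items: [
    { sku: "X", qty: 3 },
    { sku: "Y", qty: 1 },
    { sku: "X", qty: 4 },
  ],
};

async function demo() {
  // Filter Array style: keep items where the condition is true.
  const filtered = await jsonata("items[qty > 2]").evaluate(data);
  console.log(filtered); // [{ sku: 'X', qty: 3 }, { sku: 'X', qty: 4 }]

  // Group Array style: group by a field and aggregate with $sum.
  const grouped = await jsonata("items{sku: $sum(qty)}").evaluate(data);
  console.log(grouped); // { X: 7, Y: 1 }
}

demo();
```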

4.3 Flow Control Nodes

Control flow execution paths, parallel processing, timing, and sub-flow invocation.

  • Broadcast: Sends same payload to multiple output paths simultaneously
  • Fork: Conditional branching based on expression evaluation
  • Combine: Merges multiple input paths into single output, waits for all inputs
  • Collect: Collects array items into batches with a size limit and optional timeout (see the sketch after this list)
  • Schedule: Time-based trigger using cron expressions (hourly, daily, weekly, custom)
  • Delay: Pauses execution for specified duration (milliseconds, seconds, minutes)
  • Mutex Lock/Unlock: Prevents concurrent execution with named locks
  • Queue: Places message in RabbitMQ queue
  • Execute Flow: Invokes another data flow as sub-flow, returns result
  • Try/Catch: Error handling wrapper with Try path and Catch path for errors
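
The Collect node's size-or-timeout batching, referenced above, can be pictured with this plain-JavaScript sketch; it is a conceptual model of the described behavior, not platform code:

```javascript
// A minimal sketch (not platform code) of the batching semantics described
// for the Collect node: emit a batch when it reaches `size` items, or when
// `timeoutMs` elapses with a partial batch waiting.
function makeCollector({ size, timeoutMs, onBatch }) {
  let batch = [];
  let timer = null;

  function flush() {
    if (timer) clearTimeout(timer);
    timer = null;
    if (batch.length > 0) {
      onBatch(batch);
      batch = [];
    }
  }

  return function collect(item) {
    batch.push(item);
    if (batch.length >= size) {
      flush(); // size reached: emit immediately
    } else if (!timer && timeoutMs) {
      timer = setTimeout(flush, timeoutMs); // otherwise emit on timeout
    }
  };
}

// Usage: batches of 3, or whatever has arrived after 500 ms.
const collect = makeCollector({ size: 3, timeoutMs: 500, onBatch: console.log });
[1, 2, 3, 4].forEach(collect); // logs [1,2,3] at once, then [4] after ~500 ms
```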

4.4 Script Nodes

Execute custom logic using JSONata or JavaScript:

  • JSONata Script: Lightweight data transformation. Access payload via $, context via $$. Supports custom Fuuz functions library.
  • JavaScript: Full execution environment. Access msg.payload and context. Use for complex logic, loops, async operations (see the sketch after this list).
  • Saved Script: References saved/reusable script. Update once affects all flows. Select script and provide variables.
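
Below is a hypothetical JavaScript-node body, assuming the msg.payload contract described above; the exact runtime API may differ from this sketch, and the field names are invented:

```javascript
// Hypothetical body for a JavaScript node, assuming the documented
// msg.payload contract. Field names are invented for illustration.
const orders = msg.payload.orders ?? [];

// Logic that would be awkward in a single JSONata expression:
const enriched = orders.map((order) => ({
  ...order,
  lineTotal: order.qty * order.unitPrice,
  rush: order.dueDays < 2 && order.qty > 0,
}));

msg.payload = { orders: enriched, processedAt: new Date().toISOString() };
return msg;
```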

4.5 Fuuz System Nodes

Core platform operations:

  • Query: Execute GraphQL query against MongoDB. Select collection, define query fields, filtering, sorting, pagination.
  • Mutate: Create, update, or delete documents. Select collection and operation.
  • Aggregate: Run MongoDB aggregation pipeline for complex data analysis.
  • Saved Query: Execute saved/reusable query with variables.
  • Render Document: Generate PDF/document from Stimulsoft template. Returns base64-encoded document.
  • System Email: Send email via platform SMTP. Configure recipients, subject, body, attachments.
  • Get Calendar: Retrieve calendar/schedule data for specified date range.
  • Data Mapping: Execute saved data mapping with input/output schema validation.
No Specialized Knowledge Required: Query and Mutate nodes provide visual interfaces. Developers don't need GraphQL, MongoDB, React, RabbitMQ, Stimulsoft, or Oracle expertise.
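
For orientation only, the style of GraphQL request a Query node generates and sends on your behalf might resemble the sketch below; the endpoint, schema, and field names are invented, not the Fuuz API:

```javascript
// Orientation only: the style of GraphQL request a Query node abstracts away.
// The endpoint, schema, and field names below are invented.
const query = `
  query OpenOrders($status: String) {
    orders(filter: { status: $status }, limit: 50) {
      id
      qty
      status
    }
  }
`;

async function runQuery() {
  const res = await fetch("https://example.invalid/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { status: "open" } }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.orders;
}
```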

4.6 Integration Nodes (iPaaS)

Pre-built connectors for enterprise systems:

  • SAP: BAPI/RFC calls, IDoc processing, material master, production orders
  • SuccessFactors: Employee data, time/attendance, recruiting, performance management
  • ADP / ADP Vista: Payroll data, labor distribution, employee records
  • Amazon/AWS: S3 storage, Lambda functions, SQS/SNS messaging, DynamoDB
  • AWS Lambda: Invoke Lambda functions with payload, async execution (for Python ML models, custom algorithms)
  • Google Drive API: File upload/download, folder management, sharing permissions
  • OpenAI Chat: GPT models, embeddings, completions, function calling
  • UPS / TecCom: Shipping labels, tracking, rate quotes, address validation
  • HTTP Connector: Generic REST API calls (GET, POST, PUT, DELETE) - connect to Python notebooks, Colab, custom services
returnErrors Configuration: Enable returnErrors on integration nodes to include external system error details in output rather than failing silently.
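
Conceptually, the HTTP Connector with returnErrors enabled behaves like the sketch below: the external system's error is captured into the output rather than aborting the flow. The URL and payload shapes are placeholders:

```javascript
// Conceptual model (not platform code) of an HTTP Connector call with
// returnErrors-style behavior: surface the external error in the output
// instead of throwing. Requires Node 18+ (built-in fetch); URL is a placeholder.
async function callExternal(url, payload) {
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (!res.ok) {
      return { ok: false, error: { status: res.status, body: await res.text() } };
    }
    return { ok: true, data: await res.json() };
  } catch (err) {
    return { ok: false, error: { message: String(err) } };
  }
}
```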

4.7 IIoT & Gateway Nodes

Edge-side device integration (Edge Flows only):

  • Print File: Prints files via gateway printer. Configure printer device, file, function timeout (default 60s).
  • Print Data: Prints PDF files via gateway printer. Requires base64-encoded document content (see the encoding sketch after this list).
  • Execute Device Function: Generic node for device functions. Select device (filters available functions), then function (shows parameter inputs). Dynamic inputs based on selected function.
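
Since Print Data expects base64-encoded content, this Node.js snippet shows the encoding step for a file on disk; the path is a placeholder, and in a real flow the Render Document node already returns the document base64-encoded:

```javascript
// Encoding a PDF to base64 in Node.js, the format Print Data expects.
// The file path is a placeholder.
const fs = require("fs");

const pdfBase64 = fs.readFileSync("label.pdf").toString("base64");
console.log(pdfBase64.slice(0, 40) + "..."); // pass this string to the node
```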

4.8 Additional Node Categories

Context Nodes: Set Context, Merge Context, Remove From Context - manage flow-level state

Validation Nodes: Validate node enforces data quality and business rules (critical for CFR Part 11 compliance)

Debugging Nodes: Log (write to platform logs), Echo (display in designer console), Preview File (download generated files)

Notification Nodes: Send Notification (publish to channel), Receive Notification (trigger on receipt)

5. Use Cases & Examples

ETL & Data Processing

  • Import CSV to Database: File Source → CSV to JSON → Validate → Transform → Mutate (batch insert) - sketched in code after this list
  • Daily SAP Sync: Schedule (daily 2 AM) → Query (local orders) → Transform → SAP Connector → Log results → System Email (errors)
  • Real-time Aggregation: Data Change Source (production events) → Aggregate (hourly stats) → Mutate (update dashboard) → Send Notification
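
To make the first chain concrete, here is a plain-JavaScript walkthrough of what each node contributes. In a real flow these steps are nodes, not hand-written code, and the CSV parsing here is deliberately simplified (no quoting or escaping):

```javascript
// Plain-JavaScript sketch of the "Import CSV to Database" node chain.
const csv = "sku,qty\nX,3\nY,-1\n";

// CSV to JSON node: parse rows into objects (simplified parser).
const [header, ...rows] = csv.trim().split("\n").map((line) => line.split(","));
const records = rows.map((cells) =>
  Object.fromEntries(header.map((key, i) => [key, cells[i]]))
);

// Validate node: enforce a business rule (qty must be non-negative).
const valid = records.filter((r) => Number(r.qty) >= 0);

// Transform + Mutate nodes: reshape, then batch-insert (stubbed here).
const docs = valid.map((r) => ({ sku: r.sku, qty: Number(r.qty) }));
console.log("would insert:", docs); // -> [{ sku: 'X', qty: 3 }]
```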

Business Logic Workflows

  • CFR Part 11 E-Signature: Request → Validate (credentials) → Fork (approval levels) → Mutate (sign record) → Render Document (PDF) → Response
  • Batch Release Workflow: Request → Query (batch tests) → JavaScript (calculations) → Fork (pass/fail) → Mutate (update status) → Execute Flow (notify QA) → Response
  • Multi-System Order Processing: Webhook → Validate → Broadcast → [Mutate DB | SAP order | UPS shipping] → Combine → Response

Device Integration (IIoT/Edge)

  • PLC to Cloud Sync: Device Subscription (PLC tags) → Transform (normalize) → Mutate (cloud database) → Fork (thresholds) → Send Notification
  • Automated Label Printing: Topic (production event) → Query (product data) → Render Document (label) → Print Data (Zebra printer)
  • Python Notebook Integration: Device Subscription → Collect (batch) → HTTP Connector (local Jupyter API) → Transform (results) → Mutate

AI-Powered Workflows (MCP)

  • AI Production Assistant: Enable flow as MCP tool. Claude invokes: Request → Query (production data) → Transform → OpenAI Chat (analysis) → Response
  • Intelligent Routing: Request → OpenAI Chat (classify intent) → Fork (routing logic) → Execute Flow (department-specific) → Response

6. Troubleshooting

| Issue | Cause | Resolution |
| --- | --- | --- |
| Flow not triggering on data import | Using a Request/Response trigger | Use a Data Change Source node, which DOES fire on import |
| Integration node failing silently | returnErrors not enabled | Enable the returnErrors config; add a Log node after the integration |
| Transform returns unexpected results | JSONata syntax error or incorrect data access | Add an Echo node to inspect the payload; use $ for payload, $$ for context |
| Data Mapping validation errors | Input or output doesn't match schema | Wrap the Data Mapping node in Try/Catch; check both schemas |
| Concurrent execution issues | Multiple instances running simultaneously | Adjust degreeOfParallelism or use Mutex Lock/Unlock |
| Gateway device timeout | Default 60 s timeout exceeded | Increase the Function Timeout parameter on IIoT nodes |
Debug Configuration: Enable breakpointEnabled, executionMetrics, executionStarted/Succeeded/Failed logging, and enableTraceLogging on nodes during development.

7. Best Practices

Flow Design & Organization

  • Name nodes descriptively reflecting business function (e.g., "Validate Order Data" vs "Validate Node 1")
  • Organize flows left-to-right following data flow, group related nodes vertically
  • Use Execute Flow nodes to create reusable sub-flows for common operations
  • Wrap critical operations in Try/Catch, add Log nodes after integrations, Send Notification on failures
  • Document logic with node names and Log messages

Performance & Scalability

  • Use Collect node to batch array items before processing (reduces database calls)
  • Configure degreeOfParallelism appropriately for high-volume flows
  • Use Remove From Context to free memory in long-running flows
  • Filter data early in flow, use Query node projections to limit returned fields
  • Perform transforms as late as possible to minimize data movement

Security & Compliance

  • Ensure proper Access Types (Developer) and Roles for flow visibility. Remember: Access Types ≠ Roles
  • Access Types and Roles do NOT auto-update across tenants or environments (Build, QA, Production)
  • Store integration credentials securely, never hardcode in flows
  • Use Validate nodes for all user input
  • Use Data Change Source with logging for compliance audit trails
Golden Rule: Always use Data Change Source (not Request/Response) for automated processing that must trigger regardless of data entry method (manual input, import, API).

8. Technical Reference

Platform Architecture

| Component | Details |
| --- | --- |
| Database | MongoDB with a proprietary ORM layer, accessed via a GraphQL API |
| Gateway | Drivers for PLCs, databases, printers, folder monitoring. Edge flow execution environment. |
| Messaging | RabbitMQ for pub/sub topics, Queue nodes, event-driven architecture |
| Document Engine | Stimulsoft for PDF/report generation, custom templates |
| UI Framework | React-based with FusionCharts and Font Awesome. Web flows execute client-side. |

Languages & Libraries

  • JSONata: Primary transformation language with custom Fuuz bindings. Use in all transform fields, screens, web flows.
  • JavaScript: Complex logic, loops, async operations in JavaScript nodes and web flows.
  • GraphQL: Abstracted by Query/Mutate nodes. Developers don't need specialized knowledge.
  • External Languages: Python and others via Open API, AWS Lambda, HTTP connectors, notebooks (Jupyter, Colab), Docker containers

Data Flow Object Structure

A Data Flow is the object containing nodes. Data Ops is the paradigm describing ETL and data contextualization on data at rest (databases, files) and data in motion (real-time edge streams, API calls, events).

Common Node Configurations

  • breakpointEnabled: Pause execution at node for debugging
  • executionMetrics: Track execution time and performance
  • executionStarted/Succeeded/Failed: Log execution lifecycle events
  • enableTraceLogging: Detailed execution trace for troubleshooting
  • degreeOfParallelism: Number of concurrent flow executions supported
  • returnErrors: Include external system errors in output vs failing silently (integration nodes)
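
Gathered into one hypothetical object, these settings might look as follows; the actual grouping and naming of properties in the node editor may differ from this sketch:

```javascript
// Hypothetical shape only: the common node settings listed above, gathered
// into one object. The actual property grouping in Fuuz may differ.
const nodeConfig = {
  breakpointEnabled: false,   // pause here while debugging
  executionMetrics: true,     // record timing/performance
  executionStarted: true,     // lifecycle logging
  executionSucceeded: true,
  executionFailed: true,
  enableTraceLogging: false,  // verbose trace, development only
  degreeOfParallelism: 4,     // max concurrent executions
  returnErrors: true,         // integration nodes: surface external errors
};
```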
No Specialized Knowledge Required: App designers don't need specialized knowledge of GraphQL, MongoDB, React, RabbitMQ, Stimulsoft, FusionCharts, Font Awesome, Oracle, or other underlying libraries to build applications.

Related Tools

  • Fuuz Custom JSONata Library - custom functions and bindings specific to the Fuuz platform
  • Data Mapping Designer - visual field-to-field transformation with schema validation
  • Data Flow Scheduler - separate admin tool for complex scheduling requirements
  • Screen Designer - visual HMI/UI designer with JSONata for reactive components
  • Document Designer - Stimulsoft template designer for PDF/report generation
  • Platform Website: fuuz.com
Related Articles

  • Data Flow Design Standards
  • Flow Control Nodes
  • Transform Nodes
  • Debugging & Context Nodes
  • Script & Validation Nodes