Debugging & Context Nodes

Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: Basic understanding of Data Flow concepts

1. Overview

What are Debugging & Context Nodes?

Debugging nodes help you test, troubleshoot, and monitor your data flows during development and production. Context nodes manage shared state that persists throughout a flow execution, allowing you to reference and accumulate data as the flow progresses.

Why are they important?

Enterprise integrations require robust testing capabilities to simulate various input scenarios without connecting to live systems. Context management is essential for flows that process hierarchical data (like orders with line items) or need to accumulate information across multiple operations. Proper logging ensures production issues can be diagnosed quickly.

Node Summary

Category | Node | Purpose
Debugging | Source | Paste test payload/context for flow testing
Debugging | File Source | Attach local file to simulate file-based input
Debugging | Data Change Source | Simulate data change events for testing
Debugging | Preview File | Download payload as file during design (PDFs, CSVs)
Debugging | Log | Write messages to Data Flow Logs for monitoring
Debugging | Echo | Pass-through for visual flow organization
Debugging | Transition (DEPRECATED) | Pipe last execution payload (use Debug Payloads instead)
Context | Set Context | Replace entire context with new values
Context | Merge Context | Deep merge new values into existing context
Context | Remove From Context | Remove specific properties from context

2. Source Node

The Source node allows you to paste a payload from anywhere directly into the node's editor panel and then consume it in your flow for testing purposes. In production, the contents of this node's payload would be replaced by some other listener, trigger, or API call.

Configuration Parameters

Parameter | Description
Payload | The payload to pass to the next node
Context | The context to pass to the next node (simulates context mid-flow)

Important: Context set in the Source node will not persist in the deployed version of the Data Flow. If you need to store items in context during execution, you must use the Set Context or Merge Context nodes.
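
For example, a minimal test setup might paste a payload and a simulated mid-flow context like the following (all field names here are hypothetical, not platform requirements):

Payload:

{
  "poNumber": "PO-1001",
  "lines": [
    { "lineNo": 1, "partNo": "P123", "qty": 10 }
  ]
}

Context:

{
  "supplierId": "SUP-42"
}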

3. File Source Node

The File Source node provides the option to attach a file from your local system to simulate receiving data from a third party. This is useful when testing a flow that would normally be called through a Request/Response or would pull a file from an FTP server you don't currently have access to.

Another useful feature is the ability to point it to a file directory, allowing you to simulate an integration that will later use some version of an FTP file folder setup.

Configuration Parameters

Parameter | Description
File | Drop a file or select from local file explorer
File Directory | Simulate the directory where this file would be stored
Output Encoding | UTF8 or Base64

Supported File Encodings

Encoding | Example File Extensions
UTF8 | XML, JSON, CSV, XLS (text documents)
Base64 | jpg, png, PDF (binary formats)

4. Data Change Source Node

Fuuz uses an event-driven architecture that allows you to listen to event streams of many different types. The Data Change Source node lets you simulate data change events for testing, since the data change event stream does not start until you deploy the flow.

Data change events are emitted when a record in a data model is created, updated, or deleted. This allows you to perform real-time integration or business logic based on underlying data changes. The data models can include any system tables or custom tables you define.

Configuration Parameters

Parameter | Description
API | Application or System
Type | The Data Model Type (Data Flow, Units, etc.)
Operation | Create, Update, or Delete
Before Data | The record prior to the change
After Data | The record after the change

Design Standard: Data Changes can trigger many flow executions. They should be followed immediately by a conditional statement to reduce unnecessary executions.
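
As a sketch, an Update event for a hypothetical work order record might be simulated like this (the record shape and field names are assumptions, not platform requirements):

Before Data:

{ "id": "WO-100", "status": "Open" }

After Data:

{ "id": "WO-100", "status": "Closed" }

A conditional placed immediately after the node could then pass only meaningful changes through, for example (JSONata, assuming the event records are exposed on the payload as beforeData and afterData):

$state.payload.beforeData.status != $state.payload.afterData.status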

5. Preview File Node

The Preview File node allows you to download the payload input as a file. This is useful when creating CSV reports or Document Designer PDFs when you need to see the results during development. This node has no function when the data flow is deployed - it only works in the designer.

Configuration Parameters

Parameter | Description
File Content | Transform that points to where in your payload/context the data is located
File Name | String or transform to dynamically name the file
File Encoding | UTF8 (text) or Base64 (images, PDFs)

File Content Transform Examples

JSON file: Use $jsonStringify(content) to convert the object to a string first.

PDF document: If the PDF was not returned Base64 encoded, call $base64encode(content) on the content string.
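
As a minimal sketch, a File Content transform for a JSON report held in context might look like this (the report's location at $state.context.report is hypothetical):

$jsonStringify($state.context.report)

and for a PDF returned as a raw, non-Base64 string (where content stands for wherever the document string lives in your payload):

$base64encode(content)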

6. Log Node

The Log node allows you to log data within a flow when it is triggered. Logs are not created when the flow is run in the designer - they are only generated in deployed flows. The resulting logs are viewable from the Data Flow Logs grid.

Configuration Parameters

Parameter | Description
Message Transform | Dynamic message (e.g., "PO " & $state.context.poNumber & " failed import")
Level | Log importance level (Trace, Debug, Info, Warn, Error, Fatal)
Include Context | Include execution context in log (default: false)
Include Payload | Include execution payload in log (default: false)

Warning: If the context or payload includes any properties whose names start with $, they will not be logged. It is strongly recommended to change the default message transform and never directly log the input data in the message.
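
A safer pattern is to extract only the specific values you need rather than logging the raw input, for example (JSONata; the poNumber and errors fields are hypothetical):

"Import failed for PO " & $state.context.poNumber & " with " & $count($state.payload.errors) & " errors"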

Log Levels

Level | Sort | Description
Trace | 0 | Exact details of program state
Debug | 1 | Diagnostic information about specific behavior
Info | 2 | Normal, significant behavior (default capture level)
Warn | 3 | Unusual behavior outside of errors
Error | 4 | Important procedural failure
Fatal | 5 | Shutdown due to severe problems

Log Level Configuration: Data Flows have a logLevelId setting (default: "Info") that defines which logs are captured. Example: setting "Info" captures only Info, Warn, Error, and Fatal logs, ignoring Debug and Trace.
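
As a loose sketch, and assuming the setting is edited as a simple JSON value on the Data Flow (the exact location in your flow's settings may vary), raising it above the default suppresses the lower levels:

{ "logLevelId": "Warn" }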

7. Echo Node

The Echo node is generally used for visual organization; its only function is to pass the input payload/context through to the output unchanged. When deployed, this node continues to pass input to output.

This can be useful when using a Fork route that passes directly through, where you want something to visually appear in the middle of the flow for clarity.

8. Transition Node (DEPRECATED)

The Transition node often accompanied the Source node at the start of a flow. By incorporating this node, you could pipe in the payload of the last flow execution as an output, which was useful for troubleshooting inputs and ETLs.

Deprecated: This node has been deprecated in favor of Debug Payloads, which support the same feature.

9. Set Context Node

The Set Context node is generally used at the start of a flow but can optionally be used to "Reset" the context at any point throughout the flow. The result of the transform completely replaces your existing context.

Use Case Example: You have Orders and Lines. You might start the flow by storing orders in your context, then later query lines. Once you have the lines, you would merge them into their corresponding orders and reset the context with the combined value (see the sketch below). That way you aren't carrying order data separately from the combined orders-and-lines structure, which improves processing speed.

Configuration Parameters

Parameter | Description
Transform | The result of the transform completely replaces your existing context
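
A minimal sketch of the use case above (JSONata; assumes orders were stored in context and the queried lines arrive in the payload, with hypothetical id and orderId fields):

{
  "orders": $map($state.context.orders, function($o) {
    $merge([$o, { "lines": $state.payload.lines[orderId = $o.id] }])
  })
}

The result becomes the entire new context, so the standalone lines and any other leftover values are dropped.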

10. Merge Context Node

The Merge Context node allows you to merge new values on top of existing values in the context, using a deep merge.

Because the merge is deep, you can layer data on top of data. This is useful when you want to add to an array of records. It can be problematic, however, if you have an updated version of the same array: the new records are added on top, and there is no intelligent merge that updates records that already exist. Any time two properties align in name, their values are merged, with the new values overriding the existing ones.

Configuration Parameters

Parameter | Description
Transform | The result is merged into your existing context

Merge Example

Existing Context:

{
  "data": {
    "Parts": [{ "Part_No": "P123", "Revision": "A" }]
  }
}

Merge Transform:

{
  "data": {
    "Parts": [{ "Part_No": "A153", "Revision": "B" }],
    "Customer": { "name": "Acme Industries" }
  }
}

Result (both Parts arrays combined):

{
  "data": {
    "Parts": [
      { "Part_No": "A153", "Revision": "B" },
      { "Part_No": "P123", "Revision": "A" }
    ],
    "Customer": { "name": "Acme Industries" }
  }
}

11. Remove From Context Node

The Remove From Context node is often underutilized. It allows you to remove values from the context at any depth. It is best practice to remove context values that will not be reused; this speeds up execution because less data is transferred from node to node.

Configuration Parameters

Parameter | Description
Transform | Array of paths to remove (only the last property in the dot chain is removed)

Example

Transform:

["data.Parts"]

This removes the "Parts" property from "data" in the context.
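
Multiple paths can be pruned in a single node; each entry is a dot-delimited path, and only the final property in the chain is deleted. A sketch (the second path is hypothetical):

["data.Parts", "data.rawSupplierResponse"]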

12. Best Practices

Debug Node Standards

  • Descriptions are important - use them to highlight specific test cases
  • Test cases are required as proof of testing prior to deployment
  • Never directly log the input data in log message transforms
  • Use appropriate log levels (Debug for routine items, Warn/Error for problems)

Context Management Standards

  • Label everything in context with a wrapping variable - never set or merge context with a transform that is just "$"; use {"plexResponse": $} instead
  • If context has unnecessary data, prune it prior to storage (see the sketch after this list)
  • If context is large and no longer needed, remove it

Critical Warning: Context is shared with every node downstream. Forks and broadcasts multiply memory usage by the number of concurrent transactions, so it is very important to keep a clean context with only what you need in these situations.
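
A minimal sketch of labeling and pruning in a single Merge Context transform (JSONata; assumes $ is a raw response payload, and the id and status fields are hypothetical):

{
  "plexResponse": {
    "id": $.id,
    "status": $.status
  }
}

Only the two fields the rest of the flow needs are stored, under a clearly labeled wrapper, instead of merging the entire response with a bare $.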

13. Troubleshooting

Symptom | Likely Cause | Resolution
Log entries not appearing | Flow not deployed or log level too low | Deploy flow; check the Data Flow logLevelId setting
Context/payload not logged | Data contains properties starting with $ | Rename properties or use the message transform to extract specific values
Context not persisting | Using Source node context (design-time only) | Use Set Context or Merge Context nodes for deployed flows
Merge duplicating records | Deep merge adds arrays together | Use Set Context to replace, or deduplicate with a transform first
Flow running slowly | Context contains too much data | Use Remove From Context to prune unnecessary data
Preview File not downloading | Wrong encoding or running deployed flow | Use the correct encoding; Preview File only works in the designer
14. Related Resources

  • Data Flow Design Standards - Complete standards for flow development
  • Flow Control Nodes Complete Guide - Error handling and flow structure
  • Source & Trigger Nodes Complete Guide - Production triggers and listeners
  • Data Flow Logs - Viewing and filtering production logs
  • Platform Website: fuuz.com

15. Revision History

Version | Date | Author | Description
1.0 | 2025-01-01 | Craig Scott | Initial release - Complete guide covering 7 debugging nodes (Source, File Source, Data Change Source, Preview File, Log, Echo, Transition) and 3 context nodes (Set, Merge, Remove).