Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: Basic understanding of Data Flow concepts
What are Debugging & Context Nodes?
Debugging nodes help you test, troubleshoot, and monitor your data flows during development and production. Context nodes manage shared state that persists throughout a flow execution, allowing you to reference and accumulate data as the flow progresses.
Why are they important?
Enterprise integrations require robust testing capabilities to simulate various input scenarios without connecting to live systems. Context management is essential for flows that process hierarchical data (like orders with line items) or need to accumulate information across multiple operations. Proper logging ensures production issues can be diagnosed quickly.
| Category | Node | Purpose |
|---|---|---|
| Debugging | Source | Paste test payload/context for flow testing |
| Debugging | File Source | Attach local file to simulate file-based input |
| Debugging | Data Change Source | Simulate data change events for testing |
| Debugging | Preview File | Download payload as file during design (PDFs, CSVs) |
| Debugging | Log | Write messages to Data Flow Logs for monitoring |
| Debugging | Echo | Pass-through for visual flow organization |
| Debugging | Transition (DEPRECATED) | Pipe last execution payload (use Debug Payloads instead) |
| Context | Set Context | Replace entire context with new values |
| Context | Merge Context | Deep merge new values into existing context |
| Context | Remove From Context | Remove specific properties from context |
The Source node allows you to paste a payload from anywhere directly into the node's editor panel and then consume it in your flow for testing purposes. In production, the contents of this node's payload would be replaced by some other listener, trigger, or API call.
| Parameter | Description |
|---|---|
| Payload | The payload to pass to the next node |
| Context | The context to pass to the next node (simulates context mid-flow) |
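For example, a Source node for an order-import test might be given values like these (the field names are purely illustrative, not a required shape):

Payload:

```json
{
  "poNumber": "PO-1001",
  "lines": [{ "lineNo": 1, "partNo": "P123", "qty": 10 }]
}
```

Context:

```json
{ "customerId": "ACME-01" }
```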
The File Source node provides the option to attach a file from your local system to simulate receiving data from a third party. This is useful when testing a flow that would normally be called through a Request/Response or would pull a file from an FTP you don't currently have access to.
Another useful feature is the ability to point it to a file directory, allowing you to simulate an integration that will later use some version of an FTP file folder setup.
| Parameter | Description |
|---|---|
| File | Drop a file or select from local file explorer |
| File Directory | Simulate the directory where this file would be stored |
| Output Encoding | UTF8 or Base64 |
| Encoding | Example File Extensions |
|---|---|
| UTF8 | XML, JSON, CSV, XLS (text documents) |
| Base64 | jpg, png, PDF (binary formats) |
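As a sketch, simulating a nightly FTP drop of a CSV might use settings like these (the file name and path are illustrative):

```
File:             orders.csv        (attached from your local machine)
File Directory:   /inbound/orders   (the directory the later FTP setup would use)
Output Encoding:  UTF8              (CSV is a text format)
```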
Fuuz is an event-driven architecture that lets you listen to many different types of event streams. The Data Change Source node allows you to simulate data change events for testing, because the data change event stream does not start until the flow is deployed.
Data change events are emitted when a record in a data model is created, updated, or deleted. This allows you to perform real-time integration or business logic based on underlying data changes. The data models can include any system tables or custom tables you define.
| Parameter | Description |
|---|---|
| API | The API that owns the data model: Application or System |
| Type | The Data Model Type (Data Flow, Units, etc.) |
| Operation | Create, Update, or Delete |
| Before Data | The record prior to the change |
| After Data | The record after the change |
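A simulated Update event for a Units record might use Before/After values like these (the fields shown are illustrative):

Before Data:

```json
{ "id": "unit-42", "name": "Assembly Line 1", "status": "Inactive" }
```

After Data:

```json
{ "id": "unit-42", "name": "Assembly Line 1", "status": "Active" }
```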
The Preview File node allows you to download the input payload as a file. This is useful when creating CSV reports or Document Designer PDFs and you need to see the results during development. This node has no function when the data flow is deployed; it only works in the designer.
| Parameter | Description |
|---|---|
| File Content | Transform that points to where in your payload/context the data is located |
| File Name | String or transform to dynamically name the file |
| File Encoding | UTF8 (text) or Base64 (images, PDFs) |
JSON file: Use $jsonStringify(content) to convert the object to a string first.
PDF document: If the PDF was not returned Base64 encoded, call $base64encode(content) on the content string.
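Putting those tips together, a sketch of a Preview File setup for a JSON report might look like this (transform syntax assumed JSONata-style, as elsewhere in this article):

File Content:

```
/* 'content' stands for wherever your report data lives in the payload/context (assumption) */
$jsonStringify(content)
```

File Name:

```
/* $now() returns an ISO timestamp; the "report_" prefix is illustrative */
"report_" & $now() & ".json"
```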
The Log node allows you to log data within a flow when it is triggered. Logs are not created when the flow is run in the designer; they are only generated in deployed flows. The resulting logs are viewable from the Data Flow Logs grid.
| Parameter | Description |
|---|---|
| Message Transform | Dynamic message (e.g., "PO " & $state.context.poNumber & " failed import") |
| Level | Log importance level (Trace, Debug, Info, Warn, Error, Fatal) |
| Include Context | Include execution context in log (default: false) |
| Include Payload | Include execution payload in log (default: false) |
If the context or payload contains properties whose names begin with $, they will not be logged. It is strongly recommended to change the default message transform and never log the input data directly in the message.
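Following that recommendation, a message transform that extracts only the identifiers needed for diagnosis might look like this (the poNumber and lines fields are illustrative):

```
/* Log just the key identifiers, never the raw input data */
"PO " & $state.context.poNumber & " imported " & $string($count($state.context.lines)) & " lines"
```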
| Level | Sort | Description |
|---|---|---|
| Trace | 0 | Exact details of program state |
| Debug | 1 | Diagnostic information about specific behavior |
| Info | 2 | Normal, significant behavior (default capture level) |
| Warn | 3 | Unusual behavior outside of errors |
| Error | 4 | Important procedural failure |
| Fatal | 5 | Shutdown due to severe problems |
The Echo node is used mainly for visual organization; it has no functionality other than passing the input payload/context through to the output unchanged. When deployed, this node continues to pass input to output.
This can be useful when using a Fork route that passes directly through, where you want something to visually appear in the middle of the flow for clarity.
The Transition node, now deprecated in favor of Debug Payloads, often accompanied the Source node at the start of a flow. It let you pipe in the payload of the last flow execution as an output, which was useful for troubleshooting inputs and ETLs.
The Set Context node is generally used at the start of a flow but can optionally be used to "Reset" the context at any point throughout the flow. The result of the transform completely replaces your existing context.
Use Case Example: You have Orders and Lines. You might start the flow by storing orders in your context and later query the lines. Once you have the lines, you would merge them with their corresponding orders and reset the context to that combined value. This way you are not carrying the original order data alongside the combined orders-and-lines structure, which reduces the data passed between nodes and improves processing speed.
| Parameter | Description |
|---|---|
| Transform | The transform's result completely replaces the existing context |
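For the Orders-and-Lines use case above, a Set Context transform might look roughly like this (a JSONata-style sketch; the orders/lines shapes and the orderId key are assumptions):

```
/* Replace the context with orders that carry their matching lines.
   $state.payload is assumed by analogy with $state.context; shapes are illustrative. */
{
  "orders": $map($state.context.orders, function($o) {
    $merge([$o, { "lines": [$state.payload.lines[orderId = $o.orderId]] }])
  })
}
```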
The Merge Context node allows you to merge new values on top of existing values in the context. The node uses a deep merge to accomplish this.
Because this uses a deep merge, it allows you to merge data on top of data. That is beneficial when you have an array of records you want to add to. However, it can be problematic when you have an updated version of the same array, because the new records are appended on top; the merge does not intelligently update records that already exist. Any time two properties align in name, their values are merged, with new values overriding existing ones.
| Parameter | Description |
|---|---|
| Transform | The result is merged into your existing context |
Existing Context:

```json
{
  "data": {
    "Parts": [{ "Part_No": "P123", "Revision": "A" }]
  }
}
```

Merge Transform:

```json
{
  "data": {
    "Parts": [{ "Part_No": "A153", "Revision": "B" }],
    "Customer": { "name": "Acme Industries" }
  }
}
```

Result (both Parts arrays combined):

```json
{
  "data": {
    "Parts": [
      { "Part_No": "A153", "Revision": "B" },
      { "Part_No": "P123", "Revision": "A" }
    ],
    "Customer": { "name": "Acme Industries" }
  }
}
```

The Remove From Context node is often underutilized. It allows you to remove values from the context at any depth. It's best practice to remove values from the context that will not be re-used to speed up execution, as there is less data being transferred from node to node.
| Parameter | Description |
|---|---|
| Transform | Array of paths to remove (only the last property in the dot chain is removed) |
Transform:

```json
["data.Parts"]
```

This removes the "Parts" property from "data" in the context.
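Paths can reach any depth, and only the final segment of each path is removed. Using the merged context from the earlier example:

```json
["data.Parts", "data.Customer.name"]
```

This removes the Parts array from "data" and the "name" property from "data.Customer", while leaving the rest of "data.Customer" intact.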
{"plexResponse": $} instead of $| Symptom | Likely Cause | Resolution |
|---|---|---|
| Log entries not appearing | Flow not deployed or log level too low | Deploy flow; check Data Flow logLevelId setting |
| Context/payload not logged | Data contains properties starting with $ | Rename properties or use message transform to extract specific values |
| Context not persisting | Using Source node context (design-time only) | Use Set Context or Merge Context nodes for deployed flows |
| Merge duplicating records | Deep merge adds arrays together | Use Set Context to replace, or transform to deduplicate first |
| Flow running slowly | Context contains too much data | Use Remove From Context to prune unnecessary data |
| Preview File not downloading | Wrong encoding or running deployed flow | Use correct encoding; Preview File only works in designer |
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0 | 2025-01-01 | Craig Scott | Initial release - Complete guide covering 7 debugging nodes (Source, File Source, Data Change Source, Preview File, Log, Echo, Transition) and 3 context nodes (Set, Merge, Remove). |