Transform Nodes

Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: None - This guide assumes no prior knowledge

1. Overview

What are Transform Nodes?

Transform nodes convert data between formats and manipulate collections of records. They are the workhorses of data processing in Fuuz, enabling flows to accept data in one format (CSV from a legacy system), convert it to another format (JSON for processing), filter and group records, and output to yet another format (XML for an ERP system). All of this happens without writing custom code.

Why are they important?

Enterprise systems rarely speak the same language. Your ERP exports XML, your warehouse system expects CSV, your mobile app needs JSON, and your analytics platform wants aggregated summaries. Transform nodes bridge these gaps, enabling seamless data exchange without manual file manipulation or custom integration code.

What you'll learn in this guide:

  • How each of the 7 transform node types works
  • Complete parameter configurations with defaults and edge cases
  • Real-world manufacturing, pharmaceutical, and logistics scenarios
  • Input/output specifications for each transformation
  • Common errors and resolution patterns

The 7 Transform Node Types

| Node Type | Primary Purpose | Best For |
|---|---|---|
| JSON to CSV | Convert JSON array to comma-separated values string | Exports, legacy system integration, spreadsheet generation |
| CSV to JSON | Parse CSV string into JSON array of objects | File imports, batch data loading, external data ingestion |
| JSON to XML | Convert JSON object/array to XML document string | ERP integration, SOAP services, EDI transactions |
| XML to JSON | Parse XML document into JSON object | SAP integration, legacy SOAP APIs, XML file processing |
| Unique Array | Remove duplicate items from array based on field | Data deduplication, master data cleanup, merge operations |
| Filter Array | Select items matching a condition expression | Business rule application, data selection, conditional processing |
| Group Array | Group items by field(s) with optional aggregation | Reporting, summaries, consolidation, analytics preparation |

Understanding Input & Output Transforms

All transform nodes support two additional transformation layers available in Advanced Configuration:

  • Input Transform: Reformats the incoming payload/context BEFORE the main node operation executes. Use this to extract specific data from a larger payload or restructure data for processing.
  • Output Transform: Reformats the result AFTER the main node operation completes. The original input is accessible via the $$ binding, allowing you to merge original and transformed data.

Key Concept - $ vs $$ Bindings: In Fuuz transforms, $ refers to the current payload (the input to the node), while $$ refers to the original context or pre-transform data. This becomes important when building complex transformations that need access to both the original and the processed data.
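
For example, a minimal Output Transform sketch (the field names are illustrative, not from a real flow) that returns the node's result alongside metadata drawn from the original input:

{
  "csv": $,                         /* the node's result, e.g. the generated CSV string */
  "sourceRecordCount": $count($$),  /* number of items in the original, pre-transform input */
  "generatedAt": $now()
}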

2. JSON to CSV Node

Purpose & Use Cases

The JSON to CSV node converts a JSON array of objects into a comma-separated values (CSV) string. Each object in the array becomes a row, and object properties become columns. This enables data export to spreadsheets, legacy systems, and any application expecting tabular text data.

When to use:

  • Exporting data for Excel, Google Sheets, or other spreadsheet applications
  • Integrating with legacy systems that only accept CSV input
  • Generating reports for email distribution
  • Creating data files for FTP transfer to external partners
  • Bulk export for data warehousing or analytics platforms

Configuration Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "JSON to CSV" | Display name in flow designer |
| Field Delimiter | String | No | , (comma) | Character separating fields. Use tab (\t) for TSV, semicolon (;) for European locales, pipe (\|) for special cases. |
| Field Wrap | String | No | " (double quote) | Character used to wrap fields containing special characters. Applied automatically when a field contains the delimiter, a newline, or the wrap character. |
| End of Line | Enum | No | \n (Unix LF) | Line ending format: \n (Unix/Linux/Mac), \r\n (Windows), \r (legacy Mac) |
| Prepend Header | Boolean | No | true | Include column headers as the first row. Set to false when appending to existing files. |
| Keys | Array | No | All keys from first object | Explicit list of fields to include and their order. Unspecified fields are excluded. |
| Expand Array Objects | Boolean | No | false | Flatten nested objects into separate columns (e.g., address.city becomes address_city) |
| Unwind Arrays | Boolean | No | false | Create multiple rows for nested arrays. Each array item becomes a separate row with parent data repeated. |
| Input Transform | JSONata | No | null | Transform payload before CSV conversion (Advanced Configuration) |
| Output Transform | JSONata | No | null | Transform result after CSV conversion (Advanced Configuration) |

Edge Case - European Locales: Many European countries use a semicolon (;) as the field delimiter because the comma serves as the decimal separator. When exporting for German, French, or Italian systems, set Field Delimiter to semicolon.

Input/Output Specifications

Input: JSON array of objects (each object becomes a row)

Example Input:

[
  {
    "orderNumber": "ORD-001",
    "customer": "ACME Manufacturing",
    "amount": 1500.00,
    "status": "Shipped"
  },
  {
    "orderNumber": "ORD-002",
    "customer": "Beta Industries",
    "amount": 2750.50,
    "status": "Pending"
  },
  {
    "orderNumber": "ORD-003",
    "customer": "Gamma Corp",
    "amount": 890.25,
    "status": "Delivered"
  }
]

Output: CSV string

orderNumber,customer,amount,status
ORD-001,ACME Manufacturing,1500,Shipped
ORD-002,Beta Industries,2750.5,Pending
ORD-003,Gamma Corp,890.25,Delivered

Output with Unwind Arrays (nested line items):

Input with nested array:

[
  {
    "orderNumber": "ORD-001",
    "customer": "ACME Manufacturing",
    "items": [
      {"partNumber": "P-100", "quantity": 50},
      {"partNumber": "P-200", "quantity": 25}
    ]
  }
]

Output with Unwind Arrays = true:

orderNumber,customer,items.partNumber,items.quantity
ORD-001,ACME Manufacturing,P-100,50
ORD-001,ACME Manufacturing,P-200,25

Real-World Use Cases

Use Case 1: Daily Shipment Report for Logistics Partner (Basic)

A distribution company sends daily shipment manifests to their 3PL partner via SFTP. The partner's system requires CSV with specific column order and Windows line endings.

Configuration:

  • Keys: ["shipmentId", "destination", "weight", "pieces", "carrier"]
  • End of Line: \r\n (Windows)
  • Prepend Header: true

Use Case 2: SAP Material Master Export for German ERP (Intermediate)

A global manufacturer exports material master data to a German subsidiary running SAP. The German system expects semicolon delimiters and specific field naming.

Configuration:

  • Field Delimiter: ; (semicolon)
  • Keys: ["MATNR", "MAKTX", "MEINS", "MTART", "MATKL"]
  • Input Transform: Maps Fuuz field names to SAP field codes (see the sketch below)
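
A hedged sketch of that Input Transform, where the Fuuz-side field names (materialNumber, baseUnit, and so on) are assumptions for illustration:

$.{
  "MATNR": materialNumber,
  "MAKTX": description,
  "MEINS": baseUnit,
  "MTART": materialType,
  "MATKL": materialGroup
}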

Use Case 3: Production Order Export with Line Items (Advanced)

An automotive parts manufacturer exports production orders with multiple component lines to their legacy MRP system. Each order has multiple BOM components that must become separate rows while retaining parent order information.

Configuration:

  • Unwind Arrays: true
  • Expand Array Objects: true
  • Keys: ["workOrderId", "productCode", "components.partNumber", "components.requiredQty", "components.warehouse"]

Result: A work order with 15 BOM components becomes 15 CSV rows, each containing the parent work order info plus one component per row.

Error Handling Patterns

| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Empty output / no data | Input is not an array, or is an empty array | Add a Fork node before to check array length. Use an Input Transform to ensure array format (see the sketch below). |
| Missing columns in output | First object missing fields present in later objects | Explicitly define the Keys parameter to ensure all columns appear regardless of the first object. |
| Special characters breaking import | Field contains delimiter, newline, or quote character | Field Wrap handles this automatically. Verify the target system supports quoted fields. |
| Nested objects appearing as [object Object] | Complex nested structure not flattened | Enable Expand Array Objects OR use an Input Transform to flatten before conversion. |
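
For the first row above, a defensive Input Transform can coerce a single object into a one-element array. A minimal sketch, using the standard JSONata $type function:

$type($) = "array" ? $ : [$]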

Best Practice: Always explicitly define the Keys parameter for production integrations. This ensures consistent column order and prevents issues when source data has varying field presence.

3. CSV to JSON Node

Purpose & Use Cases

The CSV to JSON node parses comma-separated values text into a JSON array of objects. Each row becomes an object, with column headers (or specified field names) as property keys. This is essential for importing data from spreadsheets, legacy exports, and any tabular text source.

When to use:

  • Importing data from Excel exports or Google Sheets downloads
  • Processing files received via FTP from external partners
  • Loading batch data from legacy systems
  • Parsing user-uploaded CSV files for bulk record creation
  • ETL pipelines ingesting tabular data from various sources

Configuration Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "CSV to JSON" | Display name in flow designer |
| Field Delimiter | String | No | , (comma) | Character separating fields. Must match the source file format. |
| Quote Character | String | No | " (double quote) | Character used to wrap fields containing special characters. |
| Escape Character | String | No | " (double quote) | Character used to escape quote characters within quoted fields. |
| Headers | Array or Boolean | No | true (use first row) | true = first row is headers; false = no headers (returns arrays); Array = explicit header names |
| Skip Empty Lines | Boolean | No | true | Ignore blank lines in source data. |
| Skip Lines | Integer | No | 0 | Number of lines to skip at the beginning of the file (for files with metadata rows before the headers) |
| Dynamic Typing | Boolean | No | true | Automatically convert numbers and booleans from their string representation. |
| Trim Values | Boolean | No | true | Remove leading/trailing whitespace from values. |
| Input Transform | JSONata | No | null | Transform payload before parsing (extract the CSV string from a larger payload) |
| Output Transform | JSONata | No | null | Transform result after parsing (restructure, rename fields, add metadata) |

Edge Case - Files with Metadata Headers: Many ERP exports include a report title, date, and other metadata in the first 2-3 rows before the column headers. Use Skip Lines to bypass these rows and reach the actual column headers.
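
As an example of the Input Transform parameter above: if the file arrives wrapped in an envelope rather than as raw text, a minimal sketch (the file.content path is an assumption about the payload shape) extracts the CSV string and fails loudly when it is missing:

$exists(file.content) ? file.content : $error("No CSV content in payload")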

Input/Output Specifications

Input: CSV string (typically from File Source node or payload field)

Example Input:

partNumber,description,unitPrice,warehouse,quantityOnHand
P-100,"Bearing, Ball 6205",15.50,EAST,250
P-101,"Seal, Oil Type A",8.25,EAST,500
P-102,"Gasket Set, Complete",45.00,WEST,75

Output: JSON array of objects

[
  {
    "partNumber": "P-100",
    "description": "Bearing, Ball 6205",
    "unitPrice": 15.50,
    "warehouse": "EAST",
    "quantityOnHand": 250
  },
  {
    "partNumber": "P-101",
    "description": "Seal, Oil Type A",
    "unitPrice": 8.25,
    "warehouse": "EAST",
    "quantityOnHand": 500
  },
  {
    "partNumber": "P-102",
    "description": "Gasket Set, Complete",
    "unitPrice": 45.00,
    "warehouse": "WEST",
    "quantityOnHand": 75
  }
]

Real-World Use Cases

Use Case 1: Daily Inventory Import from Legacy WMS (Basic)

A manufacturing plant receives nightly inventory snapshots from their legacy warehouse management system via CSV file drop. The flow parses the CSV and updates Fuuz inventory records.

Configuration:

  • Headers: true (file includes header row)
  • Dynamic Typing: true (convert quantities to numbers)
  • Trim Values: true (remove extra whitespace from legacy system)

Use Case 2: Vendor Price Update with Skip Lines (Intermediate)

A distributor receives weekly price updates from suppliers in files that include 3 metadata rows (supplier name, date, terms) before the column headers. The flow must skip these rows to properly parse pricing data.

Input File:

SUPPLIER: ACME Parts Inc.
DATE: 2024-01-15
TERMS: Net 30

SKU,Description,ListPrice,YourPrice
A100,Widget Standard,25.00,18.75
A101,Widget Deluxe,35.00,26.25

Configuration:

  • Skip Lines: 4 (bypasses metadata + empty line to reach headers)
  • Headers: true

Use Case 3: Multi-Format Import with Header Mapping (Advanced)

An automotive manufacturer receives parts data from multiple suppliers, each using different column names for the same data. One uses "PartNo", another uses "Item", and a third uses "SKU". All must map to the standardized "partNumber" field.

Configuration:

  • Headers: ["partNumber", "description", "price", "moq"] (explicit mapping overrides file headers)
  • Output Transform: Restructures the records and adds source metadata (see the sketch below)
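
A hedged sketch of such an Output Transform; the $number coercions and the source tag are illustrative choices, not behavior required by the node:

$.{
  "partNumber": partNumber,
  "description": description,
  "price": $number(price),
  "moq": $number(moq),
  "source": "supplier-feed"   /* hypothetical metadata tag identifying the origin */
}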

Result: Regardless of source file column names, output always uses standardized field names for consistent downstream processing.

Error Handling Patterns

| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Wrong number of columns per row | Unquoted delimiter within a field value | Verify the Quote Character setting matches the source. Check for inconsistent quoting in source data. |
| Numbers appearing as strings | Dynamic Typing disabled or unusual number format | Enable Dynamic Typing OR use an Output Transform to convert specific fields. |
| Empty or missing fields | Source data has blank values between delimiters | Expected behavior - empty values become empty strings or null. Use Filter Array to remove invalid records. |
| Encoding issues (garbled characters) | Source file uses a different character encoding (Latin-1, Windows-1252) | Pre-process the file to convert to UTF-8 OR use an Input Transform to handle specific characters. |

Best Practice: After CSV to JSON conversion, immediately add a Filter Array node to remove invalid records (missing required fields, invalid formats). Then use a Validate node to enforce data quality before database operations.

4. JSON to XML Node

Purpose & Use Cases

The JSON to XML node converts JSON objects or arrays into XML document strings. This enables integration with enterprise systems that require XML input, including SAP, Oracle, and legacy SOAP web services. The node handles element naming, attribute conversion, and proper XML structure generation.

When to use:

  • Sending data to SAP IDocs or BAPI interfaces
  • Integration with legacy SOAP web services
  • Generating XML files for EDI transactions (EDIFACT, X12)
  • Creating configuration files or data exports in XML format
  • B2B integrations requiring XML document exchange

Configuration Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "JSON to XML" | Display name in flow designer |
| Root Element | String | No | "root" | Name of the root XML element wrapping the entire document. |
| Array Item Element | String | No | "item" | Element name for array items when an array is converted to XML. |
| Attribute Prefix | String | No | "@" | JSON keys starting with this prefix become XML attributes instead of elements. |
| Text Node Name | String | No | "#text" | JSON key that becomes the text content of the parent element (mixed content). |
| CDATA Fields | Array | No | [] | Field names whose values should be wrapped in CDATA sections (for HTML/XML content). |
| Include Declaration | Boolean | No | true | Include the <?xml version="1.0"?> declaration at the beginning. |
| Pretty Print | Boolean | No | false | Format output with indentation for readability. Disable in production to reduce size. |

Input/Output Specifications

Input: JSON object or array

Example Input (with attribute prefix):

{
  "order": {
    "@id": "ORD-12345",
    "@status": "Pending",
    "customer": "ACME Manufacturing",
    "items": [
      {"partNumber": "P-100", "quantity": 50},
      {"partNumber": "P-200", "quantity": 25}
    ]
  }
}

Output: XML string

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <order id="ORD-12345" status="Pending">
    <customer>ACME Manufacturing</customer>
    <items>
      <item>
        <partNumber>P-100</partNumber>
        <quantity>50</quantity>
      </item>
      <item>
        <partNumber>P-200</partNumber>
        <quantity>25</quantity>
      </item>
    </items>
  </order>
</root>

Real-World Use Cases

Use Case 1: SAP IDoc Generation (Basic)

A manufacturer sends production orders to SAP via IDoc interface. The flow transforms internal order data into SAP's required XML structure with proper segment naming and attributes.

Configuration:

  • Root Element: "ORDERS05"
  • Array Item Element: "E1EDP01" (SAP segment name)
  • Include Declaration: true

Use Case 2: EDI Trading Partner Integration (Intermediate)

A distributor exchanges purchase orders with retail partners using XML-based EDI. Each partner requires slightly different XML structures. Input Transforms reshape data to match partner-specific requirements before XML conversion.

Use Case 3: SOAP Web Service Call with Complex Payload (Advanced)

A logistics company integrates with a carrier's SOAP API for shipment booking. The API requires a complex nested XML structure with namespaces, attributes, and CDATA sections for special instructions.

Configuration:

  • Attribute Prefix: "@" (for XML attributes like @type, @version)
  • CDATA Fields: ["specialInstructions", "deliveryNotes"]
  • Input Transform: Adds namespace prefixes and structures data per SOAP envelope requirements (see the sketch below)
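
A minimal sketch of that Input Transform; the namespace URI is the standard SOAP 1.1 envelope namespace, but the element names and payload fields (shipmentId, notes) are illustrative, not the carrier's actual schema:

{
  "soap:Envelope": {
    "@xmlns:soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "soap:Body": {
      "BookShipmentRequest": {
        "@version": "1.0",
        "reference": shipmentId,
        "specialInstructions": notes
      }
    }
  }
}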

Error Handling Patterns

| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Invalid XML element names | JSON keys contain spaces or invalid XML characters | Use an Input Transform to rename keys to valid XML names before conversion. |
| Special characters in content | Content contains <, >, &, or quotes | Automatically escaped by default. For HTML content, add the field to the CDATA Fields list. |
| Target system rejects XML | Missing required elements or wrong structure | Compare output with the target system's XSD schema. Use an Input Transform to ensure the required structure. |

5. XML to JSON Node

Purpose & Use Cases

The XML to JSON node parses XML documents into JSON objects. Attributes, elements, text content, and CDATA sections are converted to their JSON equivalents. This is essential for processing data received from SAP, legacy SOAP services, and XML-based integrations.

When to use:

  • Processing responses from SAP BAPI/RFC calls
  • Consuming SOAP web service responses
  • Parsing XML files received from external partners
  • Converting legacy data exports to modern JSON format

Configuration Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "XML to JSON" | Display name in flow designer |
| Attribute Prefix | String | No | "@" | Prefix added to JSON keys for XML attributes. |
| Text Node Name | String | No | "#text" | JSON key for element text content (when an element has both attributes and text). |
| Merge Attributes | Boolean | No | false | Merge attributes directly into the parent object without a prefix (flatter structure). |
| Explicit Array | Array | No | [] | Element names that should always be arrays, even with a single item. |
| Ignore Attributes | Boolean | No | false | Discard all XML attributes (simpler output but loses attribute data). |
| Trim Text | Boolean | No | true | Remove whitespace around text content. |

Critical Edge Case - Explicit Array: XML elements that appear once become objects, but multiple occurrences become arrays. This inconsistency breaks downstream processing. Use Explicit Array to force specific elements to always be arrays, even with single items.

Input/Output Specifications

Input: XML string

Example Input:

<order id="ORD-12345" status="Shipped">
  <customer type="business">ACME Manufacturing</customer>
  <lineItem>
    <partNumber>P-100</partNumber>
    <quantity>50</quantity>
  </lineItem>
  <lineItem>
    <partNumber>P-200</partNumber>
    <quantity>25</quantity>
  </lineItem>
</order>

Output: JSON object

{
  "order": {
    "@id": "ORD-12345",
    "@status": "Shipped",
    "customer": {
      "@type": "business",
      "#text": "ACME Manufacturing"
    },
    "lineItem": [
      {
        "partNumber": "P-100",
        "quantity": "50"
      },
      {
        "partNumber": "P-200",
        "quantity": "25"
      }
    ]
  }
}

Real-World Use Cases

Use Case 1: SAP IDoc Processing (Basic)

A manufacturer receives material master updates from SAP via IDoc XML. The flow parses the IDoc, extracts relevant segments, and updates local inventory records.

Configuration:

  • Explicit Array: ["E1MARAM", "E1MAKTM"] (SAP segments that may have 1 or many items)
  • Trim Text: true

Use Case 2: SOAP Response Processing (Intermediate)

A logistics company calls a carrier's SOAP API for rate quotes. The XML response contains namespaces and complex nested structures that must be parsed and simplified for internal use.

Use Case 3: EDI Document Translation (Advanced)

A retailer receives purchase orders in XML format from multiple trading partners. Each partner uses slightly different XML structures. The flow normalizes all formats into a standard JSON structure for consistent processing.

Error Handling Patterns

| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Sometimes array, sometimes object | XML element count varies between 1 and many | Add the element name to the Explicit Array list to always return an array. |
| Malformed XML error | Invalid XML structure (unclosed tags, invalid characters) | Wrap in Try/Catch. Log the failed XML for investigation. Return an error to the source system. |
| Namespace prefixes in keys | XML uses namespaces (ns:elementName) | Use an Output Transform to remove namespace prefixes from keys (see the sketch below). |
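
For the namespace row above, a one-level sketch using standard JSONata object functions ($each, $merge, $substringAfter); it renames only top-level keys, so deeply nested documents would need a recursive function:

$merge($each($, function($v, $k) {
  { ($contains($k, ":") ? $substringAfter($k, ":") : $k): $v }
}))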

6. Array Manipulation Nodes

Array manipulation nodes process collections of records - filtering, deduplicating, and grouping data for downstream operations. These nodes work on JSON arrays and are essential for data quality, business rule application, and report generation.

6.1 Unique Array Node

Purpose: Removes duplicate records from an array based on one or more fields. When duplicates are found, keeps either the first or last occurrence.

Configuration Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| Unique Field | JSONata / String | Required | Field or expression to determine uniqueness. Can be a simple field name or a complex expression combining multiple fields. |
| Keep | Enum | First | When duplicates are found: First (keep earliest), Last (keep most recent) |

Input Example:

[
  {"customerId": "C001", "name": "ACME Corp", "lastOrder": "2024-01-10"},
  {"customerId": "C002", "name": "Beta Inc", "lastOrder": "2024-01-12"},
  {"customerId": "C001", "name": "ACME Corporation", "lastOrder": "2024-01-15"},
  {"customerId": "C003", "name": "Gamma LLC", "lastOrder": "2024-01-14"}
]

Output (Unique Field = customerId, Keep = Last):

[
  {"customerId": "C002", "name": "Beta Inc", "lastOrder": "2024-01-12"},
  {"customerId": "C001", "name": "ACME Corporation", "lastOrder": "2024-01-15"},
  {"customerId": "C003", "name": "Gamma LLC", "lastOrder": "2024-01-14"}
]

Use Cases:

  • Basic - Remove duplicate part numbers: Unique Field = partNumber, Keep = First
  • Intermediate - Composite key deduplication: Unique Field = warehouseCode & "-" & partNumber (combines fields)
  • Advanced - Keep most recent version: Unique Field = documentId, Keep = Last (assumes newest record is last in array)
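
Because Unique Field accepts any JSONata expression, normalization can be folded into the key itself. A sketch that deduplicates customer IDs case-insensitively and ignores stray whitespace:

$lowercase($trim(customerId))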

6.2 Filter Array Node

Purpose: Selects array items matching a condition expression. Items where the expression evaluates to true are included in output; others are excluded.

Configuration Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| Filter Expression | JSONata | Required | Expression evaluated for each item. Return true to include, false to exclude. |

Input Example:

[
  {"orderId": "ORD-001", "status": "Pending", "amount": 500},
  {"orderId": "ORD-002", "status": "Shipped", "amount": 1500},
  {"orderId": "ORD-003", "status": "Pending", "amount": 250},
  {"orderId": "ORD-004", "status": "Delivered", "amount": 3000}
]

Output (Filter Expression = status = "Pending"):

[
  {"orderId": "ORD-001", "status": "Pending", "amount": 500},
  {"orderId": "ORD-003", "status": "Pending", "amount": 250}
]

Use Cases:

  • Basic - Status filter: Filter Expression = status = "Active"
  • Intermediate - Numeric threshold: Filter Expression = quantity > 0 and unitPrice >= 10
  • Advanced - Complex business rule: Filter Expression = (status = "Ready" or priority = "High") and assignedTo != null

Performance Tip: Place Filter Array nodes early in your flow to reduce the data volume processed by subsequent nodes. Filtering 10,000 records down to 500 before a database lookup improves performance dramatically.
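
Filter expressions are ordinary JSONata, so date logic works as well. A hedged sketch selecting overdue, still-open items, assuming dueDate holds an ISO-8601 string:

$toMillis(dueDate) < $millis() and status != "Closed"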

6.3 Group Array Node

Purpose: Groups array items by one or more fields and optionally applies aggregation functions (sum, count, average, min, max) to numeric fields within each group.

Configuration Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| Group By | Array of Strings | Required | Field name(s) to group by. Multiple fields create a composite grouping. |
| Aggregations | Array of Objects | [] | Each object: {field, function, alias}. Functions: sum, count, avg, min, max |
| Include Items | Boolean | false | Include the array of original items within each group (for drill-down access). |

Input Example:

[
  {"warehouse": "EAST", "product": "Widget-A", "quantity": 100, "value": 500},
  {"warehouse": "EAST", "product": "Widget-B", "quantity": 50, "value": 250},
  {"warehouse": "WEST", "product": "Widget-A", "quantity": 75, "value": 375},
  {"warehouse": "EAST", "product": "Widget-A", "quantity": 25, "value": 125}
]

Configuration: Group By = ["warehouse", "product"], Aggregations = [{field: "quantity", function: "sum", alias: "totalQty"}, {field: "value", function: "sum", alias: "totalValue"}]

Output:

[
  {"warehouse": "EAST", "product": "Widget-A", "totalQty": 125, "totalValue": 625},
  {"warehouse": "EAST", "product": "Widget-B", "totalQty": 50, "totalValue": 250},
  {"warehouse": "WEST", "product": "Widget-A", "totalQty": 75, "totalValue": 375}
]
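
If you need derived metrics on top of the aggregations, an Output Transform can compute them per summary row. A sketch based on the output above (avgUnitValue is an illustrative name):

$.{
  "warehouse": warehouse,
  "product": product,
  "totalQty": totalQty,
  "totalValue": totalValue,
  "avgUnitValue": totalQty > 0 ? totalValue / totalQty : 0
}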

Real-World Use Cases:

Use Case 1: Daily Production Summary (Basic)

A manufacturing plant summarizes daily production by production line. Group By = ["productionLine"], Aggregations = sum of "unitsProduced", count of records, avg of "cycleTime".

Use Case 2: Customer Order Analysis (Intermediate)

An e-commerce company analyzes orders by customer and product category. Group By = ["customerId", "category"], Include Items = true for drill-down to individual orders.

Use Case 3: Inventory Valuation by Location and Status (Advanced)

A pharmaceutical distributor values inventory by warehouse, product, and lot status (quarantine, released, expired). Multiple aggregations calculate total quantity, total value, and average days to expiry per group.

7. Common Patterns & Best Practices

Transform Chaining Patterns

Pattern 1: CSV Import Pipeline

File Source → CSV to JSON → Filter Array (remove invalid) → Unique Array (deduplicate) → Validate → Mutate (database insert)

Pattern 2: Cross-System Data Exchange

Query (get data) → Filter Array (select relevant) → JSON to XML → HTTP Connector (send to SAP)

Pattern 3: Report Generation

Query (raw data) → Filter Array (date range) → Group Array (summarize) → JSON to CSV → System Email (attach report)

Performance Optimization

  • Filter Early: Place Filter Array nodes as early as possible to reduce data volume for subsequent operations
  • Minimize Transform Chains: Combine simple operations into a single Input/Output Transform rather than chaining multiple nodes (see the sketch after this list)
  • Use Keys Parameter: In JSON to CSV, explicitly specify Keys to avoid processing unnecessary fields
  • Batch Large Arrays: For arrays exceeding 10,000 items, consider processing in batches using Collect node
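
As an example of collapsing a chain (second bullet above): a single Input Transform that filters and reshapes in one pass, with illustrative field names:

$[status = "Active"].{
  "id": orderId,
  "amount": $number(amount)
}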

Data Quality Patterns

  • Always validate after parsing: CSV to JSON → Validate node enforces data quality before database operations
  • Handle encoding issues: For legacy systems, document expected character encoding and test with special characters
  • Log transformation failures: Wrap transforms in Try/Catch and log failed records for manual review
  • Use Explicit Array: For XML parsing, always specify elements that should be arrays to prevent object/array inconsistencies

8. Troubleshooting

| Symptom | Likely Cause | Resolution |
|---|---|---|
| CSV parse returns wrong number of columns | Delimiter mismatch or unquoted special characters | Verify Field Delimiter matches the source. Check the Quote Character setting. |
| XML element sometimes object, sometimes array | Element count varies between one and many | Add the element name to the Explicit Array parameter. |
| Filter returns empty array unexpectedly | Expression syntax error or data type mismatch | Add an Echo node before the filter to inspect data. Check string vs number comparisons. |
| Group aggregations showing NaN or null | Aggregating a non-numeric field or null values | Verify the field contains numbers. Filter out null values before grouping. |
| Nested objects showing as [object Object] in CSV | Complex nested structure not flattened | Enable Expand Array Objects OR use an Input Transform to flatten. |
| Transform returns undefined or empty | Input not in the expected format (object vs array) | Verify the input type matches node expectations. JSON to CSV expects an array. |

Diagnostic Approach: Always add an Echo node before and after transform nodes during development. This shows the exact input and output, making issues immediately visible.

9. Related Resources

  • Source & Trigger Nodes Complete Guide - Initiating flows and trigger mechanisms
  • Flow Control Nodes Complete Guide - Routing, branching, and parallel processing
  • Script Nodes Complete Guide - JSONata and JavaScript custom logic
  • Fuuz Custom JSONata Library - Platform-specific functions and bindings
  • Data Mapping Designer - Visual field-to-field transformation with schema validation
  • Platform Website: fuuz.com
