Transform Nodes

Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: None - This guide assumes no prior knowledge

1. Overview

What are Transform Nodes?

Transform nodes convert data between formats and manipulate collections of records. They are the workhorses of data processing in Fuuz, enabling flows to accept data in one format (CSV from a legacy system), convert it to another format (JSON for processing), filter and group records, and output to yet another format (XML for an ERP system). All of this happens without writing custom code.

Why are they important?

Enterprise systems rarely speak the same language. Your ERP exports XML, your warehouse system expects CSV, your mobile app needs JSON, and your analytics platform wants aggregated summaries. Transform nodes bridge these gaps, enabling seamless data exchange without manual file manipulation or custom integration code.

The 7 Transform Node Types

  • JSON to CSV - Converts a JSON array to a comma-separated values string. Best for exports, legacy system integration, spreadsheet generation.
  • CSV to JSON - Parses a CSV string into a JSON array of objects. Best for file imports, batch data loading, external data ingestion.
  • JSON to XML - Converts a JSON object or array to an XML document string. Best for ERP integration, SOAP services, EDI transactions.
  • XML to JSON - Parses an XML document into a JSON object. Best for SAP integration, legacy SOAP APIs, XML file processing.
  • Unique Array - Removes duplicate items from an array based on a field. Best for data deduplication, master data cleanup, merge operations.
  • Filter Array - Selects items matching a condition expression. Best for business rule application, data selection, conditional processing.
  • Group Array - Groups items by field(s) with optional aggregation. Best for reporting, summaries, consolidation, analytics preparation.

Understanding Input & Output Transforms

All transform nodes support two additional transformation layers in Advanced Configuration:

  • Input Transform: Reformats the incoming payload BEFORE the main node operation executes.
  • Output Transform: Reformats the result AFTER the main node operation completes. The original input remains accessible via the $$ binding.

Key Concept - $ vs $$ Bindings: In Fuuz transforms, $ refers to the current payload (the input to the node), while $$ refers to the original context or pre-transform data.

2. JSON to CSV Node

Purpose & Use Cases

The JSON to CSV node converts a JSON array of objects into a comma-separated values (CSV) string. Each object becomes a row, and object properties become columns. Essential for data export to spreadsheets, legacy systems, and any application expecting tabular text data.

Configuration Parameters

  • Field Delimiter (String, default: comma) - Character separating fields. Use semicolon (;) for European locales, tab (\t) for TSV.
  • Field Wrap (String, default: double quote) - Character used to wrap fields containing special characters.
  • End of Line (Enum, default: \n) - Line ending: \n (Unix/Mac), \r\n (Windows), or \r (legacy Mac).
  • Prepend Header (Boolean, default: true) - Include column headers as the first row.
  • Keys (Array, default: all keys from the first object) - Explicit list of fields and their order. Unspecified fields are excluded.
  • Expand Array Objects (Boolean, default: false) - Flatten nested objects into separate columns (address.city becomes address_city).
  • Unwind Arrays (Boolean, default: false) - Create multiple rows for nested arrays; parent data is repeated per array item.

Edge Case - European Locales: Many European countries use the semicolon (;) as the field delimiter because the comma serves as the decimal separator. When exporting for German, French, or Italian systems, set Field Delimiter to semicolon.

Input/Output Example

Input: JSON array of objects

[
  {"orderNumber": "ORD-001", "customer": "ACME Manufacturing", "amount": 1500.00},
  {"orderNumber": "ORD-002", "customer": "Beta Industries", "amount": 2750.50}
]

Output: CSV string

orderNumber,customer,amount
ORD-001,ACME Manufacturing,1500
ORD-002,Beta Industries,2750.5

Real-World Use Cases

  • Daily Shipment Report (Basic): Keys: ["shipmentId", "destination", "weight"], End of Line: \r\n (Windows)
  • SAP Export for German ERP (Intermediate): Field Delimiter: ; (semicolon), Keys: ["MATNR", "MAKTX", "MEINS"]
  • Production Order with BOM Lines (Advanced): Unwind Arrays: true creates separate row per component
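
The Unwind Arrays behavior used in the production-order case above can be sketched in Python. This is an illustrative approximation, not the node's implementation; the `unwind_to_csv` helper, the sample order, and its field names are invented for the demo:

```python
import csv
import io

def unwind_to_csv(records, array_field, keys):
    """Illustrative sketch: repeat parent fields once per nested array item,
    then serialize the flattened rows to CSV (mimics Unwind Arrays: true)."""
    rows = []
    for rec in records:
        for item in rec.get(array_field, [{}]):
            # Parent fields win; missing keys fall back to the nested item.
            rows.append({k: rec.get(k, item.get(k, "")) for k in keys})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=keys, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

orders = [{
    "orderNumber": "PO-100",
    "components": [
        {"partNumber": "P-1", "qty": 2},
        {"partNumber": "P-2", "qty": 5},
    ],
}]

print(unwind_to_csv(orders, "components", ["orderNumber", "partNumber", "qty"]))
# orderNumber,partNumber,qty
# PO-100,P-1,2
# PO-100,P-2,5
```

Note how the parent orderNumber is repeated on every component row, which is exactly why unwound exports grow quickly for deep BOMs.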

3. CSV to JSON Node

Purpose & Use Cases

Parses comma-separated values text into a JSON array of objects. Each row becomes an object, with column headers as property keys. Essential for importing data from spreadsheets, legacy exports, and tabular text sources.

Configuration Parameters

  • Field Delimiter (String, default: comma) - Character separating fields. Must match the source file format.
  • Headers (Array/Boolean, default: true) - true means the first row contains headers; an array supplies explicit header names that override the file.
  • Skip Lines (Integer, default: 0) - Number of lines to skip at the beginning (for metadata rows before the headers).
  • Dynamic Typing (Boolean, default: true) - Automatically convert numbers and booleans from their string representation.
  • Trim Values (Boolean, default: true) - Remove leading/trailing whitespace from values.

Edge Case - Files with Metadata Headers: Many ERP exports include a report title, date, and other metadata in the first 2-3 rows before the column headers. Use Skip Lines to bypass these rows.
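
The interplay of Skip Lines, Dynamic Typing, and Trim Values can be sketched in a few lines of Python. The `csv_to_json` helper and the sample file are hypothetical stand-ins for the node's behavior, not its actual code:

```python
import csv
import io

def csv_to_json(text, skip_lines=0, dynamic_typing=True, trim=True):
    """Sketch: skip metadata rows, treat the next row as headers, and
    coerce numeric/boolean strings when dynamic typing is enabled."""
    def coerce(v):
        if trim:
            v = v.strip()
        if not dynamic_typing:
            return v
        if v.lower() in ("true", "false"):
            return v.lower() == "true"
        try:
            return int(v)
        except ValueError:
            try:
                return float(v)
            except ValueError:
                return v

    lines = text.splitlines()[skip_lines:]
    reader = csv.DictReader(io.StringIO("\n".join(lines)))
    return [{k: coerce(v) for k, v in row.items()} for row in reader]

# Two metadata rows before the real headers, as in many ERP exports.
raw = "Inventory Report\nGenerated: 2025-01-01\nsku,qty,active\nA-100, 25 ,true\n"
print(csv_to_json(raw, skip_lines=2))
# [{'sku': 'A-100', 'qty': 25, 'active': True}]
```

With dynamic typing off, qty would stay the string "25", a common source of downstream comparison bugs.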

4. JSON to XML Node

Purpose & Use Cases

Converts JSON objects/arrays into XML document strings for integration with SAP IDocs, BAPI interfaces, SOAP web services, and EDI transactions.

Configuration Parameters

  • Root Element (default: "root") - Name of the root XML element wrapping the entire document.
  • Array Item Element (default: "item") - Element name used for array items.
  • Attribute Prefix (default: "@") - JSON keys starting with the prefix become XML attributes (@id becomes id="...").
  • CDATA Fields (default: []) - Field names whose values are wrapped in CDATA sections (for HTML/XML content).
  • Include Declaration (default: true) - Include the <?xml version="1.0"?> declaration.
  • Pretty Print (default: false) - Format the output with indentation. Disable in production to reduce payload size.
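
The attribute-prefix convention can be illustrated with a small Python sketch. The `json_to_xml` helper is hypothetical and covers only the prefix rule, not the full node:

```python
import xml.etree.ElementTree as ET

def json_to_xml(obj, root_name="root", attr_prefix="@"):
    """Sketch: keys starting with the attribute prefix become XML attributes;
    all other keys become child elements (nested dicts recurse)."""
    elem = ET.Element(root_name)
    for key, value in obj.items():
        if key.startswith(attr_prefix):
            elem.set(key[len(attr_prefix):], str(value))
        elif isinstance(value, dict):
            elem.append(json_to_xml(value, key, attr_prefix))
        else:
            child = ET.SubElement(elem, key)
            child.text = str(value)
    return elem

doc = json_to_xml({"@id": "ORD-001", "customer": "ACME"}, root_name="order")
print(ET.tostring(doc, encoding="unicode"))
# <order id="ORD-001"><customer>ACME</customer></order>
```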

5. XML to JSON Node

Purpose & Use Cases

Parses XML documents into JSON objects. Essential for processing SAP IDocs, SOAP responses, and XML-based integrations.

Configuration Parameters

  • Attribute Prefix (default: "@") - Prefix for JSON keys representing XML attributes.
  • Text Node Name (default: "#text") - JSON key for element text content (mixed content).
  • Explicit Array (default: []) - Elements that should always be parsed as arrays, even with a single item.
  • Ignore Attributes (default: false) - Discard all XML attributes (simpler output).

Critical Edge Case - Explicit Array: An XML element that appears once is parsed as an object, but multiple occurrences become an array. This inconsistency breaks downstream processing. Use Explicit Array to force specific elements to always be arrays, even with a single item.
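
The single-vs-multiple inconsistency, and how Explicit Array resolves it, can be demonstrated with a toy parser. The `xml_to_json` function below is illustrative only and handles just nested elements and text:

```python
import xml.etree.ElementTree as ET

def xml_to_json(elem, explicit_array=()):
    """Sketch: repeated child elements collapse into a list, single ones stay
    scalar/object -- unless the tag is forced to a list via explicit_array."""
    result = {}
    for child in elem:
        value = xml_to_json(child, explicit_array) if len(child) else child.text
        if child.tag in result:
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        elif child.tag in explicit_array:
            result[child.tag] = [value]
        else:
            result[child.tag] = value
    return result

one = ET.fromstring("<order><line>A</line></order>")
two = ET.fromstring("<order><line>A</line><line>B</line></order>")

print(xml_to_json(one))                            # {'line': 'A'}        <- shape changes!
print(xml_to_json(two))                            # {'line': ['A', 'B']}
print(xml_to_json(one, explicit_array=("line",)))  # {'line': ['A']}      <- forced array
```

Downstream code that maps over order lines works on the second and third results but fails on the first, which is exactly the breakage the Explicit Array parameter prevents.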

6. Array Manipulation Nodes

Array manipulation nodes process collections of records - filtering, deduplicating, and grouping data for downstream operations.

6.1 Unique Array Node

Purpose: Removes duplicate records from an array based on one or more fields.

  • Unique Field (JSONata/String) - Field or expression that defines uniqueness. Fields can be combined: warehouseCode & "-" & partNumber.
  • Keep (Enum) - When duplicates are found: First (keep the earliest) or Last (keep the most recent).
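
The First/Last semantics can be sketched in Python; `unique_array` and the sample rows are hypothetical, and the composite key mirrors the JSONata expression warehouseCode & "-" & partNumber:

```python
def unique_array(items, key_fn, keep="First"):
    """Sketch: deduplicate on a derived key. Keep='First' retains the earliest
    occurrence of each key; Keep='Last' retains the most recent."""
    seen = {}
    for item in items:
        k = key_fn(item)
        if keep == "Last" or k not in seen:
            seen[k] = item
    return list(seen.values())

rows = [
    {"warehouseCode": "W1", "partNumber": "P-1", "qty": 5},
    {"warehouseCode": "W1", "partNumber": "P-1", "qty": 9},
]
key = lambda r: r["warehouseCode"] + "-" + r["partNumber"]

print(unique_array(rows, key, keep="First"))  # the qty 5 record survives
print(unique_array(rows, key, keep="Last"))   # the qty 9 record survives
```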

6.2 Filter Array Node

Purpose: Selects array items matching a condition expression. Items where expression evaluates to true are included.

Filter Expression Examples:

  • Basic: status = "Active"
  • Numeric: quantity > 0 and unitPrice >= 10
  • Complex: (status = "Ready" or priority = "High") and assignedTo != null

Performance Tip: Place Filter Array nodes early in your flow to reduce data volume. Filtering 10,000 records down to 500 before a database lookup improves performance dramatically.
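
The complex expression above translates directly into a list comprehension; the sample records are invented for illustration:

```python
records = [
    {"status": "Ready", "priority": "Low",  "assignedTo": "kim"},
    {"status": "Hold",  "priority": "High", "assignedTo": None},
    {"status": "Hold",  "priority": "High", "assignedTo": "lee"},
]

# Python equivalent of the JSONata filter:
# (status = "Ready" or priority = "High") and assignedTo != null
matched = [
    r for r in records
    if (r["status"] == "Ready" or r["priority"] == "High")
    and r["assignedTo"] is not None
]
print([r["assignedTo"] for r in matched])  # ['kim', 'lee']
```

The second record satisfies the priority clause but is excluded by the null check, showing why both halves of the expression matter.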

6.3 Group Array Node

Purpose: Groups items by field(s) with optional aggregation functions (sum, count, avg, min, max).

  • Group By (Array) - Field name(s) to group by. Multiple fields create a composite grouping.
  • Aggregations (Array) - Each entry is {field, function, alias}. Supported functions: sum, count, avg, min, max.
  • Include Items (Boolean) - Include the original items array within each group for drill-down access.
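
Grouping with aggregation can be sketched as follows; `group_array` is an illustrative approximation of the node's {field, function, alias} contract, not its implementation:

```python
from collections import defaultdict

def group_array(items, group_by, aggregations, include_items=False):
    """Sketch: bucket items by the group_by field(s), then compute each
    {field, function, alias} aggregate over every bucket."""
    funcs = {
        "sum": sum,
        "count": len,
        "avg": lambda v: sum(v) / len(v),
        "min": min,
        "max": max,
    }
    buckets = defaultdict(list)
    for item in items:
        buckets[tuple(item[f] for f in group_by)].append(item)

    groups = []
    for key, members in buckets.items():
        group = dict(zip(group_by, key))
        for agg in aggregations:
            values = [m[agg["field"]] for m in members]
            group[agg["alias"]] = funcs[agg["function"]](values)
        if include_items:
            group["items"] = members  # drill-down access to the originals
        groups.append(group)
    return groups

orders = [
    {"customer": "ACME", "amount": 1500.0},
    {"customer": "ACME", "amount": 500.0},
    {"customer": "Beta", "amount": 2750.5},
]
print(group_array(orders, ["customer"],
                  [{"field": "amount", "function": "sum", "alias": "totalAmount"}]))
# [{'customer': 'ACME', 'totalAmount': 2000.0}, {'customer': 'Beta', 'totalAmount': 2750.5}]
```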

7. Best Practices

Transform Chaining Patterns

Pattern 1: CSV Import Pipeline

File Source → CSV to JSON → Filter Array (remove invalid) → Unique Array (deduplicate) → Validate → Mutate

Pattern 2: Cross-System Data Exchange

Query (get data) → Filter Array (select relevant) → JSON to XML → HTTP Connector (send to SAP)

Pattern 3: Report Generation

Query (raw data) → Filter Array (date range) → Group Array (summarize) → JSON to CSV → System Email

Performance Optimization

  • Filter Early: Place Filter Array nodes as early as possible to reduce data volume
  • Minimize Chains: Combine operations into single Input/Output Transforms
  • Use Keys: In JSON to CSV, explicitly specify Keys to avoid unnecessary fields
  • Batch Large Arrays: For arrays >10,000 items, process in batches
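
Batching a large array before processing can be as simple as fixed-size slicing (illustrative sketch; the 10,000-item threshold comes from the guideline above):

```python
def batch(items, size):
    """Sketch: split a large array into fixed-size chunks for sequential processing."""
    return [items[i:i + size] for i in range(0, len(items), size)]

chunks = batch(list(range(25000)), 10000)
print([len(c) for c in chunks])  # [10000, 10000, 5000]
```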

Data Quality Patterns

  • Validate after parsing: CSV to JSON → Validate node enforces data quality
  • Handle encoding: Document expected character encoding for legacy systems
  • Log failures: Wrap in Try/Catch and log failed records for review
  • Use Explicit Array: For XML parsing, specify elements that should be arrays

8. Troubleshooting

  • CSV parse returns wrong columns - Likely cause: delimiter mismatch or unquoted special characters. Resolution: verify Field Delimiter matches the source file.
  • XML element is sometimes an object, sometimes an array - Likely cause: element count varies between one and many. Resolution: add the element name to the Explicit Array parameter.
  • Filter returns an empty array - Likely cause: expression syntax error or type mismatch. Resolution: add an Echo node to inspect the data; check string vs. number types.
  • Group aggregations show NaN - Likely cause: aggregating a non-numeric field or null values. Resolution: filter out null values before grouping.
  • [object Object] appears in CSV output - Likely cause: nested structure not flattened. Resolution: enable Expand Array Objects or use an Input Transform.
  • Transform returns empty/undefined - Likely cause: input not in the expected format. Resolution: verify the input type; JSON to CSV expects an array.

Diagnostic Approach: Always add an Echo node before and after transform nodes during development. This shows the exact input and output, making issues immediately visible.
9. Related Articles

  • Source & Trigger Nodes Complete Guide - Initiating flows and trigger mechanisms
  • Flow Control Nodes Complete Guide - Routing, branching, and parallel processing
  • Script Nodes Complete Guide - JSONata and JavaScript custom logic
  • Fuuz Custom JSONata Library - Platform-specific functions and bindings
  • Data Mapping Designer - Visual field-to-field transformation with schema validation
  • Platform Website: fuuz.com

10. Revision History


Version 1.0 (2025-01-01, Craig Scott): Initial release - complete guide covering all 7 transform nodes with configuration parameters, input/output specifications, use cases, and error handling patterns.