Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: None - This guide assumes no prior knowledge
What are Transform Nodes?
Transform nodes convert data between formats and manipulate collections of records. They are the workhorses of data processing in Fuuz, enabling flows to accept data in one format (CSV from a legacy system), convert it to another format (JSON for processing), filter and group records, and output to yet another format (XML for an ERP system). All of this happens without writing custom code.
Why are they important?
Enterprise systems rarely speak the same language. Your ERP exports XML, your warehouse system expects CSV, your mobile app needs JSON, and your analytics platform wants aggregated summaries. Transform nodes bridge these gaps, enabling seamless data exchange without manual file manipulation or custom integration code.
| Node Type | Primary Purpose | Best For |
|---|---|---|
| JSON to CSV | Convert JSON array to comma-separated values string | Exports, legacy system integration, spreadsheet generation |
| CSV to JSON | Parse CSV string into JSON array of objects | File imports, batch data loading, external data ingestion |
| JSON to XML | Convert JSON object/array to XML document string | ERP integration, SOAP services, EDI transactions |
| XML to JSON | Parse XML document into JSON object | SAP integration, legacy SOAP APIs, XML file processing |
| Unique Array | Remove duplicate items from array based on field | Data deduplication, master data cleanup, merge operations |
| Filter Array | Select items matching condition expression | Business rule application, data selection, conditional processing |
| Group Array | Group items by field(s) with optional aggregation | Reporting, summaries, consolidation, analytics preparation |
All transform nodes support two additional transformation layers in Advanced Configuration: an Input Transform applied to the incoming data before the node's operation, and an Output Transform applied to the result before it is passed downstream.
JSON to CSV
The JSON to CSV node converts a JSON array of objects into a comma-separated values (CSV) string. Each object becomes a row, and object properties become columns. Essential for data export to spreadsheets, legacy systems, and any application expecting tabular text data.
| Parameter | Type | Default | Description |
|---|---|---|---|
| Field Delimiter | String | , (comma) | Character separating fields. Use semicolon (;) for European locales, tab (\t) for TSV. |
| Field Wrap | String | " (double quote) | Character used to wrap fields containing special characters. |
| End of Line | Enum | \n (Unix) | Line ending: \n (Unix/Mac), \r\n (Windows), \r (legacy Mac) |
| Prepend Header | Boolean | true | Include column headers as first row. |
| Keys | Array | All keys from first object | Explicit list of fields and their order. Unspecified fields excluded. |
| Expand Array Objects | Boolean | false | Flatten nested objects into separate columns (address.city becomes address_city). |
| Unwind Arrays | Boolean | false | Create multiple rows for nested arrays. Parent data repeated per array item. |
Input: JSON array of objects

```json
[
  {"orderNumber": "ORD-001", "customer": "ACME Manufacturing", "amount": 1500.00},
  {"orderNumber": "ORD-002", "customer": "Beta Industries", "amount": 2750.50}
]
```

Output: CSV string

```
orderNumber,customer,amount
ORD-001,ACME Manufacturing,1500
ORD-002,Beta Industries,2750.5
```
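Conceptually, the conversion behaves like the Python sketch below. This is illustrative only, not the node's implementation; `json_to_csv` and its parameter names are stand-ins for the configuration parameters above.

```python
# Minimal sketch of the JSON to CSV node's behavior (illustrative, not platform code)
import csv
import io

def json_to_csv(records, delimiter=",", keys=None, prepend_header=True):
    if not records:
        return ""
    # Default Keys: all keys from the first object, in their original order
    keys = keys or list(records[0].keys())
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=keys, delimiter=delimiter,
                            extrasaction="ignore",  # unspecified fields excluded
                            lineterminator="\n")    # End of Line: Unix
    if prepend_header:
        writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

orders = [
    {"orderNumber": "ORD-001", "customer": "ACME Manufacturing", "amount": 1500.00},
    {"orderNumber": "ORD-002", "customer": "Beta Industries", "amount": 2750.50},
]
print(json_to_csv(orders))
```

Passing an explicit `keys` list both selects and orders the columns, which mirrors the Keys parameter's behavior of excluding unspecified fields.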
CSV to JSON
Parses comma-separated values text into a JSON array of objects. Each row becomes an object, with column headers as property keys. Essential for importing data from spreadsheets, legacy exports, and tabular text sources.
| Parameter | Type | Default | Description |
|---|---|---|---|
| Field Delimiter | String | , (comma) | Character separating fields. Must match source file format. |
| Headers | Array/Boolean | true | true = first row is headers; Array = explicit header names to override file |
| Skip Lines | Integer | 0 | Lines to skip at beginning (for metadata rows before headers). |
| Dynamic Typing | Boolean | true | Automatically convert numbers and booleans from string representation. |
| Trim Values | Boolean | true | Remove leading/trailing whitespace from values. |
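The parsing side can be sketched the same way. Again this is an illustrative stand-in, not the node's code; `csv_to_json` and `parse_value` are hypothetical names chosen to mirror the parameters above.

```python
# Minimal sketch of the CSV to JSON node's behavior (illustrative, not platform code)
import csv

def parse_value(v):
    """Dynamic Typing: convert numeric and boolean strings, else keep the text."""
    if v.lower() in ("true", "false"):
        return v.lower() == "true"
    try:
        return int(v)
    except ValueError:
        pass
    try:
        return float(v)
    except ValueError:
        return v

def csv_to_json(text, delimiter=",", skip_lines=0, dynamic_typing=True, trim=True):
    lines = text.splitlines()[skip_lines:]               # Skip Lines: drop metadata rows
    reader = csv.DictReader(lines, delimiter=delimiter)  # first remaining row = headers
    rows = []
    for row in reader:
        if trim:
            row = {k.strip(): v.strip() for k, v in row.items()}
        if dynamic_typing:
            row = {k: parse_value(v) for k, v in row.items()}
        rows.append(row)
    return rows

print(csv_to_json("orderNumber,amount\nORD-001, 1500.00 "))
```

Note the interplay of Trim Values and Dynamic Typing: `" 1500.00 "` only converts to a number after whitespace is stripped, which is why both default to true.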
JSON to XML
Converts JSON objects/arrays into XML document strings for integration with SAP IDocs, BAPI interfaces, SOAP web services, and EDI transactions.
| Parameter | Default | Description |
|---|---|---|
| Root Element | "root" | Name of root XML element wrapping entire document |
| Array Item Element | "item" | Element name for array items |
| Attribute Prefix | "@" | JSON keys starting with prefix become XML attributes (@id becomes id="...") |
| CDATA Fields | [] | Field names whose values wrap in CDATA sections (for HTML/XML content) |
| Include Declaration | true | Include <?xml version="1.0"?> declaration |
| Pretty Print | false | Format with indentation. Disable for production to reduce size. |
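A rough Python sketch of how the Root Element, Array Item Element, and Attribute Prefix parameters interact (illustrative only; `json_to_xml` is a stand-in and omits CDATA, declaration, and pretty-print handling):

```python
# Minimal sketch of JSON to XML conversion (illustrative, not the node's implementation)
import xml.etree.ElementTree as ET

def json_to_xml(data, root_name="root", item_name="item", attr_prefix="@"):
    root = ET.Element(root_name)

    def build(parent, value, name):
        if isinstance(value, dict):
            el = ET.SubElement(parent, name)
            for k, v in value.items():
                if k.startswith(attr_prefix):
                    el.set(k[len(attr_prefix):], str(v))  # @id becomes id="..."
                else:
                    build(el, v, k)
        elif isinstance(value, list):
            for entry in value:            # one element per array item
                build(parent, entry, name)
        else:
            ET.SubElement(parent, name).text = str(value)

    if isinstance(data, list):
        for entry in data:
            build(root, entry, item_name)  # arrays wrap each entry in <item>
    else:
        for k, v in data.items():
            build(root, v, k)
    return ET.tostring(root, encoding="unicode")

print(json_to_xml({"order": {"@id": "1", "customer": "ACME"}}))
```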
XML to JSON
Parses XML documents into JSON objects. Essential for processing SAP IDocs, SOAP responses, and XML-based integrations.
| Parameter | Default | Description |
|---|---|---|
| Attribute Prefix | "@" | Prefix for JSON keys representing XML attributes |
| Text Node Name | "#text" | JSON key for element text content (mixed content) |
| Explicit Array | [] | Elements that should always be arrays, even with single item |
| Ignore Attributes | false | Discard all XML attributes (simpler output) |
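The Explicit Array parameter matters because XML gives no structural hint that a repeating element is a collection when only one instance is present. A sketch of the parsing logic (illustrative only; `xml_to_json` is a stand-in, without namespace or Ignore Attributes handling):

```python
# Minimal sketch of XML to JSON parsing (illustrative, not the node's implementation)
import xml.etree.ElementTree as ET

def xml_to_json(xml_text, attr_prefix="@", text_key="#text", explicit_array=()):
    def convert(el):
        node = {attr_prefix + k: v for k, v in el.attrib.items()}
        for child in el:
            value = convert(child)
            if child.tag in node:                 # repeated element becomes a list
                if not isinstance(node[child.tag], list):
                    node[child.tag] = [node[child.tag]]
                node[child.tag].append(value)
            elif child.tag in explicit_array:     # always a list, even with one item
                node[child.tag] = [value]
            else:
                node[child.tag] = value
        text = (el.text or "").strip()
        if text and not node:
            return text                           # leaf element -> plain string
        if text:
            node[text_key] = text                 # mixed content -> "#text" key
        return node
    return convert(ET.fromstring(xml_text))

print(xml_to_json('<order id="1"><line/><line/></order>', explicit_array=("line",)))
```

Without `explicit_array`, a single `<line/>` would parse as an object while two would parse as a list, which is exactly the "sometimes object, sometimes array" symptom in the troubleshooting table.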
Array Manipulation Nodes
Array manipulation nodes process collections of records: filtering, deduplicating, and grouping data for downstream operations.
Unique Array
Purpose: Removes duplicate records from an array based on one or more fields.
| Parameter | Type | Description |
|---|---|---|
| Unique Field | JSONata/String | Field or expression for uniqueness. Can combine fields: warehouseCode & "-" & partNumber |
| Keep | Enum | When duplicates found: First (keep earliest) or Last (keep most recent) |
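The deduplication logic can be sketched in a few lines of Python (illustrative only; `unique_array` and the key function are stand-ins, with the key playing the role of the JSONata Unique Field expression):

```python
# Minimal sketch of Unique Array (illustrative): dedupe by a key, keep First or Last
def unique_array(items, key, keep="First"):
    seen = {}
    for item in items:
        k = key(item)
        if keep == "Last" or k not in seen:
            seen[k] = item  # "Last": later duplicates overwrite the earlier record
    return list(seen.values())

stock = [
    {"warehouseCode": "W1", "partNumber": "P-100", "qty": 5},
    {"warehouseCode": "W1", "partNumber": "P-100", "qty": 8},
    {"warehouseCode": "W2", "partNumber": "P-100", "qty": 3},
]
# Combined key, mirroring the JSONata example: warehouseCode & "-" & partNumber
by_location_part = lambda r: r["warehouseCode"] + "-" + r["partNumber"]
print(unique_array(stock, by_location_part))
```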
Filter Array
Purpose: Selects array items matching a condition expression. Items where the expression evaluates to true are included; all other items are dropped.
Filter Expression Examples (illustrative JSONata-style conditions):
- status = "ACTIVE" (keep only active records)
- amount > 1000 (numeric threshold)
- warehouseCode in ["W1", "W2"] (membership test)
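The same selection can be sketched in Python (illustrative only; `filter_array` and the predicate are stand-ins for the node and its condition expression, not platform APIs):

```python
# Minimal sketch of Filter Array (illustrative): keep items whose condition is true
def filter_array(items, predicate):
    return [item for item in items if predicate(item)]

orders = [
    {"orderNumber": "ORD-001", "status": "ACTIVE", "amount": 1500.00},
    {"orderNumber": "ORD-002", "status": "CLOSED", "amount": 2750.50},
    {"orderNumber": "ORD-003", "status": "ACTIVE", "amount": 800.00},
]
# Roughly equivalent to a JSONata condition: status = "ACTIVE" and amount > 1000
big_active = filter_array(orders, lambda o: o["status"] == "ACTIVE" and o["amount"] > 1000)
```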
Group Array
Purpose: Groups items by field(s) with optional aggregation functions (sum, count, avg, min, max).
| Parameter | Type | Description |
|---|---|---|
| Group By | Array | Field name(s) to group by. Multiple fields create composite grouping. |
| Aggregations | Array | Each: {field, function, alias}. Functions: sum, count, avg, min, max |
| Include Items | Boolean | Include original items array within each group for drill-down access. |
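Grouping with aggregation can be sketched like this (illustrative only; `group_array` is a stand-in that mirrors the Group By, Aggregations, and Include Items parameters above):

```python
# Minimal sketch of Group Array (illustrative): group by fields, apply aggregations
from collections import defaultdict

AGG_FUNCS = {"sum": sum, "count": len, "min": min, "max": max,
             "avg": lambda vals: sum(vals) / len(vals)}

def group_array(items, group_by, aggregations=(), include_items=False):
    buckets = defaultdict(list)
    for item in items:
        # Multiple Group By fields form a composite key
        buckets[tuple(item[f] for f in group_by)].append(item)
    groups = []
    for key, members in buckets.items():
        group = dict(zip(group_by, key))
        for agg in aggregations:  # each: {field, function, alias}
            values = [m[agg["field"]] for m in members]
            group[agg["alias"]] = AGG_FUNCS[agg["function"]](values)
        if include_items:
            group["items"] = members  # drill-down access to the original records
        groups.append(group)
    return groups

orders = [
    {"customer": "ACME", "amount": 100},
    {"customer": "ACME", "amount": 50},
    {"customer": "Beta", "amount": 75},
]
totals = group_array(orders, ["customer"],
                     [{"field": "amount", "function": "sum", "alias": "totalAmount"}])
```

Note that `sum` over a list containing nulls would fail here just as the real node produces NaN, which is why the troubleshooting table recommends filtering nulls before grouping.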
Pattern 1: CSV Import Pipeline
File Source → CSV to JSON → Filter Array (remove invalid) → Unique Array (deduplicate) → Validate → Mutate
Pattern 2: Cross-System Data Exchange
Query (get data) → Filter Array (select relevant) → JSON to XML → HTTP Connector (send to SAP)
Pattern 3: Report Generation
Query (raw data) → Filter Array (date range) → Group Array (summarize) → JSON to CSV → System Email
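The patterns above all share one shape: each node consumes the previous node's output. Pattern 1 can be sketched as a chain of toy stages (illustrative only; `run_pipeline`, `csv_to_rows`, and the lambdas are simplified stand-ins for the flow nodes, not platform APIs):

```python
# Minimal sketch of Pattern 1 (CSV import) as a chain of stages, one per node
def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)  # each node consumes the previous node's output
    return data

def csv_to_rows(text):  # toy stand-in for the CSV to JSON node
    lines = text.splitlines()
    headers = lines[0].split(",")
    return [dict(zip(headers, line.split(","))) for line in lines[1:]]

csv_text = "sku,qty\nA-1,5\nA-1,5\nB-2,0"
stages = [
    csv_to_rows,
    lambda rows: [r for r in rows if int(r["qty"]) > 0],      # Filter Array: drop invalid
    lambda rows: list({r["sku"]: r for r in rows}.values()),  # Unique Array: dedupe by sku
]
print(run_pipeline(csv_text, stages))
```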
Troubleshooting
| Symptom | Likely Cause | Resolution |
|---|---|---|
| CSV parse returns wrong columns | Delimiter mismatch or unquoted special characters | Verify Field Delimiter matches source file |
| XML element sometimes object, sometimes array | Element count varies between 1 and multiple | Add element name to Explicit Array parameter |
| Filter returns empty array | Expression syntax error or type mismatch | Add Echo node to inspect data. Check string vs number. |
| Group aggregations show NaN | Aggregating non-numeric field or null values | Filter out null values before grouping |
| [object Object] in CSV output | Nested structure not flattened | Enable Expand Array Objects or use Input Transform |
| Transform returns empty/undefined | Input not in expected format | Verify input type. JSON to CSV expects array. |
Revision History
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0 | 2025-01-01 | Craig Scott | Initial release - Complete guide covering all 7 transform nodes with configuration parameters, input/output specifications, use cases, and error handling patterns. |