Article Type: Node Reference
Audience: Developers, App Admins
Module: Data Flows / Node Designer
Applies to Versions: Platform 3.0+
Prerequisites: None - This guide assumes no prior knowledge
What are Transform Nodes?
Transform nodes convert data between formats and manipulate collections of records. They are the workhorses of data processing in Fuuz, enabling flows to accept data in one format (CSV from a legacy system), convert it to another format (JSON for processing), filter and group records, and output to yet another format (XML for an ERP system). All of this happens without writing custom code.
Why are they important?
Enterprise systems rarely speak the same language. Your ERP exports XML, your warehouse system expects CSV, your mobile app needs JSON, and your analytics platform wants aggregated summaries. Transform nodes bridge these gaps, enabling seamless data exchange without manual file manipulation or custom integration code.
What you'll learn in this guide:
| Node Type | Primary Purpose | Best For |
|---|---|---|
| JSON to CSV | Convert JSON array to comma-separated values string | Exports, legacy system integration, spreadsheet generation |
| CSV to JSON | Parse CSV string into JSON array of objects | File imports, batch data loading, external data ingestion |
| JSON to XML | Convert JSON object/array to XML document string | ERP integration, SOAP services, EDI transactions |
| XML to JSON | Parse XML document into JSON object | SAP integration, legacy SOAP APIs, XML file processing |
| Unique Array | Remove duplicate items from array based on field | Data deduplication, master data cleanup, merge operations |
| Filter Array | Select items matching condition expression | Business rule application, data selection, conditional processing |
| Group Array | Group items by field(s) with optional aggregation | Reporting, summaries, consolidation, analytics preparation |
All transform nodes support two additional transformation layers, available in Advanced Configuration: an Input Transform (a JSONata expression applied to the payload before conversion) and an Output Transform (a JSONata expression applied to the result after conversion).
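As a minimal sketch of both layers (assuming the incoming payload is the JSONata evaluation context; the "orders" wrapper field is hypothetical):

/* Input Transform: unwrap the array the node should convert
   ("orders" is a hypothetical wrapper field) */
orders

/* Output Transform: attach metadata to the converted result
   ($ here is the node's conversion result) */
{
  "generatedAt": $now(),
  "data": $
}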
The JSON to CSV node converts a JSON array of objects into a comma-separated values (CSV) string. Each object in the array becomes a row, and object properties become columns. This enables data export to spreadsheets, legacy systems, and any application expecting tabular text data.
When to use: exporting records to spreadsheets, feeding legacy systems that expect flat files, and generating tabular reports for partners.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "JSON to CSV" | Display name in flow designer |
| Field Delimiter | String | No | , (comma) | Character separating fields. Use tab (\t) for TSV, semicolon (;) for European locales, pipe (\|) for special cases. |
| Field Wrap | String | No | " (double quote) | Character used to wrap fields containing special characters. Applied automatically when field contains delimiter, newline, or wrap character. |
| End of Line | Enum | No | \n (Unix LF) | Line ending format: \n (Unix/Linux/Mac), \r\n (Windows), \r (legacy Mac) |
| Prepend Header | Boolean | No | true | Include column headers as first row. Set false for appending to existing files. |
| Keys | Array | No | All keys from first object | Explicit list of fields to include and their order. Unspecified fields are excluded. |
| Expand Array Objects | Boolean | No | false | Flatten nested objects into separate columns (e.g., address.city becomes address_city) |
| Unwind Arrays | Boolean | No | false | Create multiple rows for nested arrays. Each array item becomes a separate row with parent data repeated. |
| Input Transform | JSONata | No | null | Transform payload before CSV conversion (Advanced Configuration) |
| Output Transform | JSONata | No | null | Transform result after CSV conversion (Advanced Configuration) |
Input: JSON array of objects (each object becomes a row)
Example Input:
[
{
"orderNumber": "ORD-001",
"customer": "ACME Manufacturing",
"amount": 1500.00,
"status": "Shipped"
},
{
"orderNumber": "ORD-002",
"customer": "Beta Industries",
"amount": 2750.50,
"status": "Pending"
},
{
"orderNumber": "ORD-003",
"customer": "Gamma Corp",
"amount": 890.25,
"status": "Delivered"
}
]
Output: CSV string
orderNumber,customer,amount,status
ORD-001,ACME Manufacturing,1500,Shipped
ORD-002,Beta Industries,2750.5,Pending
ORD-003,Gamma Corp,890.25,Delivered
Example with Unwind Arrays (nested line items):
Input with nested array:
[
{
"orderNumber": "ORD-001",
"customer": "ACME Manufacturing",
"items": [
{"partNumber": "P-100", "quantity": 50},
{"partNumber": "P-200", "quantity": 25}
]
}
]
Output with Unwind Arrays = true:
orderNumber,customer,items.partNumber,items.quantity
ORD-001,ACME Manufacturing,P-100,50
ORD-001,ACME Manufacturing,P-200,25
Use Case 1: Daily Shipment Report for Logistics Partner (Basic)
A distribution company sends daily shipment manifests to their 3PL partner via SFTP. The partner's system requires CSV with specific column order and Windows line endings.
Configuration: Keys set to the partner's required column order; End of Line = \r\n (Windows); Prepend Header = true.
Use Case 2: SAP Material Master Export for German ERP (Intermediate)
A global manufacturer exports material master data to a German subsidiary running SAP. The German system expects semicolon delimiters and specific field naming.
Configuration: Field Delimiter = ; (semicolon); an Input Transform renames fields to the German system's expected names; Keys fixes the column order.
Use Case 3: Production Order Export with Line Items (Advanced)
An automotive parts manufacturer exports production orders with multiple component lines to their legacy MRP system. Each order has multiple BOM components that must become separate rows while retaining parent order information.
Configuration: Unwind Arrays = true, so each BOM component becomes its own row with the parent work order fields repeated.
Result: A work order with 15 BOM components becomes 15 CSV rows, each containing the parent work order info plus one component per row.
| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Empty output / no data | Input is not an array or is an empty array | Add a Fork node upstream to check array length. Use Input Transform to ensure array format. |
| Missing columns in output | First object missing fields present in later objects | Explicitly define Keys parameter to ensure all columns appear regardless of first object. |
| Special characters breaking import | Field contains delimiter, newline, or quote character | Field Wrap handles this automatically. Verify target system supports quoted fields. |
| Nested objects appearing as [object Object] | Complex nested structure not flattened | Enable Expand Array Objects OR use Input Transform to flatten before conversion. |
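Where the table above suggests an Input Transform to flatten nested structures, a minimal JSONata sketch might look like this (the address fields are hypothetical):

/* Map each row to a flat object before CSV conversion
   (address.city / address.state are hypothetical nested fields) */
$map($, function($row) {
  {
    "orderNumber": $row.orderNumber,
    "customer": $row.customer,
    "address_city": $row.address.city,
    "address_state": $row.address.state
  }
})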
The CSV to JSON node parses comma-separated values text into a JSON array of objects. Each row becomes an object, with column headers (or specified field names) as property keys. This is essential for importing data from spreadsheets, legacy exports, and any tabular text source.
When to use: importing files dropped by external systems, batch-loading tabular data, and ingesting spreadsheet exports.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "CSV to JSON" | Display name in flow designer |
| Field Delimiter | String | No | , (comma) | Character separating fields. Must match source file format. |
| Quote Character | String | No | " (double quote) | Character used to wrap fields containing special characters. |
| Escape Character | String | No | " (double quote) | Character used to escape quote characters within quoted fields. |
| Headers | Array or Boolean | No | true (use first row) | true = first row is headers; false = no headers (returns arrays); Array = explicit header names |
| Skip Empty Lines | Boolean | No | true | Ignore blank lines in source data. |
| Skip Lines | Integer | No | 0 | Number of lines to skip at beginning of file (for files with metadata rows before headers) |
| Dynamic Typing | Boolean | No | true | Automatically convert numbers and booleans from string representation. |
| Trim Values | Boolean | No | true | Remove leading/trailing whitespace from values. |
| Input Transform | JSONata | No | null | Transform payload before parsing (extract CSV string from larger payload) |
| Output Transform | JSONata | No | null | Transform result after parsing (restructure, rename fields, add metadata) |
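For the Input Transform described above, a minimal sketch that extracts the CSV text from a larger payload (the fileContent field name is hypothetical):

/* Pull the raw CSV string out of a wrapper object before parsing
   ("fileContent" is a hypothetical field name) */
fileContent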
Input: CSV string (typically from File Source node or payload field)
Example Input:
partNumber,description,unitPrice,warehouse,quantityOnHand
P-100,"Bearing, Ball 6205",15.50,EAST,250
P-101,"Seal, Oil Type A",8.25,EAST,500
P-102,"Gasket Set, Complete",45.00,WEST,75
Output: JSON array of objects
[
{
"partNumber": "P-100",
"description": "Bearing, Ball 6205",
"unitPrice": 15.50,
"warehouse": "EAST",
"quantityOnHand": 250
},
{
"partNumber": "P-101",
"description": "Seal, Oil Type A",
"unitPrice": 8.25,
"warehouse": "EAST",
"quantityOnHand": 500
},
{
"partNumber": "P-102",
"description": "Gasket Set, Complete",
"unitPrice": 45.00,
"warehouse": "WEST",
"quantityOnHand": 75
}
]
Use Case 1: Daily Inventory Import from Legacy WMS (Basic)
A manufacturing plant receives nightly inventory snapshots from their legacy warehouse management system via CSV file drop. The flow parses the CSV and updates Fuuz inventory records.
Configuration: default settings; Headers = true reads the first row as field names, and Dynamic Typing converts quantities to numbers.
Use Case 2: Vendor Price Update with Skip Lines (Intermediate)
A distributor receives weekly price updates from suppliers in files that include 3 metadata rows (supplier name, date, terms) before the column headers. The flow must skip these rows to properly parse pricing data.
Input File:
SUPPLIER: ACME Parts Inc.
DATE: 2024-01-15
TERMS: Net 30
SKU,Description,ListPrice,YourPrice
A100,Widget Standard,25.00,18.75
A101,Widget Deluxe,35.00,26.25
Configuration: Skip Lines = 3 to bypass the metadata rows; Headers = true so the fourth line supplies the field names.
Use Case 3: Multi-Format Import with Header Mapping (Advanced)
An automotive manufacturer receives parts data from multiple suppliers, each using different column names for the same data. One uses "PartNo", another uses "Item", and a third uses "SKU". All must map to the standardized "partNumber" field.
Configuration: Headers set to an explicit array of standardized field names, with Skip Lines = 1 to bypass each supplier's own header row (an alternative sketch follows the result below).
Result: Regardless of source file column names, output always uses standardized field names for consistent downstream processing.
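As an alternative to explicit Headers, a hedged sketch: keep Headers = true and normalize the varying column names in an Output Transform (the supplier column names below are hypothetical):

/* Coalesce each supplier's part-number column into "partNumber"
   (PartNo / Item / SKU are the hypothetical supplier variants) */
$map($, function($row) {
  {
    "partNumber": $row.PartNo ? $row.PartNo : ($row.Item ? $row.Item : $row.SKU),
    "description": $row.Description
  }
})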
| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Wrong number of columns per row | Unquoted delimiter within field value | Verify Quote Character setting matches source. Check for inconsistent quoting in source data. |
| Numbers appearing as strings | Dynamic Typing disabled or unusual number format | Enable Dynamic Typing OR use Output Transform to convert specific fields. |
| Empty or missing fields | Source data has blank values between delimiters | Expected behavior - empty values become empty strings or null. Use Filter Array to remove invalid records. |
| Encoding issues (garbled characters) | Source file uses different character encoding (Latin-1, Windows-1252) | Pre-process file to convert to UTF-8 OR use Input Transform to handle specific characters. |
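For the Output Transform fix suggested above (numbers arriving as strings), a minimal JSONata sketch, reusing the unitPrice field from the example:

/* Overwrite a specific field with a numeric conversion of itself */
$map($, function($row) {
  $merge([$row, { "unitPrice": $number($row.unitPrice) }])
})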
The JSON to XML node converts JSON objects or arrays into XML document strings. This enables integration with enterprise systems that require XML input, including SAP, Oracle, and legacy SOAP web services. The node handles element naming, attribute conversion, and proper XML structure generation.
When to use: sending data to ERP systems, calling SOAP services, and producing XML-based EDI transactions.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "JSON to XML" | Display name in flow designer |
| Root Element | String | No | "root" | Name of the root XML element wrapping the entire document. |
| Array Item Element | String | No | "item" | Element name for array items when array is converted to XML. |
| Attribute Prefix | String | No | "@" | JSON keys starting with this prefix become XML attributes instead of elements. |
| Text Node Name | String | No | "#text" | JSON key that becomes text content of parent element (mixed content). |
| CDATA Fields | Array | No | [] | Field names whose values should be wrapped in CDATA sections (for HTML/XML content). |
| Include Declaration | Boolean | No | true | Include <?xml version="1.0"?> declaration at beginning. |
| Pretty Print | Boolean | No | false | Format output with indentation for readability. Disable for production to reduce size. |
Input: JSON object or array
Example Input (with attribute prefix):
{
"order": {
"@id": "ORD-12345",
"@status": "Pending",
"customer": "ACME Manufacturing",
"items": [
{"partNumber": "P-100", "quantity": 50},
{"partNumber": "P-200", "quantity": 25}
]
}
}
Output: XML string
<?xml version="1.0" encoding="UTF-8"?>
<root>
<order id="ORD-12345" status="Pending">
<customer>ACME Manufacturing</customer>
<items>
<item>
<partNumber>P-100</partNumber>
<quantity>50</quantity>
</item>
<item>
<partNumber>P-200</partNumber>
<quantity>25</quantity>
</item>
</items>
</order>
</root>
Use Case 1: SAP IDoc Generation (Basic)
A manufacturer sends production orders to SAP via IDoc interface. The flow transforms internal order data into SAP's required XML structure with proper segment naming and attributes.
Configuration: Root Element named for the IDoc message type; an Input Transform reshapes order data into SAP's segment structure before conversion.
Use Case 2: EDI Trading Partner Integration (Intermediate)
A distributor exchanges purchase orders with retail partners using XML-based EDI. Each partner requires slightly different XML structures. Input Transforms reshape data to match partner-specific requirements before XML conversion.
Use Case 3: SOAP Web Service Call with Complex Payload (Advanced)
A logistics company integrates with a carrier's SOAP API for shipment booking. The API requires a complex nested XML structure with namespaces, attributes, and CDATA sections for special instructions.
Configuration: special-instruction fields listed in CDATA Fields; an Input Transform builds the namespaced envelope structure the carrier expects.
| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Invalid XML element names | JSON keys contain spaces or invalid XML characters | Use Input Transform to rename keys to valid XML names before conversion. |
| Special characters in content | Content contains <, >, &, or quotes | Automatically escaped by default. For HTML content, add field to CDATA Fields list. |
| Target system rejects XML | Missing required elements or wrong structure | Compare output with target system's XSD schema. Use Input Transform to ensure required structure. |
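For the key-renaming fix suggested above, a minimal JSONata Input Transform sketch (flat objects only; nested keys would need the same treatment recursively):

/* Replace spaces in keys so they are valid XML element names */
$merge($each($, function($v, $k) {
  { $replace($k, " ", "_"): $v }
}))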
The XML to JSON node parses XML documents into JSON objects. Attributes, elements, text content, and CDATA sections are converted to their JSON equivalents. This is essential for processing data received from SAP, legacy SOAP services, and XML-based integrations.
When to use: processing IDocs and other XML from SAP, parsing responses from legacy SOAP APIs, and ingesting XML files.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| Node Name | String | Yes | "XML to JSON" | Display name in flow designer |
| Attribute Prefix | String | No | "@" | Prefix added to JSON keys for XML attributes. |
| Text Node Name | String | No | "#text" | JSON key for element text content (when element has both attributes and text). |
| Merge Attributes | Boolean | No | false | Merge attributes directly into parent object without prefix (flatter structure). |
| Explicit Array | Array | No | [] | Element names that should always be arrays, even with single item. |
| Ignore Attributes | Boolean | No | false | Discard all XML attributes (simpler output but loses attribute data). |
| Trim Text | Boolean | No | true | Remove whitespace around text content. |
Input: XML string
Example Input:
<order id="ORD-12345" status="Shipped">
<customer type="business">ACME Manufacturing</customer>
<lineItem>
<partNumber>P-100</partNumber>
<quantity>50</quantity>
</lineItem>
<lineItem>
<partNumber>P-200</partNumber>
<quantity>25</quantity>
</lineItem>
</order>
Output: JSON object
{
"order": {
"@id": "ORD-12345",
"@status": "Shipped",
"customer": {
"@type": "business",
"#text": "ACME Manufacturing"
},
"lineItem": [
{
"partNumber": "P-100",
"quantity": "50"
},
{
"partNumber": "P-200",
"quantity": "25"
}
]
}
}
Use Case 1: SAP IDoc Processing (Basic)
A manufacturer receives material master updates from SAP via IDoc XML. The flow parses the IDoc, extracts relevant segments, and updates local inventory records.
Configuration: repeating IDoc segment names added to Explicit Array so single-segment documents parse the same as multi-segment ones.
Use Case 2: SOAP Response Processing (Intermediate)
A logistics company calls a carrier's SOAP API for rate quotes. The XML response contains namespaces and complex nested structures that must be parsed and simplified for internal use.
Use Case 3: EDI Document Translation (Advanced)
A retailer receives purchase orders in XML format from multiple trading partners. Each partner uses slightly different XML structures. The flow normalizes all formats into a standard JSON structure for consistent processing.
| Error Condition | Cause | Resolution Pattern |
|---|---|---|
| Sometimes array, sometimes object | XML element count varies between 1 and many | Add element name to Explicit Array list to always return array. |
| Malformed XML error | Invalid XML structure (unclosed tags, invalid characters) | Wrap in Try/Catch. Log failed XML for investigation. Return error to source system. |
| Namespace prefixes in keys | XML uses namespaces (ns:elementName) | Use Output Transform to remove namespace prefixes from keys. |
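For the namespace fix suggested above, a minimal JSONata Output Transform sketch (handles top-level keys only; nested elements would need recursion):

/* Drop everything up to and including ":" in each key */
$merge($each($, function($v, $k) {
  { ($contains($k, ":") ? $substringAfter($k, ":") : $k): $v }
}))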
Array manipulation nodes process collections of records - filtering, deduplicating, and grouping data for downstream operations. These nodes work on JSON arrays and are essential for data quality, business rule application, and report generation.
Purpose: Removes duplicate records from an array based on one or more fields. When duplicates are found, keeps either the first or last occurrence.
Configuration Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| Unique Field | JSONata / String | Required | Field or expression to determine uniqueness. Can be simple field name or complex expression combining multiple fields. |
| Keep | Enum | First | When duplicates found: First (keep earliest), Last (keep most recent) |
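For the complex-expression case mentioned above, a hedged sketch of a Unique Field expression combining two fields (the warehouse field is hypothetical here):

/* Items are duplicates only when both values match
   ("warehouse" is a hypothetical second key field) */
customerId & "|" & warehouse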
Input Example:
[
{"customerId": "C001", "name": "ACME Corp", "lastOrder": "2024-01-10"},
{"customerId": "C002", "name": "Beta Inc", "lastOrder": "2024-01-12"},
{"customerId": "C001", "name": "ACME Corporation", "lastOrder": "2024-01-15"},
{"customerId": "C003", "name": "Gamma LLC", "lastOrder": "2024-01-14"}
]
Output (Unique Field = customerId, Keep = Last):
[
{"customerId": "C002", "name": "Beta Inc", "lastOrder": "2024-01-12"},
{"customerId": "C001", "name": "ACME Corporation", "lastOrder": "2024-01-15"},
{"customerId": "C003", "name": "Gamma LLC", "lastOrder": "2024-01-14"}
]
Use Cases: deduplicating records merged from multiple sources, master data cleanup, and keeping only the most recent record per key (Keep = Last).
Purpose: Selects array items matching a condition expression. Items where the expression evaluates to true are included in output; others are excluded.
Configuration Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| Filter Expression | JSONata | Required | Expression evaluated for each item. Return true to include, false to exclude. |
Input Example:
[
{"orderId": "ORD-001", "status": "Pending", "amount": 500},
{"orderId": "ORD-002", "status": "Shipped", "amount": 1500},
{"orderId": "ORD-003", "status": "Pending", "amount": 250},
{"orderId": "ORD-004", "status": "Delivered", "amount": 3000}
]
Output (Filter Expression = status = "Pending"):
[
{"orderId": "ORD-001", "status": "Pending", "amount": 500},
{"orderId": "ORD-003", "status": "Pending", "amount": 250}
]
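Filter expressions can combine conditions. A hedged sketch against the input above, which would keep only ORD-001 (the 300 threshold is illustrative):

/* Pending orders over a value threshold (threshold is illustrative) */
status = "Pending" and amount > 300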
Use Cases: applying business rules, selecting records for conditional processing, and removing invalid rows before downstream loads.
Purpose: Groups array items by one or more fields and optionally applies aggregation functions (sum, count, average, min, max) to numeric fields within each group.
Configuration Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| Group By | Array of Strings | Required | Field name(s) to group by. Multiple fields create composite grouping. |
| Aggregations | Array of Objects | [] | Each object: {field, function, alias}. Functions: sum, count, avg, min, max |
| Include Items | Boolean | false | Include array of original items within each group (for drill-down access). |
Input Example:
[
{"warehouse": "EAST", "product": "Widget-A", "quantity": 100, "value": 500},
{"warehouse": "EAST", "product": "Widget-B", "quantity": 50, "value": 250},
{"warehouse": "WEST", "product": "Widget-A", "quantity": 75, "value": 375},
{"warehouse": "EAST", "product": "Widget-A", "quantity": 25, "value": 125}
]
Configuration: Group By = ["warehouse", "product"], Aggregations = [{field: "quantity", function: "sum", alias: "totalQty"}, {field: "value", function: "sum", alias: "totalValue"}]
Output:
[
{"warehouse": "EAST", "product": "Widget-A", "totalQty": 125, "totalValue": 625},
{"warehouse": "EAST", "product": "Widget-B", "totalQty": 50, "totalValue": 250},
{"warehouse": "WEST", "product": "Widget-A", "totalQty": 75, "totalValue": 375}
]
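The same rollup can be expressed directly in JSONata, for instance in an Output Transform elsewhere in a flow. This is a hedged sketch of equivalent logic, not the node's internal implementation:

(
  /* Group rows by warehouse + product, then flatten the groups
     back into an array of summary rows */
  $rows := $;
  $groups := $rows{
    warehouse & "|" & product: {
      "warehouse": warehouse[0],
      "product": product[0],
      "totalQty": $sum(quantity),
      "totalValue": $sum(value)
    }
  };
  $each($groups, function($g) { $g })
)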
Real-World Use Cases:
Use Case 1: Daily Production Summary (Basic)
A manufacturing plant summarizes daily production by production line. Group By = ["productionLine"], Aggregations = sum of "unitsProduced", count of records, avg of "cycleTime".
Use Case 2: Customer Order Analysis (Intermediate)
An e-commerce company analyzes orders by customer and product category. Group By = ["customerId", "category"], Include Items = true for drill-down to individual orders.
Use Case 3: Inventory Valuation by Location and Status (Advanced)
A pharmaceutical distributor values inventory by warehouse, product, and lot status (quarantine, released, expired). Multiple aggregations calculate total quantity, total value, and average days to expiry per group.
Pattern 1: CSV Import Pipeline
File Source → CSV to JSON → Filter Array (remove invalid) → Unique Array (deduplicate) → Validate → Mutate (database insert)
Pattern 2: Cross-System Data Exchange
Query (get data) → Filter Array (select relevant) → JSON to XML → HTTP Connector (send to SAP)
Pattern 3: Report Generation
Query (raw data) → Filter Array (date range) → Group Array (summarize) → JSON to CSV → System Email (attach report)
| Symptom | Likely Cause | Resolution |
|---|---|---|
| CSV parse returns wrong number of columns | Delimiter mismatch or unquoted special characters | Verify Field Delimiter matches source. Check Quote Character setting. |
| XML element sometimes object, sometimes array | Element count varies between 1 and multiple | Add element name to Explicit Array parameter. |
| Filter returns empty array unexpectedly | Expression syntax error or data type mismatch | Add Echo node before filter to inspect data. Check string vs number comparisons. |
| Group aggregations showing NaN or null | Aggregating non-numeric field or null values | Verify field contains numbers. Filter out null values before grouping. |
| Nested objects showing as [object Object] in CSV | Complex nested structure not flattened | Enable Expand Array Objects OR use Input Transform to flatten. |
| Transform returns undefined or empty | Input not in expected format (object vs array) | Verify input type matches node expectations. JSON to CSV expects array. |