Article Type: Standard / Reference
Audience: Solution Architects, Application Designers, Developers
Module: Fuuz Platform - Application Designer
Applies to Versions: 2025.12+
Template Reference: Table_Screen_Design_Template_-_History_0_0_1.json
1. Overview
Historical Data table screens provide specialized interfaces for viewing, filtering, and analyzing large volumes of completed operational records. This design standard defines a comprehensive framework for building filter-and-table screens optimized for Historical Data, which differs significantly from Master Data and Transactional Data in its immutability, extreme volume, and read-only access patterns. Historical Data represents completed operations, process outputs, time-series measurements, and archived business events that must be preserved for reporting, analysis, compliance, and auditing.
What is Historical Data?
Historical Data represents operational records that have been completed or are the result of process outputs. These records are typically immutable (no editing after completion), high-volume (potentially millions of records), and optimized for time-series analysis, trend identification, and compliance reporting. Unlike Master Data (which changes infrequently) or Transactional Data (which has active lifecycle states), Historical Data is primarily consumed for retrospective analysis rather than ongoing operational management.
Examples of Historical Data:
- Production History: Completed production runs, output quantities, yield rates, cycle times, quality metrics
- Workcenter History: Machine utilization, downtime events, changeover durations, OEE calculations
- IoT Tag Historian: Time-series sensor data, process variables, temperature logs, pressure readings
- Quality History: Inspection results, test measurements, non-conformance records, corrective action outcomes
- Maintenance History: Completed work orders, PM activities, repair records, parts consumption
- Inventory History: Stock movements, transaction logs, location transfers, adjustments
- Order Fulfillment History: Completed orders, shipment records, delivery confirmations
- Energy Consumption History: Utility usage, power consumption, cost allocations by period
Design Principles
- Read-Only First: Historical Data screens are optimized for viewing and analysis, not editing
- Filter-Dependent Loading: Never auto-load; always require date range or other filters to manage volume
- Time-Series Aware: Date/time filtering is paramount; default to recent periods (last 7 days, last month)
- Performance Optimized: Minimize transforms, calculations, and row actions; leverage aggregations and summaries
- Export-Friendly: Provide robust export capabilities for external analysis in BI tools or spreadsheets
Historical Data Characteristics
| Feature | Historical Data Configuration |
| --- | --- |
| Auto-Search (autoLoad) | ✗ Almost never - datasets too large, require date filters |
| Image Columns | ✗ Not recommended - performance impact on large datasets |
| Wiki Pages | ✓ May have - for analysis procedures, metric definitions |
| Mass Delete | ✗ Never - historical records must be preserved for compliance/audit |
| Mass Update | ✗ Never - completed records are immutable |
| Filter Complexity | Extensive - 4+ filters required (date range, location, type, status) |
| End User Exposure | ✓ Always - operators, supervisors, analysts all need historical visibility |
| Edit Pattern | View-only or minimal corrections (typos, classification errors) |
| Default Sort | Timestamp descending (most recent first) |
2. Design Framework
Key Differences from Master Data
- No Auto-Load: Historical tables must wait for filter input before querying
- Date-First Filtering: Date range filter is MANDATORY and must be first in filter panel
- Export-Centric: Export is a PRIMARY action, not secondary
- Read-Only Pattern: No Create, Edit, Delete actions on historical records
- No Subscriptions: Data doesn't change in real time; no need for live updates
- Grouping for Analysis: Enable grouping to support aggregation and analysis workflows
- Status Bar Emphasis: Always show record count to confirm reasonable result set
3. Use Cases
- Production Analysis: Review completed production runs for yield analysis, cycle time trends, and quality performance over time periods.
- Equipment Performance: Analyze machine utilization history, downtime patterns, and maintenance effectiveness across shifts, days, or months.
- Quality Trending: Track inspection results, defect rates, and process capability over time to identify systematic issues or improvements.
- Regulatory Compliance: Provide auditors with complete, unalterable records of operations, measurements, and decisions for FDA, ISO, or industry compliance.
- Root Cause Analysis: Investigate specific time periods or events by filtering historical data to understand what happened when issues occurred.
- Performance Benchmarking: Compare current operations against historical baselines or best-practice periods to identify improvement opportunities.
- Energy Management: Analyze utility consumption patterns, cost allocations, and identify opportunities for energy efficiency improvements.
- Capacity Planning: Review historical throughput, utilization, and demand patterns to support expansion or optimization decisions.
- Cost Analysis: Examine labor hours, material consumption, and overhead allocation across products, operations, or time periods.
- External Reporting: Export historical data for business intelligence tools, executive dashboards, or regulatory submissions.
4. Screen Details
Auto-Search Configuration
NEVER enable auto-search for Historical Data tables; always set autoLoad: false. The volume is too large and performance will be unacceptable.
```json
{
  "query": {
    "autoLoad": false // ALWAYS false for Historical Data
  }
}
```
Critical: Historical Data tables can contain millions of records. Auto-loading would crash browsers or timeout. Users MUST filter to reasonable result sets (thousands, not millions) before viewing data.
Filter Panel Standards
Filter Organization for Historical Data
Historical Data filters MUST be organized in this specific order with date range FIRST:
- Date Range Filter (REQUIRED): Start Date and End Date or relative period selector (Last 7 Days, Last Month, etc.)
- Location/Workcenter Filter: Where the historical event occurred
- Product/Material Filter: What was produced, consumed, or measured
- Status/Type Filter: Completion status, event classification
- Additional SelectInputs (2-3 max): Shift, operator, equipment, order number
- Switch Filters (minimal): Failed Only, Exceptions Only
- GraphQL Advanced Filters (last): For complex metric-based queries
Required Date Range Filter
Historical Data screens MUST include a prominent date range filter:
- Use DateInput with range: true for start/end date selection
- OR provide SelectInput with relative periods: Last 7 Days, Last 30 Days, Last Month, Last Quarter, Year to Date
- Default to recent period (Last 7 Days or Last 30 Days) to prevent excessive queries
- Display date range prominently with warning styling if needed
- Consider enforcing maximum date range (e.g., no more than 90 days) for very high-volume datasets
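A minimal sketch of the date range filter follows. Only the DateInput component and range: true are named by this standard; the dataPath and default-period token below are illustrative assumptions, not a defined schema.
```json
{
  "label": "Date Range",
  "range": true,               // start/end date selection, as required above
  "dataPath": "occurAtRange",  // hypothetical - map to your model's timestamp filter parameter
  "defaultValue": "last7Days"  // hypothetical token - default to a recent period to limit volume
}
```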
Filter Actions
| Action | Icon | Transformation |
| --- | --- | --- |
| Search | magnifying-glass | $components.Table1.fn.search() |
| Clear/Reset | eraser | $components.FormFilterPanel.fn.reset() + set default date range |
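A hedged sketch of binding these transformations to filter panel buttons is shown below; the button structure and onClick property name are assumptions, while the transformation expressions are exactly those listed in the table.
```json
[
  { "text": "Search", "icon": "magnifying-glass", "onClick": "$components.Table1.fn.search()" },
  { "text": "Clear", "icon": "eraser", "onClick": "$components.FormFilterPanel.fn.reset()" } // then restore the default date range
]
```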

Table Configuration
Essential Table Properties for Historical Data
```json
{
  "width": "100%",
  "height": "0px",
  "flexGrow": true,
  "selectable": "single", // Multiple selection not needed for read-only
  "showToolPanels": true,
  "showStatusBar": true, // CRITICAL - shows record count
  "enableGrouping": true, // Essential for analysis
  "enableCharts": false,
  "showColumnMenu": false,
  "enableRowDragging": false,
  "disableColumnDragging": true,
  "disableDragLeaveHidesColumns": true,
  "masterDetail": false,
  "hideTableHeader": false,
  "rowMultiSelectWithClick": false
}
```

Query Configuration
```json
{
  "query": {
    "api": "Application",
    "model": "YourHistoricalDataModel",
    "dataPath": "edges",
    "autoLoad": false, // NEVER true
    "fields": ["id"],
    "dataSubscription": { "enabled": false } // Historical data doesn't change
  }
}
```

Default Sort
Always sort by timestamp descending (most recent first):
```json
{
  "defaultSort": [
    { "field": "TableColumnTimestamp", "direction": "desc" }
  ]
}
```

Column Standards
Column Layout Order for Historical Data
Historical Data table screens follow strict left-to-right column order:
- Selection Column (optional) — 40px
- Actions Menu Column (View Details only) — 40px
- NO IMAGE COLUMNS (performance impact)
- Primary Identifier Column (Run ID, Event ID, Record Number with linkTarget to detail screen; see the sketch after this list)
- Timestamp Column (When the event occurred - prominently placed)
- Context Columns (Location, Product, Equipment, Operator)
- Metric Columns (Quantities, rates, durations, percentages)
- Status/Classification Columns (Completion status, result classification)
- NO AUDIT COLUMNS (Historical data doesn't have UpdatedAt/UpdatedBy)
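A minimal sketch of the primary identifier column referenced above; the field name and linkTarget screen are hypothetical, while the property names mirror the column configurations shown elsewhere in this standard.
```json
{
  "label": "Run ID",
  "format": "text",
  "sortable": true,
  "dataPath": "runId",                    // hypothetical field name
  "width": "140px",
  "linkTarget": "ProductionHistoryDetail" // hypothetical detail screen for the read-only record view
}
```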
Timestamp Column (Critical)
Timestamp must be prominent and sortable:
```json
{
  "label": "Timestamp",
  "format": "datetime",
  "sortable": true,
  "dataPath": "timestamp",
  "width": "160px",
  "timeZone": "setting",
  "pinned": "left" // Consider pinning for visibility during scrolling
}
```

Action Patterns
| Action | Icon | Color | Notes |
| --- | --- | --- | --- |
| Export Data | file-export | primary | PRIMARY action - export to Excel, CSV for external analysis |
| Refresh | arrows-rotate | info | Re-query with current filters |
| Wiki View | book-open-cover | info | Optional - analysis procedures, metric definitions |
| Generate Report | chart-line | success | Optional - launch pre-built analysis report |
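Because Export is the primary action, it belongs on the main toolbar rather than in a row menu. A sketch follows, assuming a hypothetical export helper; verify the actual export capability of your table component.
```json
{
  "text": "Export Data",
  "icon": { "icon": "file-export", "color": "primary" },
  "onClick": "$components.Table1.fn.export()" // hypothetical helper - must export all filtered records, not just the current page
}
```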
NO CREATE, NO DELETE, NO MASS UPDATE ACTIONS
```json
{
  "label": " ",
  "format": "menu",
  "sortable": false,
  "dataPath": "actions",
  "width": "40px",
  "menuIcon": "ellipsis-v",
  "iconSize": "small",
  "showColumnSuppressMenu": false,
  "actions": [
    { "text": "View Details", "icon": { "icon": "circle-info", "color": "info" } },
    { "text": "View Related", "icon": { "icon": "diagram-project", "color": "info" } }
    // NO EDIT, NO DELETE actions
  ]
}
```

5. Technical Details
- Eliminate Row-Level Transforms: NEVER use transforms on individual rows. All calculations must be pre-computed in data model or backend.
- Minimize Action Buttons: Row actions add significant overhead. Limit to View Details only, remove Edit/Delete entirely.
- Server-Side Aggregation: Use GraphQL aggregation queries for summaries (totals, averages, counts) rather than client-side calculation.
- Pagination: Implement server-side pagination with page sizes of 50-100 records max for very large result sets (see the sketch after this list).
- Indexed Queries: Ensure database indexes exist on timestamp, location, product, status fields used in filters.
- Column Virtualization: Load only visible columns initially; use tool panel for additional columns as needed.
- Data Archival: Move aged historical data (older than retention period) to separate archive tables or cold storage.
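A sketch of the pagination and result-cap intent referenced above; the property names are hypothetical, since this standard specifies the 50-100 page size and 25,000-record cap but not an exact configuration schema.
```json
{
  "query": {
    "paging": {
      "pageSize": 100,    // hypothetical property - server-side page size per the guidance above
      "maxResults": 25000 // hypothetical property - hard cap; direct users to Export beyond this
    }
  }
}
```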
Result Set Size Management
Target Result Set: Aim for 1,000 - 10,000 records per query. If users need more, guide them to use Export functionality rather than viewing in browser. Consider enforcing maximum result set limits (e.g., 25,000 records) with messaging: "Too many results. Please narrow your filters or use Export for large datasets."
When historical data requires computed columns (such as calculated totals, percentages, or derived metrics), use table-level dataTransform rather than column-level transforms. Table-level transforms execute once per query result set, whereas column-level transforms execute per-row per-column, causing significant performance degradation on large datasets.
Critical: When using table-level dataTransform, you MUST ensure two things:
1. Every resultant row must have an "id" property. The table component requires an id field on each row for selection, actions, and internal state management. If your transform strips or fails to preserve the id, you will receive an error: "One or more rows do not have an id property."
2. All fields used in the transform must be included in the Additional Query Fields. The transform executes after the GraphQL query returns data. If your transform references fields not included in the query, those fields will be undefined and calculations will fail.
Use the Fuuz JSONata merge operator ~>|$|{...}| to add computed fields while preserving existing row data including the id:
```json
{
  "query": {
    "api": "Application",
    "model": "ProductionLog",
    "dataPath": "edges",
    "dataTransform": {
      "transform": "[edges.node~>|$|(
        $tot := $coalesce([quantityGood, 0]) + $coalesce([quantityScrap, 0]);
        {
          \"totalQuantity\": $tot,
          \"yieldPercent\": $tot > 0 ? $string($round(quantityGood / $tot * 100, 1)) & \"%\" : \"0%\"
        }
      )|]",
      "remote": true
    },
    "autoLoad": false,
    "fields": [
      "id",
      "occurAt",
      "quantityGood",
      "quantityScrap"
    ]
  }
}
```

| Element | Purpose |
| --- | --- |
| [edges.node~>\|$\|{...}\|] | Maps over the edges.node array, merging computed fields into each existing node object (preserves id and all other fields) |
| ~>\|$\|{...}\| | JSONata transform operator - merges the object in {braces} into the current context ($) |
| $coalesce([field, 0]) | Null-safe field access - returns 0 if the field is null/undefined, preventing NaN in calculations |
| "remote": true | Executes the transform server-side for better performance on large datasets |
Column Configuration for Computed Fields
Columns that display computed values simply reference the new field via dataPath - no column-level transform needed:
```json
{
  "label": "Total Qty",
  "format": "number",
  "sortable": true,
  "dataPath": "totalQuantity", // References field created by table transform
  "description": "Computed: quantityGood + quantityScrap"
},
{
  "label": "Yield %",
  "format": "text",
  "sortable": true,
  "dataPath": "yieldPercent", // References field created by table transform
  "description": "Computed: quantityGood / totalQuantity * 100"
}
```

Duration Calculation (minutes from milliseconds):
"transform": "[edges.node~>|$|{\"durationMinutes\": $round(duration / 60000, 1)}|]"OEE Calculation:
"transform": "[edges.node~>|$|(
$oee := availability * performance * quality;
{\"oeePercent\": $string($round($oee * 100, 1)) & \"%\"}
)|]"
Full Name Concatenation:
"transform": "[edges.node~>|$|{\"operatorName\": operator.firstName & \" \" & operator.lastName}|]"
Best Practice: When possible, pre-compute calculated fields in the data model or during data ingestion via Data Flows. Reserve table transforms for display formatting or for calculations that must be dynamic based on current context. Column-level transforms can produce the same results, but they recompute every value in every column as the table renders; a table-level transform runs once, on the API response, before the table loads.
Data Archival Strategy
Historical Data tables grow indefinitely. Implement archival strategy:
- Hot Data: Last 90-180 days in primary table for frequent access
- Warm Data: 6-24 months in secondary table for occasional access
- Cold Data: Older than 2 years in archive table or exported to data warehouse
- Compliance Retention: Meet regulatory requirements (FDA 21 CFR Part 11, ISO, etc.) before archiving
- Scheduled Jobs: Use Data Flows to automatically move aged data on monthly/quarterly schedule
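A purely illustrative sketch of a monthly archival Data Flow follows; every property name below is hypothetical, since Data Flow configuration schemas are outside the scope of this standard.
```json
{
  "name": "ArchiveProductionHistory",
  "schedule": "0 0 1 * *", // hypothetical cron - run on the first of each month
  "source": {
    "model": "ProductionLog",                               // hot table
    "filter": { "occurAt": { "lt": "{{now - 180 days}}" } } // hypothetical syntax - records past the hot-data window
  },
  "destination": { "model": "ProductionLogArchive" }        // warm/cold tier model
}
```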
Naming Conventions
Use same naming conventions as Master Data tables, with emphasis on "History" suffix:
- Screen Names: ProductionHistory, WorkcenterHistory, QualityHistory
- Table Names: Table1 (maintains consistency)
- Filter Form: FormFilterPanel (maintains consistency)
- Export Action: ActionExportData (highlight export importance)
6. Resources
- Fuuz Industrial Operations Platform
- Master Data Table Screen Design Standard
- Application Designer Documentation
- Data Model Performance Optimization
- Data Archival Best Practices
- Table_Screen_Design_Template_-_History_0_0_1.json
7. Troubleshooting
- Issue: Table query times out or is very slow • Cause: Result set too large, missing database indexes, or row transforms present • Fix: Require more restrictive filters (shorter date range, specific location); verify database indexes on timestamp, location, status; remove ALL row-level transforms; implement server-side pagination
- Issue: Browser crashes or freezes when loading data • Cause: Attempting to render too many records (50,000+) • Fix: Implement result set limit (e.g., 25,000 max); show error message: "Too many results, please narrow filters"; guide users to Export for large datasets
- Issue: Users complain they can't find old historical data • Cause: Data has been archived or default date range too restrictive • Fix: Provide clear documentation on archival policy; add "Search Archive" option that queries separate archive table; consider "All Time" filter option with performance warning
- Issue: Date range filter not limiting results • Cause: Filter dataPath doesn't match query parameter or transform incorrect • Fix: Verify date filter dataPath maps to query parameters; check GraphQL query includes proper timestamp filtering; ensure date format matches API expectations (ISO 8601)
- Issue: Export doesn't include all filtered records • Cause: Export function only exports visible/paginated records • Fix: Implement proper export that queries all matching records (not just current page); use server-side export generation for very large exports; provide progress indicator for large exports
- Issue: Timestamps showing in wrong timezone • Cause: timeZone not set to "setting" • Fix: Set all datetime columns timeZone: "setting" to use tenant-configured timezone; verify timezone setting in application configuration
- Issue: Historical data appears to be missing or incomplete • Cause: Data archival, retention policy deletion, or historical records never created • Fix: Check archival logs; verify retention policies haven't expired needed data; investigate source systems/Data Flows creating historical records; check for gaps in data capture
- Issue: Cannot sort by timestamp column • Cause: Timestamp column sortable: false or field not in query • Fix: Set timestamp column sortable: true; ensure timestamp field included in GraphQL query fields; verify default sort configured on timestamp descending
- Issue: Status bar doesn't show record count • Cause: showStatusBar: false or status bar disabled • Fix: Set table showStatusBar: true; status bar is CRITICAL for historical data to show users result set size
- Issue: Aggregation totals don't match detail records • Cause: Aggregation query includes archived data or different filters than detail query • Fix: Ensure aggregation and detail queries use identical filter parameters; verify aggregation query excludes archived data if detail view does; check for rounding differences in calculations
- Issue: "One or more rows do not have an id property" error • Cause: Table dataTransform does not preserve the id field from the original row data • Fix: Use the merge operator pattern
[edges.node~>|$|{...}|] which merges computed fields into existing row objects, preserving id and all other fields; verify "id" is included in the table's Additional Query Fields - Issue: Computed columns show undefined or NaN values • Cause: Fields used in dataTransform are not included in the query • Fix: Add ALL fields referenced in the transform to the Additional Query Fields array; use $coalesce([field, 0]) for null-safe arithmetic
- Issue: Table transform causes slow rendering • Cause: Complex transform logic or transform not set to remote execution • Fix: Set "remote": true in dataTransform to execute server-side; simplify transform logic; consider pre-computing values in data model instead
8. Revision History
| Version | Date | Editor | Description |
| --- | --- | --- | --- |
| 1.0 | 2024-12-26 | Craig Scott | Initial Release - Historical Data Table Screen Design Standard |
| 1.1 | 2025-01-05 | Craig Scott | Added Table Data Transforms section with id preservation requirements and Additional Query Fields guidance |