Data Model (Schema) Design Standards

Article Type: Standard / Reference
Audience: Solution Architects, Application Designers, Developers
Module: Fuuz Platform - Data Model Designer
Applies to Versions: 2025.12+

1. Overview

Data Model (Schema) Design Standards define the mandatory requirements and best practices for creating, extending, and maintaining data models in the Fuuz Platform. These standards ensure consistency across implementations, provide clear documentation for future maintenance, and optimize system performance through proper configuration of data retention, indexing, and API methods.

Enforcement Note: Depending on the platform version, some schema design standards may be automatically enforced by Fuuz, making it impossible to violate certain standards. Newer versions include validation checks that prevent deployment of non-compliant data models.

Application Accelerators

Fuuz provides several packages called Application Accelerators, which include pre-built schemas, data flows, and screens related to specific functionality. These accelerators provide industry-proven starting points for common use cases, including manufacturing execution systems (MES), quality management, warehouse management, asset management, and more.

Accessing Application Accelerators:

  • Log into a Fuuz application tenant
  • Navigate to Fuuz Packages in the App Admin menu
  • Search the Fuuz Knowledge Base for "accelerators"
  • Multiple versions of packages are available to support different implementation approaches

Core Principles

  • Start with Standards: Always begin with an Application Accelerator package and extend as needed
  • Preserve Standards: Never delete standard schemas—others may need them in future implementations
  • Document Everything: Tables and columns require descriptions, which auto-generate the API documentation
  • Search Before Creating: Check existing schemas before adding new ones to avoid duplication
  • Consider Contributing: When extending standards, consider suggesting improvements to standard packages

2. Architecture & Data Flow

Schema Extension Pattern

The recommended approach for schema design follows a structured extension pattern:

1. Install Application Accelerator Package
2. Review Standard Schemas
3. Add Custom Fields to Existing Tables
4. Add Custom Tables for Unique Requirements
5. Configure Data Retention (DCC + TTL)
6. Document All Extensions
7. Export Standard Integration Flows

Data Model Categories

| Category | Characteristics | DCC Retention | TTL |
|----------|-----------------|---------------|-----|
| Master Data | Products, Customers, Equipment, Locations; user-modifiable, low volume | 120-365 days | Not recommended |
| Transactional Data | Work Orders, Production Records, Inventory Transactions; user-modifiable, moderate volume | 30-120 days | Based on regulatory requirements |
| IIoT/Sensor Data | Tag Values, Sensor Readings; immutable, high volume (every second) | Disabled | 6-12 months + aggregation |
| Setup Data | Status Types, Units, Categories; admin-modifiable, very low volume | 365+ days | Not recommended |

Schema Designer Interface

[Placeholder: Schema Designer screenshot showing table structure, field configuration, relationships, and configuration options]

3. Use Cases

  • Extending MES Package: Start with standard MES accelerator; add custom fields for company-specific product attributes, customer classifications, or equipment specifications without modifying standard fields
  • Custom Integration Tables: Create custom tables to stage data from external systems before pushing to standard tables; maintain integration flows that map external data to Fuuz standard structures
  • IIoT Data Historian: Design high-volume sensor data tables with disabled Data Change Capture and TTL-based retention; create aggregation tables for hourly, shift, and daily summaries with longer retention
  • Quality Management Extension: Add inspection result tables, defect classification schemas, and corrective action tracking while leveraging standard product and work order structures
  • Custom Reporting Structures: Create denormalized reporting tables optimized for dashboard queries; use Data Flows to populate from normalized transactional data
  • Multi-Site Implementation: Extend location hierarchy with site-specific attributes; maintain consistent schema across sites while accommodating local requirements through custom fields
  • Regulatory Compliance Data: Add audit trail tables with extended retention periods (365+ days) to meet FDA 21 CFR Part 11, ISO 9001, or other regulatory requirements
  • Advanced Genealogy: Create custom relationship tables tracking material consumption, lot traceability, and component genealogy beyond standard bill of materials
  • Asset Maintenance Scheduling: Extend standard equipment models with maintenance interval tracking, spare parts management, and predictive maintenance scoring
  • Package Contribution: Export integration flows pushing standard data into standard tables (removing custom fields/tables) for potential inclusion in future Application Accelerator releases

4. Screen Details

Custom Schema Standards

Starting with Application Accelerators

  • Always start with a standard Application Accelerator package appropriate for your industry and use case
  • Add fields or tables as needed to support company-specific requirements
  • Consider suggesting changes to the standard package if extensions would benefit other customers

Schema Reuse and Preservation

  • Search for existing schemas before adding new schemas to avoid duplication
  • Never delete any standard schemas—someone else may need them in the future
  • Skip screen generation for schemas not being used rather than deleting them

Package Contribution

When creating standard integration flows (Data Flows) that push standard data into standard tables:

  • Example: Data Flow pushing basic customer data from ERP into standard Customer tables
  • Export a version with any custom fields or tables removed
  • Submit cleaned export to Fuuz for potential inclusion in future Application Accelerator packages

Documentation Standards

Documentation is auto-generated for the entire API model based on table and column descriptions:

Table Documentation

Each table must have a description including:

  • Purpose and use case of the table
  • Any triggers that fire on data changes
  • Unique indexing constraints
  • Exemptions from API methods (if mutations disabled)

Column Documentation

Each column must have a description including:

  • Purpose and usage of the field
  • Any default values
  • Exemptions from API methods (if field cannot be modified via API)
  • Expected data format or validation rules

Screen Integration: Some screens rely on data model documentation for field-level help. If field-level information is provided in the data model prior to building screens, screen field-level help is automatically populated but can be overridden in the screen designer.
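
Because table and column descriptions flow straight into the generated API documentation, it helps to picture the end result. Below is a minimal sketch of how a documented table might surface as a GraphQL type; the type, fields, trigger behavior, and formats are hypothetical examples, not a standard Fuuz schema:

```graphql
"""
Stores customer master data. A trigger syncs new customers to the ERP
on insert. Unique index on customerNumber. The delete mutation is
disabled; deactivate customers via the active flag instead.
"""
type Customer {
  id: ID!

  "Unique customer number assigned by the ERP. Format: C-#####."
  customerNumber: String!

  "Legal customer name shown on screens and documents."
  customerName: String!

  "Whether the customer can be used on new orders. Defaults to true."
  active: Boolean!
}
```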

Label Field

  • Every table must have a meaningful label field
  • The label field is generally the primary user-facing value and is often used as the first data column when building table screens
  • In most cases the label field should NOT be the ID—use a name, code, description, or other human-readable identifier

Examples of Good Label Fields:

  • Product: productNumber or productName
  • Customer: customerName
  • Location: locationCode or locationName
  • Equipment: equipmentName or assetTag
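
The label field is what users see wherever the record is referenced, such as dropdowns and lookups. As a minimal sketch (assuming a generated products query; the query and field names are hypothetical), a dropdown would select the label field alongside the ID:

```graphql
# Populate a product dropdown: display productName, submit id.
query ProductOptions {
  products {
    id
    productName  # the label field: human-readable, not the raw ID
  }
}
```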

Column Naming Standards

DateTime Fields

All DateTime fields must end with "At"

  • Examples: createdAt, recordedAt, updatedAt, measuredAt, startedAt, completedAt, scheduledAt

ID Fields (Foreign Keys)

All ID fields must reference the complete object name plus "Id"

  • Wrong: strategyId
  • Correct: productStrategyId
  • Use full object name to avoid ambiguity when multiple object types use similar terms

Reference Fields (Object References)

All reference fields must use the complete object name

  • Wrong: strategy
  • Correct: productStrategy
  • Ensures GraphQL queries are unambiguous and self-documenting

Boolean Fields

All booleans must be positively named to avoid double negatives

  • Correct: active is true (record is active)
  • Wrong: inactive is false (double negative)
  • Other examples: enabled, visible, required, approved, completed

Multiple Relationships Exception

Exceptions are allowed when a table has multiple links to the same object, but each link must be well named and documented:

  • Examples: primaryProductStrategyId and secondaryProductStrategyId
  • Other examples: fromLocationId and toLocationId; createdByUserId and updatedByUserId
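
Taken together, the naming standards might look like the following in a generated GraphQL type. This is a sketch with hypothetical type and field names, not a standard Fuuz schema:

```graphql
scalar DateTime

type ProductionRecord {
  id: ID!

  # DateTime fields end with "At"
  startedAt: DateTime!
  completedAt: DateTime

  # ID fields use the complete object name plus "Id", paired with
  # an object reference that also uses the complete object name
  productStrategyId: ID!
  productStrategy: ProductStrategy!

  # Multiple links to the same object, disambiguated by prefix
  fromLocationId: ID!
  fromLocation: Location!
  toLocationId: ID!
  toLocation: Location!

  # Booleans are positively named
  approved: Boolean!
}
```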

Back References

Back references are auto-generated reverse relationships. They can be customized:

  • Can be renamed if a different name provides more meaningful context for the relationship
  • Can be removed if they do not make sense or link to too much data
  • Removal example: inventoryIds back reference on InventoryStatus table—useful only for reverse queries showing all inventories of a specific status (rare use case)
  • Must be renamed to eliminate duplicated names in parent/child relations
  • Rename example: salesOrderSalesOrderLines can become salesOrderLines

Tip: If you're unsure about removing a back reference, just leave it. You can ignore back references during UI creation. There are very few use cases where removing back references provides significant benefit.
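
A well-named back reference reads naturally in queries. A short sketch using the rename example above (the query shape and ID value are hypothetical):

```graphql
# With salesOrderSalesOrderLines renamed to salesOrderLines,
# the reverse traversal is self-documenting:
query OrderWithLines {
  salesOrder(id: "SO-1001") {
    id
    salesOrderLines {
      id
      productId
    }
  }
}
```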

Grouping and Ordering

Sort the fields in each section to convey the importance and hierarchy of the data. Recommended field order:

  1. ID field (primary key, always first)
  2. Required fields (user must provide these values)
  3. Non-required fields (optional data fields)
  4. Pairs of required ID fields followed by the object reference (e.g., productId then product)
  5. Pairs of non-required ID fields followed by the object reference
  6. All back references (at the bottom)

Note: Grouping and ordering is up to the developer. There's an auto-sort option, or you can manually group fields. Some developers like to visually group all fields by type (all strings together, floats, relations, etc.). This can make editing and finding things simpler, but the extra effort may not be worth it. Once data models are created and related, focus shifts to flows and screens where you'll use GraphQL and JSONata to interact with the models.
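
Applied to a hypothetical WorkOrder table, the recommended order from the list above might look like:

```graphql
scalar DateTime

type WorkOrder {
  id: ID!                     # 1. primary key, always first

  workOrderNumber: String!    # 2. required fields
  scheduledAt: DateTime!

  notes: String               # 3. non-required fields

  productId: ID!              # 4. required ID + reference pairs
  product: Product!

  parentWorkOrderId: ID       # 5. non-required ID + reference pairs
  parentWorkOrder: WorkOrder

  workOrderOperations: [WorkOrderOperation!]  # 6. back references last
}
```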

5. Technical Details

Data Change Capture (DCC) Retention

Overview

Data Change Capture tracks all changes to records (add, update, delete events) for audit trails, data synchronization, and regulatory compliance. The default retention is 120 days.

When to Disable DCC

Disable Data Change Capture for high-volume transactional data, particularly IIoT-type data where:

  • Sensor data stores values in the primary table at high frequency (every second)
  • Values are immutable once written
  • No need to track changes because records never change after creation
  • Examples: Tag value historians, sensor readings, machine cycle data

When to Enable Long Retention

Enable long data retention when records need historical traceability:

  • Records that users modify or are otherwise not immutable
  • Regulatory compliance requirements (FDA, ISO, etc.)
  • Recommended: Use lower retention for data that changes regularly to control storage costs

Retention Period Guidelines

Retention periods are usually customer-driven requirements. You can set the period in days per model to meet your requirements:

  • Setup Data / Master Data: 365+ days (low volume, regulatory requirements)
  • Transactional Data: 30-120 days (moderate volume, operational needs)
  • High-Volume IIoT: Disabled (use TTL instead for primary data retention)
  • Industry and regulatory requirements often govern proper retention periods

Performance Considerations

  • Increasing retention on lower-volume tables should NOT have an impact on system performance
  • If you experience performance issues and cannot reduce retention, contact Fuuz sales to upgrade your license/subscription

History Table Coordination

Critical: Tables with a corresponding history table must have their Data Change Capture retention adjusted to match the history table's retention pattern. Example: the Current Tag Values table and Tag Value History table should have coordinated DCC retention settings.

TTL (Time To Live)

Overview

TTL (Time To Live) on the primary data model controls automatic deletion of old records. There is no default TTL on data in Fuuz. TTL is separate and distinct from Data Change Capture retention—DCC tracks history of changes while TTL deletes the primary records themselves.

When to Use TTL

  • Data retention must be taken into account for each table
  • TTL indexes must be added for data that does not need to be kept for long periods of time
  • Primary use case: High-volume IIoT/sensor data

IIoT Data Pattern with TTL

IIoT data (sensor data) typically doesn't need to be kept for more than 6 months to a year. Recommended pattern:

Real-Time Capture

Primary Table: Store every value every second
↓ (TTL: 6-12 months)
Aggregation Tables: Hourly, Shift, Daily summaries
↓ (TTL: 2-5 years or longer)
Data Lake/Historian: Long-term archival (optional)

  • Capture IIoT data in real-time and store every value every second
  • After 6 months you may only need data aggregated by hour, shift, or day
  • Create aggregation patterns to match your typical reporting methods
  • Create data models that support your aggregation methods
  • Export high-volume data to a data lake or historian if you need to retain it longer
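
A sketch of what the raw and aggregate models might look like (hypothetical types; TTL and DCC are configured on the model in Fuuz, so they appear here only as comments):

```graphql
scalar DateTime

# Raw historian: one record per reading.
# DCC disabled; TTL of 6-12 months on recordedAt.
type TagValue {
  id: ID!
  tagId: ID!
  tag: Tag!
  value: Float!
  recordedAt: DateTime!
}

# Hourly rollup populated by a Data Flow.
# Longer TTL (2-5 years) on hourStartedAt.
type TagValueHourly {
  id: ID!
  tagId: ID!
  tag: Tag!
  minValue: Float!
  maxValue: Float!
  avgValue: Float!
  sampleCount: Int!
  hourStartedAt: DateTime!
}
```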

Performance Impacts

There are performance impacts with TTL indexes. Properly configured TTL prevents database bloat from high-volume data while maintaining query performance on recent data. Balance retention needs against storage and query performance.

Mutations (API Methods)

Mutation Types

Mutations are the GraphQL API methods that modify data:

  • create: Add new records
  • update: Modify existing records
  • upsert: Create the record if it doesn't exist, update it if it does
  • delete: Remove records
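
Assuming generated mutation names follow the common createX/updateX convention (the exact names and arguments Fuuz generates may differ), the four methods might look like:

```graphql
# create: add a new record
mutation { createProduct(input: { productNumber: "P-100" }) { id } }

# update: modify an existing record
mutation { updateProduct(id: "123", input: { active: false }) { id } }

# upsert: create if it doesn't exist, update if it does
mutation { upsertProduct(input: { productNumber: "P-100" }) { id } }

# delete: remove a record
mutation { deleteProduct(id: "123") { id } }
```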

Disabling Mutations

Important: Disable mutation methods with caution; consider security policies instead. It may not be beneficial to prevent everyone from updating a data model. Policies provide granular control based on user roles.

  • Prepare to explain any mutation edits during an internal review
  • If you prevent updates to an entire model, turn off the Data Change Capture option (no point tracking changes if changes can't happen)

When to Disable Mutations

You may disable mutations on tables with immutable data:

  • Tag value historians: Disable update, upsert, delete—records should never be modified after creation
  • Audit logs: Disable update, delete—maintain data integrity for compliance
  • Calculated results: Disable all mutations if data is generated by system calculations only

Impact of Disabling Mutations

Critical: Disabling a mutation disables it at the API layer—there is NO way to modify the data through any interface (screens, Data Flows, external integrations). TTL and Data Change Capture retention still apply, as they are system functions.

6. Resources

  • Fuuz Industrial Operations Platform
  • Application Accelerators Catalog
  • Data Model Designer Documentation
  • GraphQL API Reference
  • JSONata Expression Language
  • Custom Fields Documentation
  • Package Management Guide

7. Troubleshooting

  • Issue: Data model deployment blocked due to naming violations
    Cause: DateTime fields not ending with "At"; ID fields not using complete object names; boolean fields negatively named
    Fix: Review all field names against the naming standards; rename DateTime fields to end with "At" (createdAt, measuredAt); rename ID fields to include the complete object name (productStrategyId, not strategyId); rename booleans to positive form (active, not inactive)
  • Issue: Data model documentation incomplete or missing
    Cause: Table descriptions not provided; column descriptions not provided; API method exemptions not documented
    Fix: Add a meaningful description to every table explaining purpose, triggers, unique constraints, and mutation exemptions; add a description to every column explaining purpose, default values, and validation rules; document why any mutations are disabled
  • Issue: Database storage growing uncontrollably
    Cause: No TTL configured on high-volume IIoT tables; DCC retention too long on high-volume transactional data
    Fix: Add TTL indexes on IIoT sensor data tables (6-12 months); create aggregation tables with longer retention; disable DCC on immutable high-volume data; reduce DCC retention on moderate-volume transactional data; export aged data to a data lake before deletion
  • Issue: Data Change History consuming excessive storage
    Cause: DCC enabled on high-volume immutable data; retention period too long for the volume
    Fix: Disable DCC on IIoT sensor tables and other immutable high-volume data; reduce the retention period on tables with frequent changes; coordinate DCC retention with history table retention settings
  • Issue: Users cannot modify data through screens or API
    Cause: Mutations disabled on the data model; security policies blocking access
    Fix: Review mutation settings on the data model and re-enable update/upsert unless the data is truly immutable; check security policies for proper role assignments; if mutations are intentionally disabled, explain this to users and provide an alternative data modification process
  • Issue: Query performance degrading on large tables
    Cause: No TTL causing table bloat; missing indexes on frequently queried fields; aggregation queries on raw data
    Fix: Implement TTL to prevent unbounded growth; add database indexes on fields used in WHERE clauses and JOIN conditions; create aggregation tables for reporting rather than querying millions of raw records
  • Issue: GraphQL queries returning ambiguous errors
    Cause: ID field names don't include the complete object name; multiple relationships with unclear naming
    Fix: Rename ID fields to use complete object names (productStrategyId, not strategyId); rename multiple relationships with descriptive prefixes (primaryProductStrategyId, secondaryProductStrategyId); update all Data Flows and screens using the old field names
  • Issue: Duplicate schemas created accidentally
    Cause: Not searching for existing schemas before creating new ones
    Fix: Always search the Data Model Designer for existing schemas before creating new tables; review Application Accelerator package contents to understand what's already available; consolidate duplicate schemas by migrating data and updating references
  • Issue: Screen field help not displaying
    Cause: Column descriptions not added to the data model before screen generation
    Fix: Add descriptions to all columns in the data model; regenerate screens to pick up the auto-populated field help; manually add help text in the screen designer if needed
  • Issue: Label field not displaying properly in dropdowns
    Cause: Label field set to an ID field instead of a human-readable field
    Fix: Change the label field to a name, code, or description field; avoid using the ID as the label field; regenerate screens and SelectInputs to use the new label field
  • Issue: Back references creating confusing query results
    Cause: Auto-generated back reference names duplicating the parent table name
    Fix: Rename back references to remove redundancy (salesOrderLines, not salesOrderSalesOrderLines); remove back references that link to excessive data or don't provide value; document back reference naming decisions
  • Issue: Cannot find standard schema after package installation
    Cause: Screen generation skipped for unused tables; schema hidden in the designer
    Fix: Search the Data Model Designer for the schema name; standard schemas are never deleted, only hidden from the default view; enable screen generation if the schema will be used

8. Revision History


| Version | Date | Editor | Description |
|---------|------|--------|-------------|
| 1.0 | 2024-12-31 | Craig Scott | Initial Release - Data Model (Schema) Design Standards |
Related Articles

  • Data Flow Design Standards
  • Fuuz Platform Nodes
  • Data Flow Nodes Reference
  • Master Data Table Screen Design Standard
  • Setup Data Table Screen Design Standard