Core Concepts & Glossary
This article defines the fundamental concepts, entities, and terminology used throughout SuperSync. Whether you are configuring an integration, building a new platform adapter, or troubleshooting a sync, understanding these building blocks is essential before diving into any other documentation.
System Overview
SuperSync is an integration and data synchronization platform that moves records between two external systems — referred to as platforms — using a configurable pipeline of stages. A typical use case is syncing orders from an e-commerce platform (e.g., Shopify) into an ERP (e.g., NetSuite), but the system is designed to support any combination of source and destination platforms.
SuperSync does not store the source data itself permanently; it acts as a translation and routing layer. All execution history, record-level results, and logs are tracked to provide full observability into every sync that runs.
Core Entity Hierarchy
Understanding how entities nest inside each other is the fastest way to orient yourself in the system:
```
Account
├── Platform Instance(s)        ← A connected, credentialed copy of a platform
│   └── Platform                ← The platform type (Shopify, NetSuite, etc.)
│
└── Flow Mapping(s)             ← An active, configured sync between two Platform Instances
    ├── Flow                    ← The template/definition this mapping was built from
    ├── Platform Instance One   ← Source platform instance
    ├── Platform Instance Two   ← Destination platform instance
    ├── Schedule                ← Optional: when to run automatically
    ├── Flow Stage(s)           ← Ordered processing steps with configuration
    │   └── Stage               ← The reusable processing logic
    └── Process(es)             ← Each time this Flow Mapping runs
        ├── Process Record(s)   ← One entry per individual data record
        ├── Process Log(s)      ← Execution log entries
        ├── Process File(s)     ← Data files ingested during this run
        └── Stage Log(s)        ← Per-stage execution details
```
Glossary
Account
The top-level organizational unit in SuperSync. Every resource in the system — platform instances, flow mappings, schedules, and processes — belongs to an Account.
An Account typically represents a single customer or organization. It holds:
- API Key — A 32-character cryptographically secure key used to authenticate inbound API requests to SuperSync on behalf of this account.
- Environment — Whether the account operates in `Production` or `Sandbox` mode.
- Status — `Active` or `Inactive`. Inactive accounts cannot run flows.
- NetSuite Concurrency Limit — Controls how many parallel jobs can be dispatched to NetSuite at once (default: 5).
- Notification Settings — Email recipients, daily digest preferences, and whether to send alerts only on failures.
- Timezone — Used to interpret schedule times correctly.
- SSEO License — Tracks whether an SSEO (Single Sign-On / licensing) integration is active for this account.
Example: "Acme Corp" is an Account with two Flow Mappings: one syncing Shopify orders to NetSuite, and another syncing NetSuite inventory back to Shopify.
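As an illustrative sketch of how the API Key might be used to authenticate an inbound request (the `X-Api-Key` header name is an assumption, not SuperSync's documented API surface):

```python
def build_auth_headers(api_key: str) -> dict:
    """Build request headers for an inbound SuperSync API call (hypothetical)."""
    if len(api_key) != 32:
        # Account API keys are 32-character cryptographically secure strings
        raise ValueError("Account API keys must be 32 characters long")
    return {
        "X-Api-Key": api_key,   # hypothetical header name
        "Accept": "application/json",
    }

headers = build_auth_headers("a" * 32)
```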
Platform
A platform type registered in SuperSync — representing an external system that SuperSync knows how to communicate with.
Each Platform record defines:
- Name — Display name (e.g., "Shopify", "NetSuite", "Salesforce")
- Slug — URL-safe identifier used to look up the platform's adapter class (e.g., `shopify`, `netsuite`)
- Library — The fully qualified PHP class name of the platform's adapter implementation (e.g., `App\Platform\Shopify`)
- Credentials Template — A JSON schema describing what credential fields are required when setting up a Platform Instance of this type
- Logo — Stored image for display in the UI
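As a sketch of how a Credentials Template might drive validation when a Platform Instance is created (the field names below are illustrative, not SuperSync's actual schema):

```python
# Hypothetical Credentials Template for a Shopify-style platform.
CREDENTIALS_TEMPLATE = {
    "shop_domain": {"type": "string", "required": True},
    "api_key":     {"type": "string", "required": True},
    "api_secret":  {"type": "string", "required": True, "mask": True},
}

def missing_fields(template: dict, credentials: dict) -> list:
    """Return the required template fields absent from the supplied credentials."""
    return [name for name, spec in template.items()
            if spec.get("required") and name not in credentials]

gaps = missing_fields(CREDENTIALS_TEMPLATE, {"shop_domain": "acme.myshopify.com"})
```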
Platforms are system-level records and are not created per-account. They represent the available integration catalog.
See also: Building a New Platform Integration for how to implement a platform adapter.
Platform Instance
A configured, account-owned connection to a specific external system. A Platform Instance is created when an account connects their own credentials (API keys, tokens, account IDs) to a Platform type.
Key properties:
- Account — Which account owns this instance
- Platform — The platform type this instance connects to
- Credentials — Encrypted authentication details (API keys, tokens, account IDs, etc.)
- OAuth Records — If the platform uses OAuth, the active tokens are stored as linked OAuth Records
Think of a Platform as the type (e.g., Shopify) and a Platform Instance as your Shopify store (e.g., "Acme Corp's Shopify store with a specific API key").
Important: Credentials are encrypted at rest and automatically decrypted when accessed by the processing engine.
Platform Integration
A declared pairing of two platforms that SuperSync knows how to connect. A Platform Integration defines which two platform types can be used together and is the prerequisite for creating Flows.
Key properties:
- Platform One / Platform Two — The two platforms in this integration
- Name — Display name (e.g., "Shopify → NetSuite")
- Public — Whether this integration is available to all accounts or restricted
- Library — An optional custom class that overrides default integration behavior
- Flows — The flow templates available for this integration pair
A Platform Integration is a system-level configuration; it does not belong to a specific account.
Flow
A reusable template that defines a data sync pattern between two platform types. Flows are system-level definitions that describe what kind of data moves between which platforms and how the default mapping should look.
Key properties:
- Description — Human-readable name (e.g., "Shopify Orders → NetSuite Sales Orders")
- Platform Integration — The integration pair this flow belongs to
- Mapping Template — The default field mapping definition for this flow
- Scheduler Command — Optional command registered with the external scheduler service
- Flow Configurations — Default configuration values and form fields for this flow
- Special Account — If set, restricts the flow to a single account
Flows are the blueprints. Accounts do not use Flows directly — they create Flow Mappings based on a Flow.
Flow Mapping
The account-specific instance of a Flow — this is the live, configured sync that actually runs. A Flow Mapping connects two specific Platform Instances, inherits from a Flow template, and can be scheduled or triggered manually.
Key properties:
- Account — Owner account
- Flow — The flow template this mapping is based on
- Name — A human-readable label for this mapping
- Platform Instance One / Two — The source and destination platform instances
- Direction — `one_to_two` or `two_to_one`, controlling which platform is the source
- Enabled — Whether this mapping is active and can be executed
- Schedule — Optional link to a Schedule for automatic execution
- Source Type — How source records are fetched (`mapping`, `search`, etc.)
- Flow Stages — The ordered list of processing stages configured for this mapping
- Flow Configurations — Account-specific configuration values
Example: "Acme Corp's Shopify Orders → NetSuite" is a Flow Mapping. It connects Acme's Shopify Platform Instance to Acme's NetSuite Platform Instance, and runs nightly via a Schedule.
Flow Stage
A configured, ordered processing step within a specific Flow Mapping. Flow Stages are the account-specific instances of Stages — they carry the configuration values that customize how that Stage behaves for this particular mapping.
Key properties:
- Stage Slug — Identifies which Stage class to execute
- Flow Mapping — The mapping this stage belongs to
- Configurations — Encrypted key-value settings specific to this stage instance
- Flow Stage Version — The currently active version of this stage's configuration
- Flow Stage Candidate — A staged/test version of the configuration (not yet promoted to active)
Flow Stages are versioned, so configuration changes are tracked over time and can be rolled back.
Stage
A reusable, self-contained unit of processing logic in the data pipeline. Stages are the building blocks that transform, filter, validate, enrich, or route data as it moves through a Flow.
Every Stage has:
- Name — Display name shown in the UI
- Description — What this stage does
- Form — Configuration fields the user can set per Flow Stage instance
- `mutate()` method — The core logic: receives a set of Mutations (records in transit), transforms them, and returns the modified set
- Anonymous flag — If `true`, the stage is hidden from the UI (used for internal processing steps)
Stages are shared across all flows and accounts — the configuration is what makes each Flow Stage instance unique.
Example stages include: field mapping, filtering, deduplication, enrichment from a third-party lookup, and format conversion.
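A minimal sketch of the Stage contract described above. The real implementation is a PHP class; the `Mutation` shape and `FilterStage` name here are illustrative only:

```python
class Mutation:
    """A record in transit through the pipeline (simplified)."""
    def __init__(self, data: dict):
        self.data = data

class FilterStage:
    """A Stage that drops records failing a configured threshold."""
    def __init__(self, config: dict):
        # Per-instance configuration comes from the Flow Stage, not the Stage
        self.min_total = config.get("min_total", 0)

    def mutate(self, mutations: list) -> list:
        # Receive a set of Mutations, return the transformed set.
        return [m for m in mutations if m.data.get("total", 0) >= self.min_total]

stage = FilterStage({"min_total": 100})
kept = stage.mutate([Mutation({"total": 50}), Mutation({"total": 150})])
```

The same `FilterStage` class serves every account; only the `{"min_total": ...}` configuration attached to each Flow Stage instance differs.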
Mapping Template
Defines the field-level mapping structure for a Flow. A Mapping Template specifies which fields from the source platform correspond to which fields in the destination platform, as well as how those values should be transformed.
Key properties:
- Flow — The flow this template belongs to
- Name — Template name
- Record Type — The type of record this mapping applies to
- Type One / Type Two — Source and destination record type identifiers
- Mapping — The field mapping definition (stored as JSON)
- Form — UI configuration for the mapping editor
- Search Mapping — Defines how to look up existing records in the destination to avoid duplicates
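To illustrate what applying a field-level mapping means in practice, here is a sketch using a hypothetical mapping shape (SuperSync stores the real definition as JSON, but its exact structure is an assumption here):

```python
# Hypothetical mapping: source field name -> destination field name.
MAPPING = {
    "order_number":   "tranId",
    "customer_email": "email",
    "total_price":    "amount",
}

def apply_mapping(mapping: dict, source: dict) -> dict:
    """Copy mapped source fields into a destination-shaped record."""
    return {dest: source[src] for src, dest in mapping.items() if src in source}

dest = apply_mapping(MAPPING, {"order_number": "1001", "total_price": "99.00"})
```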
Flow Configuration
A single key-value configuration setting for a Flow or Flow Mapping. Flow Configurations store the values that control how a flow behaves — such as which NetSuite subsidiary to use, what date range to sync, or whether to create or update records.
Key properties:
- Key — The configuration key (e.g., `subsidiary_id`)
- Value — The stored value
- Type — Data type (string, boolean, select, etc.)
- Required — Whether this configuration must be set before the flow can run
- Mask — Whether the value should be hidden in the UI (for sensitive values)
Some Flow Configurations are defined at the Flow level (defaults), and accounts can override them at the Flow Mapping level.
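The default/override behavior can be sketched as a simple merge, with the Flow Mapping's values taking precedence (the key names below are illustrative):

```python
# Flow-level defaults, overridden per Flow Mapping.
FLOW_DEFAULTS = {"subsidiary_id": "1", "create_or_update": "create"}

def effective_config(flow_defaults: dict, mapping_overrides: dict) -> dict:
    """Merge Flow defaults with account-specific Flow Mapping overrides."""
    merged = dict(flow_defaults)
    merged.update(mapping_overrides)  # mapping-level values win
    return merged

config = effective_config(FLOW_DEFAULTS, {"subsidiary_id": "7"})
```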
Schedule
Defines when a Flow Mapping should run automatically. Schedules are managed by an external Scheduler service and registered with it when created or updated.
Key properties:
- Schedule — A cron expression defining recurrence (e.g., `0 2 * * *` = every day at 2:00 AM)
- Kind — Scheduler type: `COMMAND`, `WEBHOOK`, or `LEGACY`
- Priority — Execution priority level
- Disabled — Whether this schedule is currently paused
- Last Run — Timestamp of the most recent successful execution
- Job ID — The ID assigned by the external Scheduler service
- Flow Mappings — One or more Flow Mappings that share this schedule
Multiple Flow Mappings can share a single Schedule if they should always run together.
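To show how an expression like `0 2 * * *` is interpreted, here is a tiny cron matcher. It handles only plain numbers and `*` (real cron also supports ranges, lists, and steps), and is not how the external Scheduler service actually works:

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """True if the five-field cron expression fires at the given moment."""
    minute, hour, day, month, weekday = expr.split()
    # Cron weekday uses 0 = Sunday; Python's isoweekday() uses 7 = Sunday.
    actual = [when.minute, when.hour, when.day, when.month, when.isoweekday() % 7]
    return all(field == "*" or int(field) == value
               for field, value in zip([minute, hour, day, month, weekday], actual))

fires = cron_matches("0 2 * * *", datetime(2024, 1, 15, 2, 0))  # 2:00 AM daily
```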
Process
A single execution run of a Flow Mapping. Every time a Flow Mapping is triggered — whether by a Schedule, a manual action, a webhook, or an API call — a Process is created to track that execution from start to finish.
Key properties:
- Key — Unique identifier for this run
- Status — A human-readable message describing the current state of execution
- Account / Flow Mapping — What was executed and for whom
- Processed — `false` while running, `true` when the process has completed
- Process Records — The individual data records handled in this run
- Process Logs — Execution event logs
- Stage Logs — Per-stage execution details
- State — The State Machine snapshot for this process
A Process is the audit trail for a single sync run. If something goes wrong, the Process and its children (records, logs, stage logs) are where you investigate.
Process Record
One individual data record within a Process. If a Process syncs 50 orders, there will be 50 Process Records — one per order.
Key properties:
- Data — The raw record payload (stored as JSON)
- Status — The outcome for this record: `pending`, `success`, `error`, `skipped`, etc.
- External ID — The ID assigned to this record in the destination system after a successful sync
- Record Reference — A human-readable reference value from the source record (e.g., an order number)
- Transaction Date — The date of the originating transaction in the source system
Process Records are color-coded in the UI based on their status, making it easy to identify failures at a glance.
Process Log
A timestamped log entry attached to a Process. Process Logs capture high-level execution events during a Process run (e.g., connection established, data fetched, processing complete). They are useful for tracking the overall flow of execution without needing to inspect individual records.
Key properties:
- Message — The log message
- Section — Which part of the pipeline generated this log
- Status — Severity/type of log entry
Process File
A data file ingested during a Process. Some integrations receive data as file payloads (e.g., CSV exports, EDI files) rather than API responses. Process Files represent those files and link to the Process Records parsed from them.
Key properties:
- File Name — Original filename
- Status — Processing status of the file
- Content — Archived file content (stored in the document store)
- Process Records — Records parsed from this file
Stage Log
Detailed execution output for a single Stage within a Process. Stage Logs capture what each Stage did during a Process run — the input it received, any transformations applied, and the output it produced. They are the most granular level of Process observability.
Stage Logs can be archived to a document store for long-term retention and retrieved on demand.
State / State Machine
The runtime execution engine that orchestrates a Process. The State Machine controls how data moves through the pipeline during a Process run. It manages:
- Fetching records from the source platform
- Dispatching records through each Stage in sequence
- Handling pagination when source data spans multiple pages
- Managing batch processing
- Coordinating resync operations
- Error recovery and retry logic
The State model stores a snapshot of the State Machine's current position during a Process run, allowing execution to be paused, inspected, and resumed.
Execution Modes:
- Normal Flow — Full fetch-transform-drop pipeline
- Resync — Re-processes specific records that previously failed or need updating
- Paginated — Automatically iterates through multi-page API responses
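The paginated execution mode can be sketched as a loop that fetches one page at a time and pushes each page through the stage pipeline until the source is exhausted (the `fetch_page`/`stages`/`drop` interfaces here are assumptions, not the real State Machine API):

```python
def run_paginated(fetch_page, stages, drop):
    """Fetch -> transform -> drop, one page at a time."""
    page, results = 1, []
    while True:
        records = fetch_page(page)
        if not records:
            break                      # no more pages; the run is complete
        for stage in stages:
            records = stage(records)   # each stage transforms the batch
        results.extend(drop(records))  # write to the destination platform
        page += 1
    return results

pages = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}]}
results = run_paginated(
    fetch_page=lambda p: pages.get(p, []),
    stages=[lambda recs: [r for r in recs if r["id"] != 2]],  # a filter stage
    drop=lambda recs: [{"id": r["id"], "status": "success"} for r in recs],
)
```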
Mutation
A record in transit through the processing pipeline. When source data is fetched from a platform, each record is wrapped in a Mutation object. Stages receive a collection of Mutations, transform them, and pass the modified collection to the next stage.
Mutations carry the in-flight data payload and are the primary input/output type for Stage logic. A Stage's mutate() method receives Mutations and returns Mutations.
Raw Record
The unprocessed source data returned from a platform's fetch operation. Before records are converted into Mutations, the platform adapter returns them as Raw Records — the direct, unmodified response from the source API. The platform adapter's fetched() method is responsible for converting Raw Records into the standardized format expected by the pipeline.
Result
The outcome of attempting to write a record to the destination platform. After the pipeline processes a Mutation and the data is sent to the destination, the platform adapter's dropped() method converts the destination's API response into a Result. Results feed back into the Process to update the status of each Process Record (success, failure, etc.).
Connection
A live, configured HTTP client for communicating with an external platform's API. Connections are created by a platform adapter's connect() method and encapsulate the authentication, base URL, headers, and retry behavior needed to make requests to a specific platform instance.
Connections implement a standard request(Instruction) interface that returns a ResultCollection, keeping platform-specific HTTP details isolated from the rest of the pipeline.
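A sketch of that `request(Instruction)` contract, with a fake transport standing in for the real HTTP layer (the class shapes are illustrative; actual Connections are PHP HTTP clients):

```python
class Instruction:
    """One platform API call to perform."""
    def __init__(self, method: str, path: str, body=None):
        self.method, self.path, self.body = method, path, body

class Connection:
    """Encapsulates base URL, auth headers, and transport for one platform instance."""
    def __init__(self, base_url: str, headers: dict, transport):
        self.base_url, self.headers, self.transport = base_url, headers, transport

    def request(self, instruction: Instruction) -> list:
        # Returns a ResultCollection (modeled here as a plain list).
        url = self.base_url + instruction.path
        return self.transport(instruction.method, url, self.headers, instruction.body)

conn = Connection("https://api.example.com", {"Authorization": "Bearer t"},
                  transport=lambda m, u, h, b: [{"url": u, "ok": True}])
resp = conn.request(Instruction("GET", "/orders"))
```

Because the transport is injected, platform-specific HTTP details (auth refresh, retries) stay inside the Connection and out of the pipeline.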
OAuth Record
A stored OAuth token set for a Platform Instance. When a platform uses OAuth 2.0 for authentication, the resulting access token, refresh token, expiry, and related metadata are stored as an OAuth Record linked to the Platform Instance. SuperSync handles token refresh automatically when tokens expire.
ArbScript
SuperSync's built-in scripting engine for custom data transformation logic. ArbScript allows advanced users to write custom transformation scripts that run as part of a Stage. It is used when standard field mapping is insufficient and more complex conditional logic, loops, or data manipulation is needed.
ArbScript is processed by an external microservice (`ARBSCRIPT_SERVICE_URI` in environment configuration).
ArbScript vs. JSONata: Use JSONata for declarative field-level transformations. Use ArbScript when you need procedural logic, conditionals, or iterative operations that JSONata cannot express.
JSONata
A JSON query and transformation language used within Mapping Templates to define how individual fields are transformed from source to destination format. JSONata expressions can extract values, apply functions, perform arithmetic, and conditionally map fields.
JSONata evaluation is handled by an external microservice (`JSONATA_SERVICE_URI` in environment configuration).
Example: The expression `$uppercase(firstName) & ' ' & lastName` uppercases the first name and concatenates it with the last name during mapping.
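For readers unfamiliar with JSONata, the expression above has the same effect as this Python equivalent (shown purely to illustrate the transformation; SuperSync evaluates the real expression in the JSONata service):

```python
def full_name(record: dict) -> str:
    # Equivalent of: $uppercase(firstName) & ' ' & lastName
    return record["firstName"].upper() + " " + record["lastName"]

name = full_name({"firstName": "Jane", "lastName": "Smith"})
```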
Context
The execution environment in which a Process runs. Context determines how the State Machine behaves — specifically around splitting records into batches, WebSocket availability for real-time UI updates, and access to experimental features.
| Context | Description |
|---|---|
| `CLI` | Executed from the command line (e.g., artisan commands) |
| `JOB` | Executed as a background queue job (most scheduled runs) |
| `SETUP` | Triggered from the UI during integration setup or testing |
| `RESYNC` | Triggered from the UI to re-process specific failed records |
Resync
The operation of re-processing records that previously failed or need to be re-sent. When a Process Record fails to sync (e.g., due to a network error or validation issue in the destination), a Resync can be triggered to retry that specific record without re-running the entire flow. Resyncs run in the RESYNC context and target specific records by ID or reference.
Data Flow: End-to-End
The following describes what happens when a Flow Mapping is triggered:
1. TRIGGER
A Flow Mapping is triggered by a Schedule, manual UI action, webhook, or API call. A new Process is created to track this execution.
2. CONNECT
The State Machine calls the source platform adapter's connect() method to establish an authenticated Connection to the source Platform Instance.
3. FETCH (prep → fetch → fetched)
The source platform adapter's prep() method builds the API request. The Connection executes the request and returns Raw Records. The adapter's fetched() method converts Raw Records into Mutations.
4. STAGE PIPELINE
Each Mutation passes through the ordered Flow Stages in sequence. Each Stage's mutate() method transforms the data. Stages may filter, enrich, reformat, split, or skip records. Stage Logs capture input/output for each stage.
5. CONNECT (destination)
The State Machine establishes a Connection to the destination Platform Instance.
6. DROP (prep → drop → dropped)
The destination platform adapter's prep() method prepares the write request. Each Mutation (or batch of Mutations) is sent to the destination API. The adapter's dropped() method converts the API response into Results.
7. RECORD RESULTS
Each Result updates the corresponding Process Record (success, error, skipped, etc.). External IDs from the destination system are stored on the Process Record.
8. PAGINATION
If the source platform has more pages of data, the State Machine fetches the next page and repeats steps 3–7.
9. COMPLETE
Once all records are processed, the Process is marked as complete. The Schedule's `last_run` timestamp is updated if applicable. Notification emails are sent if configured on the Account.
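The nine steps above can be condensed into one sketch. The adapter hooks (`prep`/`fetch`/`fetched`/`drop`/`dropped`) are modeled as plain callables; their real signatures in SuperSync's PHP codebase may differ:

```python
def run_process(source, stages, destination):
    """One Process: fetch from source, run stages, drop to destination."""
    process = {"records": [], "processed": False}
    raw = source["fetched"](source["fetch"](source["prep"]()))  # steps 2-3
    for stage in stages:                                        # step 4
        raw = stage(raw)
    for mutation in raw:                                        # steps 5-7
        result = destination["dropped"](destination["drop"](mutation))
        process["records"].append(result)                       # record Results
    process["processed"] = True                                 # step 9
    return process

source = {
    "prep": lambda: {"endpoint": "/orders"},       # build the API request
    "fetch": lambda req: [{"id": "A1"}],           # Raw Records from the API
    "fetched": lambda raw: raw,                    # Raw Records -> Mutations
}
destination = {
    "drop": lambda m: {"written": m["id"]},        # write to destination API
    "dropped": lambda resp: {"status": "success", "external_id": resp["written"]},
}
process = run_process(source, stages=[], destination=destination)
```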
Key Relationships at a Glance
| Entity | Belongs To | Has Many |
|---|---|---|
| Account | — | Platform Instances, Flow Mappings, Processes, Schedules |
| Platform Instance | Account, Platform | OAuth Records, Flow Mappings (as source or destination) |
| Platform | — | Platform Instances, Platform Integrations |
| Platform Integration | Platform (×2) | Flows |
| Flow | Platform Integration | Flow Mappings, Flow Configurations |
| Flow Mapping | Account, Flow | Processes, Flow Stages, Flow Configurations |
| Flow Stage | Flow Mapping | Flow Stage Versions |
| Schedule | Account | Flow Mappings |
| Process | Account, Flow Mapping | Process Records, Process Logs, Process Files, Stage Logs |
| Process Record | Process | — |
| Mapping Template | Flow | Flow Mappings |