ADR-032: Storage topology — single Supabase project, three application schemas, forward-only migrations
Status: Accepted (2026-04-20)
Context
ADR-001 (and its refinement in ADR-031) commits the codebase to three logical bounded contexts (spectral.core, spectral.worlds, spectral.platform). ADR-002 commits to Supabase as the platform (Postgres + Auth + Storage + Realtime + Edge Functions). Open questions going into TA-1: how the contexts map onto Supabase projects and Postgres schemas; the migration discipline; the rollback posture; and the surface of the per-context schema.py query helper.
Three coupled questions surfaced during disposition that exceed the literal “storage topology” framing but cannot be cleanly separated: the storage topology depends on which tenancy mechanism is primary; the tenancy mechanism depends on the frontend data-access pattern. TA-1 closes with three linked ADRs (this one, ADR-033, ADR-034). This ADR captures the storage-topology calls; ADR-033 and ADR-034 capture the tenancy and frontend-access calls that fall out of D7 here.
The naming-coherence pass in TA-3 (SPEC-306 D11) renamed the third context from scanning to platform and aligned the third Postgres schema to match. This ADR uses the post-rename names; no migrations had been authored under the old names.
Decision
D1 — Single Supabase project, single Postgres database
Supabase Auth binds 1:1 to a database; this is the only topology that preserves ADR-002’s auth and local-dev story. Co-failure of all three contexts on a single DB outage is acknowledged but not a concern: the contexts are not designed to be independently functional in 0.3.0.
D2 — Three application schemas inside that database, one per context
- core — contract tables shared across contexts (event outbox per TA-5, deployments registry per TA-19, retention registry, embedding profile, user mirror, shared lookups that must live in-DB)
- worlds — worlds context tables (rules, rule candidates, world models, eval corpus, world cards, provenance, World Agent memory per TA-12)
- platform — platform context tables (scans, change sets, evaluation results, scan traces, failure clusters, Spectral Agent memory, Operations Agent memory per TA-13)
public is reserved for Supabase-managed content. No app tables ever land in public; tools/quality/check_migration_naming.py enforces this. The auth, storage, realtime, and extensions schemas are Supabase-owned per the framework-owned-schemas allowlist (TA-14 D7).
D3 — Per-context database roles with scoped search_path
Roles spectral_core_app, spectral_worlds_app, spectral_platform_app (renamed in TA-3 D11) with GRANT scoped to their owning schema plus SELECT on core. Each app role’s search_path defaults to its own schema plus core. A table reference from the wrong role into another context fails at the database layer — belt-and-suspenders against any Python import between contexts that the architecture validator already forbids.
The cross-context SELECT defaults documented here are subsequently constrained by ADR-063 — no SQL grants ship at any layer for application-table reads between contexts. The core-schema SELECT remains, since core is shared substrate, not an application surface between contexts.
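The D3 role layout can be summarized as the DDL a bootstrap migration would emit. A minimal sketch, assuming schema-level GRANT granularity (the ADR does not specify table-level vs schema-level grants, so that part is an assumption); role and schema names are from this ADR:

```python
# Sketch of the D3 per-context role/grant layout. Role names and the
# search_path defaults come from this ADR; the GRANT granularity is assumed.
CONTEXTS = {
    "core": "spectral_core_app",
    "worlds": "spectral_worlds_app",
    "platform": "spectral_platform_app",
}

def role_ddl(schema: str) -> list[str]:
    """Build the grant/default statements for one context's app role."""
    role = CONTEXTS[schema]
    return [
        # Full access to the role's own schema only.
        f"GRANT USAGE ON SCHEMA {schema} TO {role}",
        f"GRANT ALL ON ALL TABLES IN SCHEMA {schema} TO {role}",
        # Every app role may read the shared core substrate (D3; preserved by ADR-063).
        f"GRANT USAGE ON SCHEMA core TO {role}",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA core TO {role}",
        # Default search_path: own schema first, then core.
        f"ALTER ROLE {role} SET search_path = {schema}, core",
    ]
```

With this layout, a query from spectral_worlds_app that names a platform table fails at GRANT resolution, which is the database-layer backstop D3 describes.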
D4 — Forward-only migrations as default; optional first-class down scripts
Single linear stream in supabase/migrations/, named YYYYMMDDHHMMSS_<schema>_<verb>_<object>.sql. Each migration targets exactly one schema; cross-schema migrations are permitted only for core additions and trigger ADR-065 admission review.
A companion YYYYMMDDHHMMSS_<schema>_<verb>_<object>.down.sql is optional but first-class when authored — tested, reviewed, and shipped alongside the up migration. Not dev-only sugar. Incident responders choose PITR vs down-script per incident: PITR shines for data-corrupting incidents (bad UPDATE, column drop with live data); down scripts shine for schema-regret incidents where PITR would discard unrelated concurrent writes.
Destructive changes (drop column, drop table) ship as two-phase migrations regardless: deprecate (rename or add replacement) in release N, remove in release N+1. Pre-deploy staging-apply gate required in CI per TA-26.
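The D4 filename convention lends itself to a simple pattern check. A sketch of the rule as a regex — the regex and function here are illustrative assumptions, not the internals of the real tools/quality/check_migration_naming.py:

```python
# Illustrative check for the D4 migration naming convention:
# YYYYMMDDHHMMSS_<schema>_<verb>_<object>.sql, with an optional
# first-class .down.sql companion. The regex is an assumption.
import re

ALLOWED_SCHEMAS = {"core", "worlds", "platform"}  # D2; public is forbidden

MIGRATION_RE = re.compile(
    r"^(?P<ts>\d{14})"            # 14-digit timestamp prefix
    r"_(?P<schema>[a-z]+)"        # exactly one target schema
    r"_(?P<verb>[a-z]+)"
    r"_(?P<object>[a-z0-9_]+)"
    r"(?P<down>\.down)?\.sql$"    # optional down-script companion
)

def check_name(filename: str) -> bool:
    """Return True when a migration filename matches the D4 convention."""
    m = MIGRATION_RE.match(filename)
    return bool(m) and m.group("schema") in ALLOWED_SCHEMAS
```

Note that a filename targeting public parses but fails the allowlist, which is how the no-app-tables-in-public rule from D2 gets enforced at the same checkpoint.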
D5 — Schema-name constants land per-context, not centrally
Each context gets its own <context>/infrastructure/persistence/schema.py with a single SCHEMA: Final = "<context>" constant when that context’s first repo lands. spectral.core gains a SCHEMA_CORE: Final = "core" constant when the first core-schema table lands (TA-5 event outbox is the first; the constant landed alongside SPEC-308 / SPEC-319). No multi-schema registry lives in core — schema names are ownership boundaries, not contracts between contexts.
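The D5 shape is small enough to show in full. A minimal sketch of one context’s schema.py plus a hypothetical helper illustrating how the constant gets used in query composition (the helper is not specified by this ADR):

```python
# spectral/worlds/infrastructure/persistence/schema.py — sketch of the D5
# per-context constant. The qualified() helper below is hypothetical, shown
# only to illustrate how repos inside the context consume the constant.
from typing import Final

SCHEMA: Final = "worlds"

def qualified(table: str) -> str:
    """Schema-qualify a table name for SQL composed inside this context."""
    return f"{SCHEMA}.{table}"
```

Because each context owns its own constant, there is no shared registry a context could use to name another context’s schema, which keeps the ownership-boundary intent of D5 visible in the code.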
Alternatives considered
Single public schema, single DB (status quo of the placeholder migration). Rejected: loses context-boundary reinforcement at DB layer; joins between contexts become visually indistinguishable from intra-context joins.
Three separate Supabase projects. Rejected: breaks Supabase Auth’s 1:1 DB binding; breaks Postgres LISTEN/NOTIFY (TA-5’s leading candidate) across contexts; triples ops surface area at solo-builder cadence.
Three databases inside one Postgres cluster. Rejected: requires postgres_fdw or logical replication to recover event flow between contexts; loses shared RLS predicate reference.
Reversible up/down migration discipline as standing posture. Rejected as the default. Forward-only is the prod baseline; down scripts are a first-class option per migration, not a blanket mandate.
Consequences
- Architecture validator gains a follow-on check: forbid schema-qualified table refs across context boundaries (e.g., platform.* table refs inside spectral.worlds). Cheap addition; landed alongside TA-5 substrate work. Supplemented by ADR-063’s inter-context SQL access rule.
- TA-5 event substrate is unblocked: Postgres LISTEN/NOTIFY stays viable (single DB); event outbox lives in core.
- TA-3 connection pooling inherits a constraint: the checkout hook must set both search_path (per-context) and the app.workspace_id session variable (for the RLS backstop in ADR-033).
- Per-context role names settled: spectral_core_app / spectral_worlds_app / spectral_platform_app (the rename from scanning to platform happened in TA-3 D11).
- Future “worlds needs its own region or DB” is a cleaner lift from a schema boundary than from a public-namespace monolith.
- tools/quality/check_migration_naming.py landed at commit 465ba85 and enforces D2 + D4 from disposition close. The FRAMEWORK_OWNED_SCHEMAS allowlist (TA-14 D7) was added later for Supabase-owned schemas (auth, storage, realtime, langgraph).
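The pooling constraint above reduces to two session-setup statements per checkout. A sketch of what the TA-3 hook must emit, with driver wiring omitted so the statements stay visible; the function name is an assumption, and real code should prefer a parameterized set_config over string interpolation:

```python
# Session setup the TA-3 checkout hook must perform on every connection:
# the D3 search_path default plus the app.workspace_id session variable
# that the ADR-033 RLS backstop predicates read. Hypothetical helper;
# SQLAlchemy/psycopg event wiring is omitted.

def checkout_statements(context_schema: str, workspace_id: str) -> list[str]:
    """Build the SET statements to run when a connection is checked out."""
    return [
        # Own schema first, then the shared core substrate (D3).
        f"SET search_path TO {context_schema}, core",
        # Custom GUC consumed by RLS policies (ADR-033). In production,
        # use SELECT set_config('app.workspace_id', %s, false) with a
        # bound parameter instead of interpolating the value.
        f"SET app.workspace_id = '{workspace_id}'",
    ]
```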
References
- ADR-001 — three-context topology
- ADR-002 — retired; original Supabase platform decision distilled into the ADR-046 addendum
- ADR-065 — spectral.core admission discipline (cross-schema migration trigger)
- ADR-031 — single-library package; context-as-code-boundary
- ADR-033 — tenancy enforcement layering
- ADR-034 — frontend data access via API proxy
- ADR-041 — TA-3 connection pooling (D11 schema/role naming)
- ADR-044 — TA-5 event substrate (outbox in core)
- ADR-063 — no SQL grants between contexts for application data
- TA-1 disposition — SPEC-304 comment 5c9c25f0
- TA-1 verification — SPEC-304 comment 1428127b
- TA-1 amendment — SPEC-304 comment f5f94a83
- tools/quality/check_migration_naming.py — lint (commit 465ba85)
- supabase/migrations/ — migration tree