
Your First Feature

This walkthrough describes a feature built end-to-end following the vertical slice pattern used throughout Spectral. It touches every layer — domain, application, infrastructure, API — wired together through the composition root.

The examples reference the changesets feature inside spectral.platform as the worked example. Adapt the patterns to your own context.


Each context — spectral.worlds or spectral.platform — spans three Clean Architecture layers inside src/spectral/<context>/ and surfaces through a router in apps/api/src/spectral_api/routers/:

src/spectral/<context>/
├── domain/
│   ├── {feature}/
│   │   ├── models.py        # Frozen Pydantic entities + pure domain functions
│   │   └── exceptions.py    # Feature-specific error types
│   └── ...
├── application/
│   ├── {feature}/
│   │   ├── repositories.py  # Repository Protocol definitions
│   │   └── lifecycle.py     # Application services (orchestration)
│   └── ...
├── infrastructure/
│   └── {feature}/
│       └── repositories.py  # Concrete repo implementations (SQL)
└── contracts/               # Producer-owned public surface (per ADR-065)
    ├── events/              # Typed event payloads this context publishes
    └── protocols/           # Callee-owned OHS Protocols this context publishes

apps/api/src/spectral_api/
├── routers/{feature}.py     # HTTP endpoints
├── dependencies.py          # Composition root — wires infra → app
└── models/                  # Request/response Pydantic models

Add your entity to domain/{context}/models.py. Domain models are frozen Pydantic models — immutable value objects with no I/O and no framework dependencies.

Here is how the ChangeSet entity is defined in src/spectral/platform/domain/changesets/models.py:

from __future__ import annotations

import uuid
from datetime import UTC, datetime
from typing import Literal

from pydantic import BaseModel, ConfigDict, Field


class ChangeSet(BaseModel):
    """Unit of change management — a versioned workspace-level snapshot."""

    model_config = ConfigDict(frozen=True)

    id: uuid.UUID = Field(default_factory=uuid.uuid4)
    workspace_id: uuid.UUID
    version: int = 1
    status: Literal["proposed", "accepted", "superseded", "validated", "rejected"] = "proposed"
    created_at: datetime = Field(default_factory=lambda: datetime.now(UTC))
    updated_at: datetime = Field(default_factory=lambda: datetime.now(UTC))

Conventions to follow:

  • model_config = ConfigDict(frozen=True) — all domain models are immutable.
  • Use Field(default_factory=...) for mutable defaults and auto-generated IDs.
  • State transitions are pure functions that return a new instance via model_copy(update={...}).
  • Domain functions live in the same models.py file alongside the entities they operate on.

Module and package conventions:

  • No underscore-prefixed module files — use helpers.py, not _helpers.py. The underscore prefix is reserved for private names within a module, not for module filenames.
  • Public API is defined by __init__.py re-exports using the self-alias pattern:
    # domain/{context}/__init__.py
    from .models import ChangeSet as ChangeSet
    from .models import transition_status as transition_status
    The as Name form signals to type checkers and readers that this is an intentional public re-export. Do not use __all__.
  • Consumers should import from the package (spectral.platform.domain.changesets) rather than reaching into internal modules (spectral.platform.domain.changesets.models) when possible.

If your context needs specific error types, add them to domain/{context}/exceptions.py, inheriting from the shared hierarchy in spectral.core.errors:

# domain/{context}/exceptions.py
from spectral.core.errors import InvalidTransitionError


class ChangeSetTransitionError(InvalidTransitionError):
    def __init__(self, from_state: str, to_state: str, valid_targets: list[str] | None = None) -> None:
        super().__init__("ChangeSet", from_state, to_state, valid_targets)

See Domain Model for the full entity reference.


Define the data-access contract in application/{context}/repositories.py. Protocols live in the application layer (not domain) because they describe what the use cases need, not what the domain is.

From src/spectral/platform/application/changesets/repositories.py:

from __future__ import annotations

from typing import TYPE_CHECKING, Protocol, runtime_checkable

if TYPE_CHECKING:
    import uuid

    from spectral.platform.domain.changesets.models import ChangeSet


@runtime_checkable
class ChangeSetRepo(Protocol):
    """Protocol for Change Set data access."""

    def create(self, changeset: ChangeSet) -> ChangeSet: ...

    def get_by_id(self, changeset_id: uuid.UUID) -> ChangeSet | None: ...

    def list_by_workspace(
        self,
        workspace_id: uuid.UUID,
        *,
        status: str | None = None,
        limit: int = 50,
    ) -> list[ChangeSet]: ...

    def update_status(self, changeset_id: uuid.UUID, new_status: str) -> None: ...

Key patterns:

  • @runtime_checkable — allows isinstance() checks if needed.
  • TYPE_CHECKING guard — domain imports only happen at type-check time, keeping the module lightweight at runtime and satisfying the architecture validator.
  • Method signatures use domain types, not raw dicts or ORM objects.

Create or extend a service in application/{context}/. Services orchestrate domain logic and repository calls. They receive repositories through constructor injection and return Result[T] for expected failures.

From src/spectral/platform/application/changesets/lifecycle.py:

import uuid

from spectral.core.errors import NotFoundError
from spectral.core.result import Failure, Result, Success
from spectral.platform.domain.changesets.exceptions import ChangeSetTransitionError
from spectral.platform.domain.changesets.models import ChangeSet, transition_status


class ChangeSetLifecycleService:
    """Application service for Change Set lifecycle operations."""

    def __init__(self, changeset_repo, agent_config_repo, explainability_repo):
        self._cs_repo = changeset_repo
        self._cfg_repo = agent_config_repo
        self._ex_repo = explainability_repo

    def accept_changeset(
        self, changeset_id: uuid.UUID, workspace_id: uuid.UUID | None = None,
    ) -> Result[ChangeSet]:
        changeset = self._cs_repo.get_by_id(changeset_id, workspace_id=workspace_id)
        if not changeset:
            return Failure(NotFoundError("ChangeSet", changeset_id))
        try:
            accepted = transition_status(changeset, "accepted")
        except ChangeSetTransitionError as e:
            return Failure(e)
        self._cs_repo.update_status(changeset_id, "accepted", workspace_id=changeset.workspace_id)
        return Success(accepted)

The Result pattern:

  • Success(value) for happy paths.
  • Failure(DomainError) for expected business failures (not found, invalid transition, etc.).
  • Programming errors (invariant violations) still raise exceptions.
  • The API layer pattern-matches on the result to produce the right HTTP response.
# The Result type is defined in spectral.core.result
type Result[T] = Success[T] | Failure

Implement the protocol in infrastructure/{context}/repositories.py. This is where SQL, external API calls, and other I/O live.

From src/spectral/platform/infrastructure/changesets/repositories.py:

import uuid

from spectral.platform.domain.changesets.models import ChangeSet


class ChangeSetRepository:
    """Data access for Change Set lifecycle operations."""

    def __init__(self, conn):
        self._conn = conn

    def get_by_id(self, changeset_id: uuid.UUID, workspace_id: uuid.UUID | None = None) -> ChangeSet | None:
        with self._conn.cursor() as cur:
            cur.execute(
                "SELECT id, workspace_id, version, baseline_id, "
                "evaluation_framework_snapshot_id, sample_set_id, status, "
                "created_at, updated_at "
                "FROM public.change_sets WHERE id = %s AND workspace_id = %s",
                (str(changeset_id), str(workspace_id)),
            )
            row = cur.fetchone()
            if not row:
                return None
            return self._row_to_changeset(row)

    def _row_to_changeset(self, row) -> ChangeSet:
        return ChangeSet(
            id=uuid.UUID(str(row[0])),
            workspace_id=uuid.UUID(str(row[1])),
            version=row[2],
            baseline_id=uuid.UUID(str(row[3])) if row[3] else None,
            evaluation_framework_snapshot_id=uuid.UUID(str(row[4])),
            sample_set_id=uuid.UUID(str(row[5])),
            status=row[6],
            created_at=row[7],
            updated_at=row[8],
        )

Conventions:

  • Constructor takes a raw conn (psycopg2 connection) — no ORM.
  • Private _row_to_{entity} mapper converts database rows to domain models.
  • All queries include workspace_id for tenant isolation.
  • The class does not declare that it implements the protocol — Python structural typing handles this automatically. As long as the methods match, it satisfies the protocol.

Add endpoints in apps/api/src/spectral_api/routers/{context}.py. Routers are thin — they parse HTTP, call the service, and format the response.

From apps/api/src/spectral_api/routers/changesets.py:

import uuid

from fastapi import APIRouter, Depends, HTTPException

from spectral.core.result import Failure, Success
from spectral.platform.application.changesets.lifecycle import ChangeSetLifecycleService
from spectral_api.dependencies import get_changeset_service, get_db_connection
from spectral_api.middleware.auth_v2 import AuthContext, require_workspace_match
from spectral_api.problem_details import problem_details_response

router = APIRouter(tags=["changesets"])


# ChangeSetDetailResponse is a response model defined in the router file
# or in spectral_api/models/ (import not shown here).
@router.get(
    "/api/workspaces/{workspace_id}/changesets/{changeset_id}",
    response_model=ChangeSetDetailResponse,
)
def get_changeset(
    workspace_id: str,
    changeset_id: str,
    _auth: AuthContext = Depends(require_workspace_match("read:workspace")),
    svc: ChangeSetLifecycleService = Depends(get_changeset_service),
):
    match svc.get_changeset_detail(uuid.UUID(changeset_id)):
        case Failure(error):
            return problem_details_response(error)
        case Success(detail):
            pass
    # ... build and return the response model

Patterns to follow:

  • Depends(require_workspace_match("scope")) for RBAC on every route.
  • Depends(get_changeset_service) injects the fully-wired service from the composition root.
  • match/case on the Result: Failure maps to problem_details_response() (RFC 9457); Success continues on to build the response.
  • Response models are separate Pydantic classes defined in the router file or in spectral_api/models/.

Wire your new service in apps/api/src/spectral_api/dependencies.py. This is the only file that imports from both spectral.<context>.application and spectral.<context>.infrastructure.

dependencies.py
def get_changeset_service(db=Depends(get_db_connection)):
    """Provide a fully-wired ChangeSetLifecycleService."""
    from spectral.platform.application.changesets.lifecycle import ChangeSetLifecycleService
    from spectral.platform.infrastructure.changesets.repositories import (
        AgentConfigRepository,
        ChangeSetRepository,
        ExplainabilityRepository,
    )

    return ChangeSetLifecycleService(
        changeset_repo=ChangeSetRepository(db),
        agent_config_repo=AgentConfigRepository(db),
        explainability_repo=ExplainabilityRepository(db),
    )

Why lazy imports inside the function? The composition root is the boundary — by importing inside the function body, the module-level import graph stays clean, and the architecture validator sees that only dependencies.py bridges the application/infrastructure gap.

Then register your router in apps/api/src/spectral_api/main.py:

from spectral_api.routers import changesets
app.include_router(changesets.router)

Spectral uses layered test markers. For a new feature, write at minimum:

Test pure domain functions with no mocks, no I/O. These run in milliseconds.

tests/platform/domain/changesets/test_models.py
import pytest

from spectral.platform.domain.changesets.exceptions import ChangeSetTransitionError
from spectral.platform.domain.changesets.models import ChangeSet, transition_status


class TestTransitionStatus:
    def test_proposed_to_accepted(self, changeset_factory):
        cs = changeset_factory(status="proposed")
        result = transition_status(cs, "accepted")
        assert result.status == "accepted"

    def test_invalid_transition_raises(self, changeset_factory):
        cs = changeset_factory(status="validated")
        with pytest.raises(ChangeSetTransitionError):
            transition_status(cs, "accepted")

Integration test — paths between contexts


If your feature touches more than one context, integration tests are non-negotiable per AGENTS.md: exercise the full path (producer → substrate → consumer, or caller → bridge → callee) against real infrastructure. Isolated unit tests do not satisfy this acceptance criterion; each epic's Definition of Done lists the integration test as a load-bearing requirement for work that spans contexts.

The two flow shapes — notification (events + ACL) and call (callee-owned OHS Protocol + bridge tool in apps/*) — are detailed in Contract Surfaces. For the bilateral contract test pattern that pins event drift between producer and consumer, see Testing — Bilateral contract tests.

If your feature is single-context and does not cross any of these seams, contract and unit tests are sufficient.

Test the HTTP boundary with a real (or test) database. These verify auth, status codes, and response shapes.

apps/api/tests/contract/routers/test_changesets.py
import pytest
@pytest.mark.contract
class TestListChangesets:
def test_requires_auth(self, client):
resp = client.get("/api/workspaces/123/changesets")
assert resp.status_code == 401
def test_returns_changeset_list(self, authenticated_client, workspace_id):
resp = authenticated_client.get(f"/api/workspaces/{workspace_id}/changesets")
assert resp.status_code == 200
assert "changesets" in resp.json()

Before pushing, run the full quality gate:

# Individual checks
uv run ruff check # Lint — 0 errors (incl. ANN family)
uv run ruff format --check # Format — 0 diffs
uv run ty check # Type check — 0 diagnostics (strict-max)
uv run pytest -m "unit or contract" -q # Fast tests
uv run python tools/quality/validate_architecture.py # Import boundaries
uv run python tools/quality/check_migration_naming.py # Migration naming
uv run python tools/quality/check_migration_compat.py # Migration expand/contract safety
uv run python tools/quality/check_deploy_manifest_coverage.py
# Or all at once (recommended)
bash tools/dev/precheck.sh

ty is the canonical Python type-check gate (ADR-051); mypy still runs as an informational warning inside precheck.sh during the transition. The architecture validator catches import boundary violations: for example, an application service that accidentally imports from infrastructure, or a domain model that imports from application. Fix these before pushing; they block CI.

bash tools/dev/start.sh brings up the full local stack — Supabase (Postgres + Auth + Storage + Realtime) plus the API and workers. The script is the canonical local entrypoint; CONTRIBUTING.md → Quick start in the repo root is the source of truth for prerequisites and one-shot setup.

bash tools/dev/setup.sh # one-shot environment bootstrap (first run only)
bash tools/dev/start.sh # boot the local stack

Every feature follows the same vertical path through the codebase:

| Step | Layer | File | What you add |
| --- | --- | --- | --- |
| 1 | Domain | domain/{context}/models.py | Frozen Pydantic entity + pure functions |
| 2 | Application | application/{context}/repositories.py | Protocol for data access |
| 3 | Application | application/{context}/{service}.py | Service with constructor injection + Result |
| 4 | Infrastructure | infrastructure/{context}/repositories.py | SQL implementation + row mapper |
| 5 | API | spectral_api/routers/{context}.py | Endpoint with Depends() + match/case |
| 6 | Composition | spectral_api/dependencies.py | Factory function wiring infra to app |
| 7 | Tests | tests/unit/..., tests/contract/... | Domain unit tests + API contract tests |

The context structure keeps each feature self-contained. When you open a context directory, everything related to that concept is right there — models, services, repos, exceptions — across all three layers.