The Prebid Sales Agent uses Alembic for database schema migrations, backed by SQLAlchemy 2.0 as the ORM. The project contains 156 migration files that track the complete evolution of the database schema from initial creation through every feature addition, rename, and optimization.
Migrations are stored in the alembic/versions/ directory. Each migration file contains an upgrade() function (to apply the change) and a downgrade() function (to reverse it), enabling both forward and backward movement through the schema history.
The migration history covers these core tables:
| Table | Purpose |
|---|---|
| `tenants` | Publisher accounts and configuration |
| `principals` | Advertiser/buyer accounts with auth tokens |
| `products` | Advertising product catalog |
| `pricing_options` | Pricing models attached to products |
| `media_buys` | Campaign proposals and active orders |
| `creatives` | Creative assets with approval status |
| `workflow_steps` | Human-in-the-loop approval tasks |
| `audit_logs` | Complete operational history |
| `adapter_config` | Per-tenant ad server configuration |
| `inventory_profiles` | Inventory categorization and metadata |
| `creative_agents` | AI creative review agent configuration |
| `signals_agents` | AI signals agent configuration |
| `auth_config` | Per-tenant SSO/OIDC configuration |
| `users` | Admin UI user accounts |
| `currency_limits` | Per-tenant budget and currency constraints |
| `contexts` | Session and conversation tracking |
| `strategies` | Campaign optimization strategies |
The migration history includes the initial creation of these tables along with every subsequent rename, index, and constraint change.

To bring a database up to the latest schema version:

```bash
uv run alembic upgrade head
```

This applies every migration that has not yet been run against the current database.
```bash
# Apply the next 3 pending migrations
uv run alembic upgrade +3

# Roll back the most recent migration
uv run alembic downgrade -1

# Downgrade to a specific migration by its revision ID
uv run alembic downgrade abc123def456

# Show the current revision applied to the database
uv run alembic current

# Show the full history, marking the current revision
uv run alembic history --indicate-current

# Full history
uv run alembic history

# Last 10 migrations
uv run alembic history -r-10:
```
Alembic can detect changes between your SQLAlchemy models and the current database schema, and auto-generate the migration:
```bash
uv run alembic revision --autogenerate -m "add targeting_rules column to products"
```
This creates a new file in alembic/versions/ with an auto-generated upgrade() and downgrade() function. Always review the generated migration before applying it.
For changes that auto-generation cannot detect (renames, data transformations, custom SQL):
```bash
uv run alembic revision -m "rename format_id to format_identifier"
```
This creates an empty migration file. Fill in the upgrade() and downgrade() functions manually:
"""rename format_id to format_identifier
Revision ID: abc123def456
Revises: previous_revision_id
Create Date: 2026-03-09 10:00:00.000000
"""
from alembic import op
# revision identifiers
revision = "abc123def456"
down_revision = "previous_revision_id"
branch_labels = None
depends_on = None
def upgrade() -> None:
op.alter_column("products", "format_id", new_column_name="format_identifier")
def downgrade() -> None:
op.alter_column("products", "format_identifier", new_column_name="format_id")
Each migration should make one logical change. If you need to add a column and create an index, those can go in one migration. If you need to rename a table and add a new unrelated table, use two separate migrations.
Every upgrade() must have a corresponding downgrade(). This enables rollback in production if a deployment fails. Auto-generated migrations include downgrades by default, but verify they are correct.
Before submitting a migration, verify that the upgrade/downgrade cycle works:
```bash
# Apply the new migration
uv run alembic upgrade head

# Roll it back
uv run alembic downgrade -1

# Apply it again
uv run alembic upgrade head
```
When you need to transform existing data (not just schema changes), use a data migration:
```python
from alembic import op
import sqlalchemy as sa
from sqlalchemy import text


def upgrade() -> None:
    # Add the new column (nullable at first, with a default for new rows)
    op.add_column("products", sa.Column("status", sa.String(20), server_default="active"))

    # Backfill existing rows
    op.execute(text("UPDATE products SET status = 'active' WHERE status IS NULL"))

    # Now make it non-nullable
    op.alter_column("products", "status", nullable=False)


def downgrade() -> None:
    op.drop_column("products", "status")
```
For large tables, avoid one long-running UPDATE that locks the table: batch the backfill using op.execute() with LIMIT and a loop, or run the data migration as a separate step outside of Alembic.
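The batching loop itself is simple. Here is a sketch decoupled from Alembic so the control flow is clear: `execute_batch` is a stand-in for a call such as `op.get_bind().execute(...)` that performs one bounded UPDATE and returns the number of rows it changed (the function names here are illustrative, not part of the project):

```python
def backfill_in_batches(execute_batch, batch_size=1000, max_batches=10_000):
    """Repeatedly run a bounded UPDATE until no rows remain.

    execute_batch(batch_size) must perform one UPDATE ... LIMIT batch_size
    and return the number of rows it changed.
    """
    total = 0
    for _ in range(max_batches):
        changed = execute_batch(batch_size)
        total += changed
        if changed < batch_size:  # partial batch: nothing left to update
            break
    return total


# Simulate a table with 2500 rows needing a backfill.
remaining = [2500]

def fake_execute(batch_size):
    n = min(remaining[0], batch_size)
    remaining[0] -= n
    return n

print(backfill_in_batches(fake_execute))  # → 2500
```

Each iteration commits a small amount of work, so locks are held briefly and progress survives an interruption.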
In production migrations, prefer these non-destructive patterns:
| Instead of | Do this |
|---|---|
| `DROP COLUMN` | Add a deprecation comment, remove in a later release |
| `ALTER COLUMN` type (incompatible) | Add a new column, migrate data, drop old column in separate steps |
| `DROP TABLE` | Rename to `_deprecated_<table>`, drop in a later release |
| `NOT NULL` on existing column | Add with a default value, backfill, then add the constraint |
Adding a column:

```python
from alembic import op
import sqlalchemy as sa


def upgrade() -> None:
    op.add_column("media_buys", sa.Column(
        "priority",
        sa.Integer(),
        nullable=True,
        server_default="0",
        comment="Campaign priority for delivery ordering",
    ))


def downgrade() -> None:
    op.drop_column("media_buys", "priority")
```
Creating a table:

```python
from alembic import op
import sqlalchemy as sa


def upgrade() -> None:
    op.create_table(
        "campaign_notes",
        sa.Column("id", sa.String(36), primary_key=True),
        sa.Column("tenant_id", sa.String(36), nullable=False),
        sa.Column("media_buy_id", sa.String(36), nullable=False),
        sa.Column("author", sa.String(255), nullable=False),
        sa.Column("content", sa.Text(), nullable=False),
        sa.Column("created_at", sa.DateTime(), server_default=sa.func.now()),
        sa.ForeignKeyConstraint(
            ["tenant_id", "media_buy_id"],
            ["media_buys.tenant_id", "media_buys.id"],
        ),
    )


def downgrade() -> None:
    op.drop_table("campaign_notes")
```
Creating an index:

```python
from alembic import op


def upgrade() -> None:
    op.create_index(
        "ix_media_buys_status_start_date",
        "media_buys",
        ["status", "start_date"],
    )


def downgrade() -> None:
    op.drop_index("ix_media_buys_status_start_date", table_name="media_buys")
```
Renaming a column:

```python
from alembic import op


def upgrade() -> None:
    op.alter_column("products", "format_ids", new_column_name="format_identifiers")


def downgrade() -> None:
    op.alter_column("products", "format_identifiers", new_column_name="format_ids")
```
Adding a value to a PostgreSQL enum:

```python
from alembic import op


def upgrade() -> None:
    # PostgreSQL requires explicit ALTER TYPE for enums.
    # Note: before PostgreSQL 12, ADD VALUE cannot run inside a transaction block.
    op.execute("ALTER TYPE media_buy_status ADD VALUE IF NOT EXISTS 'paused'")


def downgrade() -> None:
    # PostgreSQL does not support removing enum values.
    # Document this as a one-way migration.
    pass
```
In the Docker deployment, migrations run automatically on startup via the db-init service defined in docker-compose.yml. The db-init container runs `alembic upgrade head` before the application containers start. This ensures the database schema is always up to date before the application begins accepting requests.
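The shape of such a service is roughly the following. This is a sketch, not the project's actual docker-compose.yml: the image, service names other than db-init, and the `depends_on` wiring are assumptions.

```yaml
services:
  db-init:
    build: .                            # same image as the application (assumption)
    command: uv run alembic upgrade head
    environment:
      DATABASE_URL: postgresql://...    # connection string for the target database
    depends_on:
      - postgres

  app:
    build: .
    depends_on:
      db-init:
        condition: service_completed_successfully
```

The `service_completed_successfully` condition makes the application wait until the migration container has exited with status 0.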
To prevent migrations from running on startup (e.g., in a pre-production environment where you want manual control):
```bash
SKIP_MIGRATIONS=true docker compose up -d
```
Or set the environment variable in your docker-compose.yml override:
```yaml
services:
  db-init:
    environment:
      SKIP_MIGRATIONS: "true"
```
In CI, migrations run against a fresh PostgreSQL service container. The test.yml workflow creates a temporary database and applies all migrations before running integration tests.
The Sales Agent provides a standardized context manager for database sessions, used throughout the codebase:
```python
from sqlalchemy import select

from src.core.database.database_session import get_db_session


async def get_products_impl(identity: ResolvedIdentity, brief: str | None = None):
    async with get_db_session() as session:
        query = select(Product).where(Product.tenant_id == identity.tenant_id)
        result = await session.execute(query)
        products = result.scalars().all()
        return products
```
The get_db_session() context manager yields an AsyncSession and handles transaction commit/rollback and connection cleanup. Always use get_db_session() for database access. Do not create sessions manually or use the engine directly. This ensures consistent transaction management and connection pooling across the application.
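The semantics a session context manager like this typically provides can be sketched in plain Python. This is an illustration only, simplified to synchronous code with a fake session; the project's real implementation lives in src/core/database/database_session.py:

```python
from contextlib import contextmanager


class FakeSession:
    """Stand-in for an AsyncSession, recording what happens to it."""

    def __init__(self):
        self.events = []

    def commit(self):
        self.events.append("commit")

    def rollback(self):
        self.events.append("rollback")

    def close(self):
        self.events.append("close")


@contextmanager
def get_db_session_sketch(session_factory=FakeSession):
    """Commit on clean exit, roll back on error, always close."""
    session = session_factory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()


# A clean exit commits and closes the session:
with get_db_session_sketch() as s:
    pass
print(s.events)  # → ['commit', 'close']
```

An exception inside the `with` block instead triggers `rollback` before `close`, which is why callers never need to manage transactions themselves.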
When two developers create migrations from the same parent revision, Alembic detects a branch:
```
FAILED: Multiple head revisions are present
```
To resolve:
```bash
# See the conflicting heads
uv run alembic heads

# Merge the branches
uv run alembic merge -m "merge migration branches" head1_id head2_id

# Apply the merge
uv run alembic upgrade head
```
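The merge command generates a revision whose down_revision points at both parent heads and whose upgrade()/downgrade() bodies are empty, since it only joins the revision graph. Roughly (the revision IDs here are placeholders):

```python
"""merge migration branches

Revision ID: merged123
Revises: head1_id, head2_id
"""

# A merge revision has two parents, so down_revision is a tuple.
revision = "merged123"
down_revision = ("head1_id", "head2_id")
branch_labels = None
depends_on = None


def upgrade() -> None:
    pass  # nothing to do: the merge only joins the revision graph


def downgrade() -> None:
    pass
```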
If the application fails to start with a message about pending migrations:
```bash
# Check what is pending
uv run alembic current
uv run alembic history --indicate-current

# Apply pending migrations
uv run alembic upgrade head
```
If a migration fails partway through, the database may be in an inconsistent state:
```bash
# Check current state
uv run alembic current

# If the failed migration left the alembic_version table pointing
# to a revision that was only partially applied, you may need to
# manually fix the database and stamp the correct version:
uv run alembic stamp <last_successful_revision>

# Then re-apply
uv run alembic upgrade head
```
This occurs when the alembic_version table references a revision that no longer exists in the code (e.g., after a rebase or squash):
```bash
# Stamp the database to the current head, bypassing history checks
uv run alembic stamp head

# Or reset completely (development only)
dropdb salesagent_dev && createdb salesagent_dev && uv run alembic upgrade head
```
If --autogenerate creates a migration with empty upgrade() and downgrade() functions, check the following:

- Ensure your models' metadata is wired into alembic/env.py via the target_metadata setting.
- Confirm the new or changed models are defined (and imported) in src/core/database/models.py.
- Make sure the database is already up to date: run uv run alembic upgrade head first, then generate.
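The env.py wiring that autogenerate depends on looks roughly like this. It is a sketch: the import path follows the models module mentioned above, and the `Base` name is an assumption.

```python
# alembic/env.py (excerpt)
from src.core.database.models import Base  # assumed shared declarative Base

# Autogenerate diffs the live database against this metadata; if it is
# None or missing your models, the generated migration will be empty.
target_metadata = Base.metadata
```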