Payload CMS Database Migrations: Disable Push Mode and Run Migration-Only in Production
From dev-mode push to production-grade migrations in PayloadCMS + Postgres—safely, step by step.

I was three weeks into a Payload CMS project when I first ran `pnpm payload migrate` on staging and saw this prompt: "It looks like you've run Payload in dev mode, meaning you've dynamically pushed changes to your database." That moment makes it clear exactly where you are. Push mode got you this far fast, and now it's standing between you and production-grade deployments.
The transition is a clear sequence: back up the database, disable push in the adapter config, handle the dev-mode marker in payload_migrations, generate a no-op baseline that records your current schema as the starting point, and wire migrations into CI. After that, every schema change goes through a migration file—reviewable, reversible, and safe to run in a pipeline.
Who this is for
This guide is for teams running PayloadCMS 3.x on Postgres in production who still have database push (dev mode) enabled and now need to switch to a database migration-only workflow. You'll make this change to gain control and auditability over schema changes, ship safer deployments, and eliminate the risk of uncontrolled schema drift in production.
Assumptions & scope
To keep this practical and accurate, the steps assume Payload 3.x (validated with payload@3.80.x and @payloadcms/db-postgres@3.80.x), Postgres via @payloadcms/db-postgres, and Drizzle migrations managed by Payload. Commands use pnpm, but you can adapt them to npm/node. Recent Payload versions added --skip-empty and --force-accept-warning flags to migrate:create—both are covered where relevant. This guide intentionally excludes Mongo/SQLite and other adapters, and it stays provider‑agnostic so you can apply it with any Postgres host.
TL;DR — Transition checklist
If you just need the steps, follow this sequence end‑to‑end. The sections below explain each step in more detail and include context and verification guidance.
- Commit a clean working tree
- Create a Postgres backup (do not skip)
- Ensure `push: false` and `migrationDir` in `postgresAdapter`
- Restart your app/dev server to apply the config
- Check `payload_migrations` for the dev marker (`batch = -1` / `name = 'dev'`)
- Remove the marker if present (one-time)
- Generate a baseline migration
- Make it a no-op: generate a blank baseline with `--force-accept-warning`, or manually clear the `up` body
- Run migrations (records the baseline without schema changes)
- Verify success and that push is truly off
- Run `pnpm payload migrate:status` to confirm the baseline is recorded
- Adopt the schema → data → constraints workflow
- Configure multi-env and run order (local/staging → production)
- Add a CI step: migrate → build → start; fail the build on migration errors
- Keep a rollback plan with tested backups
Safety first: backups, commits, staging
Start from a fully recoverable state. Commit your work, ensure the pipeline is green, take a fresh Postgres backup, and—if possible—rehearse the transition on staging before touching production.
Before you change anything:
- Commit all changes and ensure CI is green.
- Take a database backup and store it securely (not in git).
- If you have staging, perform all steps there first.
Command (use environment variables and placeholders; do not paste secrets):
```shell
# Ensure the backups directory exists
mkdir -p backups

# Use your Postgres URL from environment variables
# Example: export DATABASE_URL="postgres://user:pass@host/db?sslmode=require"
pg_dump "$DATABASE_URL" > "backups/backup_$(date +%Y%m%d_%H%M%S).sql"
```
Core concepts:
- `pg_dump` creates a logical snapshot of your DB. It is fast and safe to run online.
- Keep backups encrypted and accessible to the recovery team. Periodically test restores.
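Backup hygiene is easy to script. The sketch below is a hypothetical retention helper in POSIX sh, assuming dumps are written locally to a flat directory with the `backup_*.sql` naming used above; adapt it if your backups live in object storage.

```shell
# Hypothetical retention helper: keep only the N newest local dumps.
prune_backups() {
  dir="$1"
  keep="$2"
  # List newest-first by mtime, skip the first $keep entries, delete the rest.
  ls -1t "$dir"/backup_*.sql 2>/dev/null | tail -n +$((keep + 1)) | while IFS= read -r f; do
    rm -- "$f"
  done
}
```

Call it after each successful dump, e.g. `prune_backups backups 7`, and make sure longer-term copies exist elsewhere before anything is pruned.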
Turn off push mode
Next, disable automatic schema push and explicitly set your migrations directory in the Postgres adapter. This prevents uncontrolled schema changes and ensures only migrations can modify the database.
Edit payload.config.ts to include:
```typescript
db: postgresAdapter({
  pool: {
    connectionString: process.env.DATABASE_URL,
  },
  push: false,
  migrationDir: './src/migrations',
}),
```
Core concepts:
- Push mode auto-applies schema changes from code at startup and is great for local prototyping, not production.
- With `push: false`, schema changes only happen when you run migrations.
- You must restart the server/dev process after changing adapter settings; the old config is cached in the running process.
Verification tip:
- After restart, introduce a harmless schema change locally and confirm it does not apply automatically. Revert the change.
Clear the dev-mode migration marker (one-time)
If this database previously ran Payload with dev push enabled, running pnpm payload migrate will trigger this prompt:
"It looks like you've run Payload in dev mode, meaning you've dynamically pushed changes to your database. If you'd like to run migrations, data loss will occur."
What Payload does with each response: Answering yes lets the migration proceed despite the warning—potential data loss can occur if the migration tries to recreate objects that push mode already created. Answering no cancels the migration and leaves the database untouched. For this transition workflow, the goal is to clear the marker so the prompt never appears again and pnpm payload migrate runs non-interactively in CI.
The prompt is triggered by a row in payload_migrations with batch = -1 (usually name = 'dev'). Remove that marker once, then continue.
You can do this in Neon SQL Editor, pgAdmin, or any SQL client connected to the same database:
```sql
-- Check if the marker exists
select id, name, batch
from payload_migrations
where batch = -1 or name = 'dev';

-- Remove the marker (one-time)
delete from payload_migrations
where batch = -1 or name = 'dev';
```
Core concepts:
- `push: false` prevents future auto-pushes.
- The SQL step clears historical dev-mode state so migrations run non-interactively.
Generate a baseline migration
Now create a migration that captures your current production schema state. This gives you a clean starting point for all future, controlled changes without altering any existing data today.
Generate the migration:
```shell
pnpm payload migrate:create
```
If Payload detects no schema changes (which can happen if push mode already synchronized everything), you'll get an interactive prompt asking whether to proceed. Two flags skip that prompt entirely:
- `--skip-empty`: skips the prompt and exits cleanly when no schema changes are detected. Useful in CI.
- `--force-accept-warning`: creates a blank migration file even without schema changes, which is exactly what the no-op baseline step needs.
You can also name a migration by passing a positional argument: `pnpm payload migrate:create initial-baseline`. Payload uses a timestamp by default, but a descriptive name makes the migration history easier to scan at a glance, which is especially useful for the baseline you're about to create.
Core concepts:
- The baseline file will contain SQL to create objects that already exist (because push mode previously created them). Running it as-is will fail with "already exists" errors.
- The next step converts this to a safe no-op so it only records a checkpoint in migration history.
Make the baseline safe (no‑op)
Convert the baseline's "up" step to a no‑op. This sidesteps "already exists" conflicts (types, tables, indexes) while recording a correct checkpoint in migration history.
The cleanest approach is to use `--force-accept-warning` when generating the baseline, which produces a migration file with an empty `up` body:
```shell
pnpm payload migrate:create --force-accept-warning
```
This creates a migration that runs without attempting to create any objects—exactly what you need for a baseline that records state without touching the schema.
If you already generated a baseline file and want to convert it manually:
- Open the generated migration file in `src/migrations/`.
- Remove all SQL from the `up` function body, or replace it with a comment.
- Keep `down` inert (empty or a comment).
- Save, then proceed to the next step.
Core concepts:
- The no-op baseline writes a record to the migration tracking table without attempting to recreate existing objects (enums, tables, indexes).
- Avoid wrapping every `CREATE` statement with `IF NOT EXISTS` (especially for enums); this is brittle and unnecessary for the baseline case.
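For reference, a safe baseline looks like this once cleared (a sketch; the generated file also carries `MigrateUpArgs`/`MigrateDownArgs` type imports from `@payloadcms/db-postgres`, omitted here so the snippet stands alone):

```typescript
// No-op baseline migration: records a checkpoint in migration history
// without touching the schema that push mode already created.
export async function up(): Promise<void> {
  // intentionally empty: the schema already exists in the database
}

export async function down(): Promise<void> {
  // intentionally empty: there is nothing to roll back for a baseline
}
```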
Apply the baseline
Run migrations to record the baseline checkpoint. This aligns the database with the migration history without changing the schema.
Run:
```shell
pnpm payload migrate
```
Core concepts:
- You should see the baseline migration marked as migrated. No schema changes should occur, given it's a no-op.
- The migration tracking table (managed by Drizzle/Payload) will include an entry for this baseline.
Verification:
- Check the CLI output for a successful migration entry.
- Sanity-check critical app flows; nothing should break or change.
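If you prefer to confirm directly in SQL, the same tracking table you inspected for the dev marker now holds the baseline entry (Payload's `migrate:status` command, covered later, is the more robust check):

```sql
-- The baseline should now be listed; the old dev marker (batch = -1) should be gone
select name, batch
from payload_migrations
order by batch desc, name;
```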
New workflow going forward
From here on, adopt a safe, repeatable pattern that minimizes production risk and makes reviews and rollbacks straightforward.
Pattern:
- Schema migration (additive, nullable)
- Data migration (backfill/transform)
- Constraint migration (make non-null, add FKs, enforce uniqueness)
Core concepts:
- Keep migrations small and focused; one concern per migration.
- Never edit a committed migration; create a new one to change direction.
- Review migration files in PRs like application code.
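As a sketch of the pattern, here is what the three migrations might contain for a hypothetical required `author_email` column on `posts` (illustrative SQL, not generated by Payload):

```sql
-- Migration 1 (schema, additive): a nullable column is safe for the running app
ALTER TABLE "posts" ADD COLUMN "author_email" varchar;

-- Migration 2 (data): backfill existing rows
UPDATE "posts" SET "author_email" = 'unknown@example.com'
WHERE "author_email" IS NULL;

-- Migration 3 (constraints): enforce only after every row is populated
ALTER TABLE "posts" ALTER COLUMN "author_email" SET NOT NULL;
```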
For a detailed walkthrough of this pattern in practice—including how to add a required column to a live table without downtime—see How to Update Schema in Production with Payload CMS Without Losing Data.
Multi‑environment setup
Use separate environment files and a consistent order of operations. This protects production and allows you to roll out safely through staging first.
Recommendations:
- Use `.env.local` for development and separate, securely managed env files (or CI secrets) for staging/production.
- Run order: local → staging → production.
- Prefer `push: false` in all environments for consistency. If you keep push enabled in dev temporarily, expect drift and conflicts.
Core concepts:
- `DATABASE_URL` controls which database is targeted by migration commands.
- Ensure CI uses the correct environment variables without hardcoding secrets.
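A minimal layout might look like this (placeholder values; real staging and production URLs belong in your CI provider's secret store, never in committed files):

```
# .env.local (development only, gitignored)
DATABASE_URL=postgres://localhost:5432/myapp_dev

# Staging and production: set DATABASE_URL in the CI/host secret store,
# not in a committed file.
```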
CI/CD integration
Add an explicit migration step to your pipeline so deploys fail fast when a migration cannot apply, rather than shipping incompatible code.
Sequence (provider-agnostic):
- Set environment variables (including `DATABASE_URL`)
- Run migrations: `pnpm payload migrate`
- Build the application: `pnpm build`
- Start the application
Core concepts:
- Treat a migration failure as a deployment failure; do not continue.
- Keep secrets in your CI secret store; never commit them.
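As one concrete expression of that sequence, a GitHub Actions job might look like this (a sketch; action versions, secret names, and the deploy step are placeholders to adapt to your setup):

```yaml
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: pnpm/action-setup@v4
    - run: pnpm install --frozen-lockfile
    # A non-zero exit here fails the job, so broken migrations never ship
    - name: Run migrations
      run: pnpm payload migrate
      env:
        DATABASE_URL: ${{ secrets.DATABASE_URL }}
        PAYLOAD_SECRET: ${{ secrets.PAYLOAD_SECRET }}
    - name: Build
      run: pnpm build
```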
Zero-downtime deployment pattern
Running migrations before building the new application version is the key to zero-downtime deployments. The running version of your app was built against the previous schema. An additive migration (adding a nullable column, adding a table) does not break the running version. Once the migration succeeds, you build and deploy the new code that depends on the new schema. Traffic is never interrupted.
The sequence in practice for Vercel, Railway, or Render:
- Set your build command to `pnpm payload migrate && pnpm build`.
- The deploy fails and rolls back automatically if the migration step exits with a non-zero code.
- Keep your migrations additive during the transition period. Add columns as nullable first, backfill data in a follow-up migration, then add constraints (NOT NULL, foreign keys) in a third migration. No version of the app ever reads a column that doesn't exist yet.
Verification and observability
Confirm the transition worked and set up simple checks to spot drift early. This builds confidence and shortens triage when something goes wrong.
Checklist:
- Migration command logs show the baseline was applied once and future migrations apply in order.
- Schema changes never apply automatically—if they do, push mode is not fully disabled or the process wasn't restarted.
- Run `pnpm payload migrate:status` to confirm the baseline appears as applied. This prints a table of all known migrations and their run state, which is more reliable than querying the tracking table directly.
- Smoke test critical flows after every migration.
Troubleshooting matrix
Here are common errors mapped to quick fixes to reduce time‑to‑recovery.
- "type/index/table already exists": The baseline isn't a no-op. Regenerate with `--force-accept-warning` or manually clear the `up` body, then re-apply.
- Push still applying changes: `push: false` not set, or the server wasn't restarted. Fix the config and restart.
- "migration not found" or not applied: Wrong `migrationDir` or file naming. Verify config and paths.
- CI migration step fails: Wrong env vars or missing DB privileges. Fix secrets/permissions and re-run.
- CI hangs waiting for input: Payload's interactive prompt is blocking the terminal. Add `--skip-empty` or `--force-accept-warning` to avoid interactive prompts in non-TTY environments.
- Migration state unclear after an error: Run `pnpm payload migrate:status` to see exactly which migrations have been applied and which are pending before retrying anything.
Migration status and rollback commands
Use these commands to inspect database migration state and undo changes in controlled environments. Run pnpm payload migrate:status before and after any migration operation to keep an accurate picture of what's applied.
Check migration state
```shell
pnpm payload migrate:status
```
Prints a table of every migration file Payload knows about, showing which have been applied and which are pending. Run this before deploying to confirm nothing unexpected is queued, and after deploying to confirm everything applied cleanly.
Roll back the last batch
```shell
pnpm payload migrate:down
```
Runs the down function of the last batch of migrations, reversing those schema changes. Use this on staging when a migration needs to be revised before re-running. In production, a forward fix — a new migration that restores the desired schema — is almost always the safer path.
Additional commands
These are safe in development and staging; treat them as destructive in production.
| Command | What it does |
|---|---|
| `pnpm payload migrate:refresh` | Rolls back all applied migrations, then re-runs them from the start |
| `pnpm payload migrate:reset` | Rolls back all applied migrations without re-running them |
| `pnpm payload migrate:fresh` | Drops all database entities and re-runs all migrations from scratch |
`migrate:fresh` and `migrate:reset` will destroy production data. Limit them to development or disposable staging databases.
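If you wrap these commands in scripts, a small guard can make accidents harder. The function below is a hypothetical sketch, assuming you expose the current environment in an `APP_ENV` variable; adapt the variable name to your setup.

```shell
# Hypothetical guard: refuse destructive migration commands in production.
guard_destructive() {
  if [ "${APP_ENV:-development}" = "production" ]; then
    echo "refusing destructive migration command: APP_ENV=production" >&2
    return 1
  fi
}
```

Usage: `guard_destructive && pnpm payload migrate:fresh`, so the destructive command only runs when the guard passes.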
A note on community-documented flags
Some examples in community posts show `pnpm payload migrate:down --count=3` or `pnpm payload migrate:down --to=20241201_143022`. These flags do not appear in the official Payload docs, which only document the basic "roll back last batch" behavior. Verify them empirically against your current Payload version before relying on them in CI.
Programmatic migration logic with MigrateUpArgs
Schema migrations generated by Payload handle structural changes (tables, columns, indexes). When you also need to transform or backfill data as part of a migration, use the MigrateUpArgs type to access Payload's Local API directly inside the migration file.
MigrateUpArgs is the typed argument passed to the up function in every Payload migration file:
```typescript
import type { MigrateUpArgs, MigrateDownArgs } from '@payloadcms/db-postgres'

export async function up({ payload, req, db }: MigrateUpArgs): Promise<void> {
  // schema or data logic here
}

export async function down({ payload, req, db }: MigrateDownArgs): Promise<void> {
  // rollback logic here
}
```
The three properties give you full control:
- `db`: direct Drizzle database access for raw SQL execution
- `payload`: Payload's Local API for collection queries and mutations
- `req`: the request object, which carries the active transaction context
A practical example: adding a viewCount column and backfilling it for existing records in the same migration:
```typescript
import type { MigrateUpArgs, MigrateDownArgs } from '@payloadcms/db-postgres'
import { sql } from '@payloadcms/db-postgres'

export async function up({ payload, req, db }: MigrateUpArgs): Promise<void> {
  // Step 1: add the column
  await db.execute(sql`ALTER TABLE "posts" ADD COLUMN "view_count" integer DEFAULT 0`)

  // Step 2: backfill existing records using the Local API
  const { docs } = await payload.find({
    collection: 'posts',
    limit: 0,
    req,
  })

  for (const doc of docs) {
    await payload.update({
      collection: 'posts',
      id: doc.id,
      data: { viewCount: 0 },
      req,
    })
  }
}

export async function down({ db }: MigrateDownArgs): Promise<void> {
  await db.execute(sql`ALTER TABLE "posts" DROP COLUMN IF EXISTS "view_count"`)
}
```
Use the CLI (`pnpm payload migrate:create`) for pure schema changes. Use `MigrateUpArgs` with the Local API when a migration needs to read or write collection data alongside the schema change.
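For large collections, the per-document loop above can be slow; a set-based SQL backfill does the same work in one statement, with the trade-off that Payload hooks and access control do not run (acceptable for a simple default like this):

```sql
-- Backfill in one statement; bypasses Payload hooks by design
UPDATE "posts" SET "view_count" = 0 WHERE "view_count" IS NULL;
```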
For larger data loads where hooks or side effects need to fire during import operations, the guide on Payload import triggers covers how to handle those reliably inside the migration context.
Rollback playbook
Have a minimal, practical plan to undo the last change. Favor forward fixes, but know when to restore from a tested backup for safety and speed.
Options:
- Prefer forward fixes: create a new migration that restores the desired schema/data.
- If data corruption/loss risk exists, restore from the most recent tested backup.
Core concepts:
- Test restore procedures periodically so you're confident under pressure.
- Avoid rewriting migration history; add new migrations instead. Only restore from backup if necessary.
For CLI commands to inspect migration state and roll back the last batch on staging, see the Migration status and rollback commands section above.
Conclusion
You've moved your production PayloadCMS project from dev‑mode push to a clean, database migration‑only workflow. Along the way, you took a safe backup, turned off push to stop uncontrolled schema changes, generated a no-op baseline so migration history matches reality, and applied it without touching existing data. You set up a practical development rhythm—schema, then data, then constraints—added environment discipline, wired migrations into CI with zero-downtime sequencing, and now have a troubleshooting reference and rollback plan ready. For teams that need data transformations alongside schema changes, the MigrateUpArgs programmatic API gives you Payload's Local API directly inside any migration file. From here, every change is deliberate, reviewable, and recoverable. If you're running Payload on distributed infrastructure — Kubernetes, ECS, or anything with multiple replicas — see Stop Running Payload Migrations at Runtime for how to gate replica startup on migration success.
If you need a Payload CMS migration specialist or want a senior engineer to review your migration setup and database architecture, I work with a small number of clients at a time.
Thanks, Matija