Payload Postgres Adapter Guide: Drizzle Config & Migrations


21st April 2026 · Updated on: 3rd April 2026 · Matija Žiberna


@payloadcms/db-postgres is Payload's officially supported database adapter for PostgreSQL. It uses Drizzle ORM and node-postgres under the hood, and once Payload initialises, it exposes the full Drizzle client to you via payload.db.drizzle. You can run raw queries, use the relational query builder, execute SQL inside migrations, and extend the generated schema with tables Payload doesn't manage. You do not need to install drizzle-orm separately — everything you need re-exports from @payloadcms/db-postgres/drizzle. This article covers the full adapter config, how to access and use payload.db.drizzle, schema generation, migrations including the MigrateUpArgs type, and schema hooks for projects that need to add tables outside of Payload's collection model.

Tested with: Payload 3.x, @payloadcms/db-postgres 3.x, Node 20, PostgreSQL 16 (local Docker and Neon). The schema hooks and migration sections reflect the same API tested on a production client project and a multi-tenant platform I am actively building. Where behaviour differs between versions I note it.

I kept using postgresAdapter() in my config for over a year across multiple client projects before realising how much was sitting inside it. On one project I needed a custom analytics table alongside Payload's collections and wrote a separate Drizzle instance to manage it — which caused migration conflicts and a production incident when both tried to alter the same schema. The fix was beforeSchemaInit, which I had completely missed. On another project I hit replica stale reads after bulk writes because readReplicasAfterWriteInterval was set to 0 in an attempt to squeeze latency, and reads were landing on a replica that hadn't caught up yet. These are the kinds of details the official docs cover accurately but briefly. This guide fills in the gaps from real usage.

Installing the adapter

The @payloadcms/db-postgres package is installed alongside Payload in a standard setup:

pnpm add @payloadcms/db-postgres

If you are deploying to Vercel and want a package optimised for the Vercel Postgres connection pool, @payloadcms/db-vercel-postgres is also available:

pnpm add @payloadcms/db-vercel-postgres

The Vercel adapter is a thin wrapper that swaps the node-postgres driver for @vercel/postgres. Both adapters expose the same API — payload.db.drizzle, schema hooks, migration commands — so everything in this guide applies to both. One caveat: if your POSTGRES_URL points to localhost or 127.0.0.1, the Vercel adapter automatically falls back to the pg module, since @vercel/postgres does not work with local databases.
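Since the two adapters share the same options shape, switching is a one-import change. A minimal sketch, assuming the POSTGRES_URL environment variable that Vercel provisions:

```typescript
// File: src/payload.config.ts
// Illustrative sketch: only the import and connection variable differ
// from the postgresAdapter setup shown below.
import { buildConfig } from 'payload'
import { vercelPostgresAdapter } from '@payloadcms/db-vercel-postgres'

export default buildConfig({
  db: vercelPostgresAdapter({
    pool: {
      // A localhost URL here triggers the automatic pg fallback
      // described above.
      connectionString: process.env.POSTGRES_URL,
    },
  }),
})
```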

Full adapter configuration

The adapter call goes into the db field of your Payload config. Most projects only fill in pool, but the adapter exposes a range of options documented in the official Postgres adapter reference. Here is the full set with annotations on when each actually matters:

// File: src/payload.config.ts
import { buildConfig } from 'payload'
import { postgresAdapter } from '@payloadcms/db-postgres'

export default buildConfig({
  db: postgresAdapter({
    // Required — passed to node-postgres Pool
    pool: {
      connectionString: process.env.DATABASE_URL,
    },

    // Disable Drizzle push in development (default: enabled in dev only)
    push: false,

    // Where migration files are stored (default: ./src/migrations)
    migrationDir: './src/migrations',

    // 'serial' (default) or 'uuid' for id columns
    idType: 'uuid',

    // PostgreSQL schema namespace (default: 'public')
    schemaName: 'public',

    // Pass PgTransactionConfig or false to disable transactions globally
    transactionOptions: {
      isolationLevel: 'read committed',
    },

    // Prevent Payload from auto-creating the database if it doesn't exist
    disableCreateDatabase: false,

    // Table name suffixes for Payload's internal tables
    localesSuffix: '_locales',
    relationshipsSuffix: '_rels',
    versionsSuffix: '_v',

    // Override the output path for payload generate:db-schema
    generateSchemaOutputFile: './src/payload-generated-schema.ts',

    // Allow passing a custom id value on create operations
    allowIDOnCreate: false,

    // Read replicas — offload read-heavy traffic
    readReplicas: [
      { connectionString: process.env.DATABASE_READ_REPLICA_URL },
    ],

    // After a write, route reads to primary for this long (ms) to prevent stale replica reads
    readReplicasAfterWriteInterval: 2000,

    // Store blocks as JSON instead of relational tables — helps with large block counts
    blocksAsJSON: false,

    // Schema hooks — see the dedicated section below
    beforeSchemaInit: [],
    afterSchemaInit: [],

    // Run migrations at runtime on server startup instead of at build time
    // prodMigrations: migrations,
  }),
})

The options that most developers overlook:

readReplicas lets you offload SELECT traffic to a read replica without changing any application code. Payload routes read operations to the replica automatically after the write delay expires. readReplicasAfterWriteInterval is the guard that prevents stale reads immediately after a write — writes go to primary, and for the next 2 seconds reads also hit primary before the replica is trusted again.

blocksAsJSON is worth enabling for collections with a large number of blocks. Payload normally stores each block type as its own relational table, which produces many joins on reads. Storing blocks as a JSON column instead can significantly reduce query complexity for content-heavy sites, at the cost of losing the ability to query individual block fields via SQL.

transactionOptions sets the default isolation level for all transactions Payload manages. The Drizzle default is read committed for Postgres, which is fine for most applications. If you are running financial logic or anything where phantom reads matter, this is where you tighten it.

Push mode vs. migrations

Payload ships two ways to keep your database schema in sync with your Payload config: push mode and migrations. Understanding when each applies matters a lot, especially in teams.

Push mode uses Drizzle's db push internally. When you run the dev server, Payload watches your config and automatically pushes any schema changes — new fields, new collections, removed columns — directly to your local database. No migration files are created. This is the default in development and is designed to make local iteration fast.

Migrations are TypeScript files generated by payload migrate:create. Each file records the SQL diff between the previous state of your schema and the current one. Migrations are the only supported workflow for staging and production environments.

The critical rule from the official migration docs: do not mix push and migrations against the same database. If you use push locally and then run payload migrate, Payload will throw a warning because the migration system cannot reconcile a database that was modified by push with the migration history it expects. Treat your local dev database as a sandbox that push owns, and generate migrations as a separate step when you are ready to deploy.

In teams, this becomes a coordination question. The workflow that works well: each developer uses push locally against their own database instance, then once a feature is complete, one person runs payload migrate:create on a clean database that reflects the last committed migration, commits the resulting file, and everyone else runs payload migrate when they pull. Column renames are the main pain point with push — Drizzle interprets a rename as a drop-and-add, which triggers a data loss warning. That warning is accurate: push will drop the old column. For renames in development this is usually fine. In production, generate a migration and write the rename explicitly.

How Payload exposes the Drizzle instance

After Payload initialises, four properties are available on payload.db:

payload.db.drizzle is the full Drizzle client, initialised with your connection pool and the generated schema. You can use every Drizzle API on it — relational queries, select, insert, update, delete, raw SQL, transactions.

payload.db.tables is an object containing all of Drizzle's table definitions derived from your Payload collections. Each key is the table name in snake case.

payload.db.enums exposes any Postgres enums that Payload generated from your collection fields.

payload.db.relations exposes the Drizzle relation definitions used internally.

You do not need to install drizzle-orm as a separate dependency. Payload re-exports everything from @payloadcms/db-postgres/drizzle:

// File: src/app/api/stats/route.ts
import { eq, sql, and, or } from '@payloadcms/db-postgres/drizzle'

This import path gives you the full Drizzle operator library — eq, ne, gt, lt, like, inArray, sql, and, or, asc, desc, and everything else — without a separate package.

Generating the schema file

Before you can write type-safe Drizzle queries against your own collections, you need the generated schema file. Run:

npx payload generate:db-schema

This inspects your Payload config and outputs a file at ./src/payload-generated-schema.ts by default (override with generateSchemaOutputFile in the adapter config). The file contains Drizzle table definitions, relations, and enums for every collection, global, and internal Payload table.

You import from this file when writing queries:

// File: src/lib/db-queries.ts
import { posts, users } from './payload-generated-schema'

Re-run this command whenever you add or change a collection field. The generated file is not updated automatically — it is a snapshot you commit alongside your migrations. One important caveat: columns and tables added via beforeSchemaInit or afterSchemaInit hooks do not appear in the generated file unless you also mutate adapter.rawTables inside beforeSchemaInit. If you add custom tables via hooks and want them in the generated schema, that is the path.

Using payload.db.drizzle for queries

Relational query builder

The relational API is the cleanest way to fetch data when you need joins. It is available via payload.db.drizzle.query.*:

// File: src/lib/db-queries.ts
import { getPayload } from 'payload'
import config from '@payload-config'

export async function getRecentPostsWithAuthor() {
  const payload = await getPayload({ config })

  const posts = await payload.db.drizzle.query.posts.findMany({
    with: {
      author: true,
    },
    orderBy: (posts, { desc }) => [desc(posts.createdAt)],
    limit: 10,
  })

  return posts
}

The with field follows Drizzle's relational query syntax — you get full TypeScript inference on the result type, including nested relation shapes. This is useful for custom API routes or server actions where Payload's Local API would require multiple round-trips, or where you need a join Payload's query engine does not expose directly.

Select API with the sql template

For custom aggregations, full-text search, or anything that needs raw SQL expressions inside a structured query, use the sql template alongside the select API:

// File: src/app/api/analytics/route.ts
import { getPayload } from 'payload'
import config from '@payload-config'
import { eq, sql } from '@payloadcms/db-postgres/drizzle'
import { posts } from '../../../payload-generated-schema'

export async function GET() {
  const payload = await getPayload({ config })

  const result = await payload.db.drizzle
    .select({
      status: posts.status,
      count: sql<number>`count(*)`.mapWith(Number),
    })
    .from(posts)
    .groupBy(posts.status)

  return Response.json(result)
}

The sql template is parameterised — any dynamic value inside ${} is passed as a prepared statement parameter, not interpolated into the query string, so SQL injection through interpolated values is not possible. The sql<T> generic tells TypeScript what type to expect back from the database expression, and .mapWith(Number) coerces the raw driver value at runtime.

Executing raw SQL

For cases where neither the relational API nor the select builder expresses what you need, db.execute() accepts the sql template directly:

// File: src/lib/raw-queries.ts
import { getPayload } from 'payload'
import config from '@payload-config'
import { sql } from '@payloadcms/db-postgres/drizzle'

export async function searchPostsFullText(searchTerm: string) {
  const payload = await getPayload({ config })

  const { rows } = await payload.db.drizzle.execute(
    sql`
      SELECT id, title, slug, ts_rank(search_vector, query) AS rank
      FROM posts,
           to_tsquery('english', ${searchTerm}) query
      WHERE search_vector @@ query
      ORDER BY rank DESC
      LIMIT 20
    `
  )

  return rows
}

Any value you put inside ${} becomes a parameterised placeholder in the final query, so the searchTerm here is never interpolated as a string. Note that the template parameterises values, not identifiers: table and column names you write literally are sent as-is, and dynamic identifiers should come from embedded Drizzle table or column objects, whose names Drizzle escapes for you.

Extending the schema with hooks

beforeSchemaInit and afterSchemaInit are hooks that run during Payload's schema build phase, before migrations or push are applied. They give you a way to add tables and columns to the database schema that Payload does not manage.

beforeSchemaInit

Runs before Payload builds its schema. Use this to add tables from an existing database you are preserving, or to add entirely custom tables alongside Payload's collections:

// File: src/payload.config.ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { pgTable, serial, text, integer } from '@payloadcms/db-postgres/drizzle/pg-core'

postgresAdapter({
  beforeSchemaInit: [
    ({ schema, adapter }) => {
      return {
        ...schema,
        tables: {
          ...schema.tables,
          analytics_events: pgTable('analytics_events', {
            id: serial('id').primaryKey(),
            eventType: text('event_type').notNull(),
            postId: integer('post_id'),
            createdAt: text('created_at').notNull(),
          }),
        },
      }
    },
  ],
})

Payload will not try to manage analytics_events as a collection. It exists in the schema for migration tracking purposes, but Payload will not generate CRUD endpoints or admin UI for it. You query it directly via payload.db.drizzle using the table definition you added here.

If you also want this table to appear in the generated schema file from payload generate:db-schema, you need to mutate adapter.rawTables in addition to returning the schema:

beforeSchemaInit: [
  ({ schema, adapter }) => {
    adapter.rawTables.analytics_events = {
      name: 'analytics_events',
      columns: {
        id: { name: 'id', type: 'serial', primaryKey: true },
        event_type: { name: 'event_type', type: 'text', notNull: true },
        post_id: { name: 'post_id', type: 'integer' },
        created_at: { name: 'created_at', type: 'text', notNull: true },
      },
    }

    return {
      ...schema,
      tables: {
        ...schema.tables,
        analytics_events: pgTable('analytics_events', {
          id: serial('id').primaryKey(),
          eventType: text('event_type').notNull(),
          postId: integer('post_id'),
          createdAt: text('created_at').notNull(),
        }),
      },
    }
  },
],

afterSchemaInit

Runs after Payload has built its schema. Use afterSchemaInit with the extendTable utility to add columns or indexes to tables Payload already manages:

// File: src/payload.config.ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { index, integer } from '@payloadcms/db-postgres/drizzle/pg-core'

postgresAdapter({
  afterSchemaInit: [
    ({ schema, extendTable }) => {
      extendTable({
        table: schema.tables.posts,
        columns: {
          viewCount: integer('view_count').default(0),
        },
        extraConfig: (table) => ({
          view_count_idx: index('view_count_idx').on(table.viewCount),
        }),
      })

      return schema
    },
  ],
})

This adds a view_count column and index to the posts table. Payload's collection definition does not need to know about it — it lives outside the Payload config. You query it via payload.db.drizzle with a raw select or the sql template.
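A hedged sketch of what that query looks like, assuming the view_count column added by the hook above (Payload's Local API does not know the column exists, so payload.db.drizzle is the only way to touch it):

```typescript
// File: src/lib/view-count.ts
// Illustrative sketch: assumes the view_count column added via
// afterSchemaInit in the example above.
import { getPayload } from 'payload'
import config from '@payload-config'
import { sql } from '@payloadcms/db-postgres/drizzle'

export async function incrementViewCount(postId: number) {
  const payload = await getPayload({ config })

  // Set-based increment that returns the new value in one round-trip.
  // postId is passed as a prepared statement parameter.
  const { rows } = await payload.db.drizzle.execute(sql`
    UPDATE posts
    SET view_count = view_count + 1
    WHERE id = ${postId}
    RETURNING view_count
  `)

  return rows[0]?.view_count
}
```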

Migrations

The migration file structure

When you run payload migrate:create, Payload generates a migration file in your migrationDir. Every migration file exports two functions: up and down.

// File: src/migrations/20260401_add_view_count.ts
import { type MigrateUpArgs, type MigrateDownArgs, sql } from '@payloadcms/db-postgres'

export async function up({ db, payload, req }: MigrateUpArgs): Promise<void> {
  await db.execute(sql`
    ALTER TABLE posts ADD COLUMN IF NOT EXISTS view_count integer DEFAULT 0
  `)
}

export async function down({ db, payload, req }: MigrateDownArgs): Promise<void> {
  await db.execute(sql`
    ALTER TABLE posts DROP COLUMN IF EXISTS view_count
  `)
}

The MigrateUpArgs type gives you three things. payload is the full initialised Payload instance — you can use the Local API inside migrations. req is a request object that carries the active transaction — pass it to any payload.* or payload.db.* call to run that operation inside the migration's transaction. db is the raw Drizzle client, pointing at the same transaction, ready for db.execute().

Payload wraps each migration in a transaction automatically. If up throws at any point, the transaction rolls back and the database is left unchanged. You do not need to manage the transaction yourself.
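Passing req through is what keeps Local API calls inside the migration's transaction. A hedged sketch of a data backfill combining both (the category field and its default value are illustrative):

```typescript
// File: src/migrations/20260402_backfill_category.ts
// Illustrative sketch: the column, collection, and default are assumptions.
import { type MigrateUpArgs, type MigrateDownArgs, sql } from '@payloadcms/db-postgres'

export async function up({ db, payload, req }: MigrateUpArgs): Promise<void> {
  // Raw DDL through the migration's Drizzle client
  await db.execute(sql`
    ALTER TABLE posts ADD COLUMN IF NOT EXISTS category text
  `)

  // Local API backfill: passing req keeps each update inside the
  // migration's transaction, so a failure rolls everything back together.
  const { docs } = await payload.find({ collection: 'posts', limit: 0, req })
  for (const doc of docs) {
    await payload.update({
      collection: 'posts',
      id: doc.id,
      data: { category: 'general' },
      req,
    })
  }
}

export async function down({ db }: MigrateDownArgs): Promise<void> {
  await db.execute(sql`ALTER TABLE posts DROP COLUMN IF EXISTS category`)
}
```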

Migration commands

All migration commands run via npm run payload (or pnpm payload, yarn payload):

# Run all pending migrations
npm run payload migrate

# Create a new migration file
npm run payload migrate:create optional-name

# Show which migrations have run and which have not
npm run payload migrate:status

# Roll back the last batch of migrations
npm run payload migrate:down

# Roll back all migrations, then re-run them all
npm run payload migrate:refresh

# Roll back all migrations
npm run payload migrate:reset

# Drop all tables and re-run all migrations from scratch
npm run payload migrate:fresh

For CI, the standard pattern is a ci script in package.json that runs migrations before the build:

{
  "scripts": {
    "dev": "next dev --turbo",
    "build": "next build",
    "payload": "cross-env PAYLOAD_CONFIG_PATH=src/payload.config.ts payload",
    "ci": "payload migrate && pnpm build"
  }
}

Your deployment platform uses ci as the build command. It connects to the production database, runs any pending migrations in order, and only then starts the Next.js build. If a migration fails, the deployment is rejected before any code ships.

Running migrations at server startup

For long-running servers or containers where running migrations at build time is not practical, Payload supports running them at initialisation:

// File: src/payload.config.ts
import { migrations } from './migrations'

export default buildConfig({
  db: postgresAdapter({
    pool: { connectionString: process.env.DATABASE_URL },
    prodMigrations: migrations,
  }),
})

Running payload migrate:create also maintains an index.ts in your migrations directory that aggregates all migration files. Importing that and passing it to prodMigrations tells Payload to run any pending migrations before completing initialisation, in production only. On serverless platforms like Vercel, this adds latency to cold starts, so it suits containers and long-lived servers more than function runtimes.

When to reach for raw Drizzle vs. payload.db.*

| Use case | Recommended approach |
| --- | --- |
| Custom read queries, aggregations, full-text search | payload.db.drizzle select or relational API |
| Bulk writes and large imports with transaction control | payload.db.* with beginTransaction (see the large imports guide) |
| Raw SQL inside a migration | db.execute(sql`...`) via MigrateUpArgs.db |
| Adding a table Payload doesn't manage | beforeSchemaInit hook |
| Adding a column to an existing Payload table | afterSchemaInit with extendTable |
| Standard CRUD in application logic | Payload Local API (payload.create, payload.find) |

The key distinction is read vs. write. For reads, raw Drizzle is almost always the right tool — it gives you full SQL expressiveness with TypeScript safety and no Payload lifecycle overhead. For writes, the question is whether you need hooks, validation, and access control to run. If you do, the Local API is the right choice. If you are doing bulk work, data migrations, or anything where skipping hooks is intentional, payload.db.* gives you transactional control inside Payload's abstraction. Raw Drizzle writes via payload.db.drizzle are the right choice when a single SQL statement — a bulk upsert, a set-based update — replaces thousands of individual operations.
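To make the set-based case concrete, here is a hedged sketch of one statement replacing thousands of payload.update calls (the archiving rule is illustrative, and hooks are deliberately skipped):

```typescript
// File: src/lib/bulk-archive.ts
// Illustrative sketch: the status values and age cutoff are assumptions.
import { getPayload } from 'payload'
import config from '@payload-config'
import { sql } from '@payloadcms/db-postgres/drizzle'

export async function archiveStalePosts() {
  const payload = await getPayload({ config })

  // One set-based UPDATE instead of a payload.update() call per row.
  // No Payload hooks run here, so any denormalised data those hooks
  // normally maintain must be handled in the same statement or after it.
  const { rows } = await payload.db.drizzle.execute(sql`
    UPDATE posts
    SET status = 'archived'
    WHERE status = 'published'
      AND updated_at < now() - interval '2 years'
    RETURNING id
  `)

  return rows.length
}
```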

Non-obvious gotchas from real Payload projects

These are the issues I have personally hit or debugged for others — not edge cases, but things that trip up most developers at least once.

1. Importing from drizzle-orm directly breaks at runtime.

The most common mistake when starting to use payload.db.drizzle is reaching for drizzle-orm as a package import:

// This will cause version mismatch errors at runtime
import { eq } from 'drizzle-orm'

Payload bundles its own version of Drizzle internally. If your project also has drizzle-orm installed — even the same version number — you will end up with two Drizzle instances that do not share the same column reference objects, and queries will silently fail or throw type errors. The fix is always to import from the re-export path:

import { eq, sql, and } from '@payloadcms/db-postgres/drizzle'

2. Column renames in push mode drop data without a clear warning.

When you rename a field in a Payload collection and push mode is active, Drizzle interprets this as a drop-plus-add — it drops the old column and creates a new empty one. The data loss warning Drizzle shows is easy to miss if you are not watching the dev server output carefully. If you rename a field that has data and the warning scrolls past, the column data is gone. For renames, always create a migration with payload migrate:create and write the rename as an explicit ALTER TABLE ... RENAME COLUMN statement.
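A hedged sketch of what that explicit rename migration looks like (the column names are illustrative):

```typescript
// File: src/migrations/20260403_rename_subtitle.ts
// Illustrative sketch: an explicit rename preserves the column's data,
// where push would drop the column and recreate it empty.
import { type MigrateUpArgs, type MigrateDownArgs, sql } from '@payloadcms/db-postgres'

export async function up({ db }: MigrateUpArgs): Promise<void> {
  await db.execute(sql`
    ALTER TABLE posts RENAME COLUMN subtitle TO tagline
  `)
}

export async function down({ db }: MigrateDownArgs): Promise<void> {
  await db.execute(sql`
    ALTER TABLE posts RENAME COLUMN tagline TO subtitle
  `)
}
```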

3. payload.db.tables keys are snake_case with adapter suffixes, not collection slugs.

When you access payload.db.tables, the keys follow Drizzle's naming convention, not Payload's collection slugs. A collection with slug blogPosts becomes blog_posts. Localised fields live in a separate table with the _locales suffix (blog_posts_locales). Relationships get _rels. Versions get _v. If you are building a query that joins across these tables, you need to know the actual table names before writing the join, and the safest way to discover them is to run npx payload generate:db-schema and inspect the output file rather than guessing.
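The mapping is mechanical enough to sketch. This is a hypothetical helper, not a Payload API (Payload uses its own internal utility), shown only to make the camelCase-slug to snake_case-table convention concrete:

```typescript
// Hypothetical illustration: mirrors the naming convention described
// above, but is not how Payload derives names internally.
function slugToTableName(slug: string, suffix = ''): string {
  return slug.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase() + suffix
}

// slugToTableName('blogPosts')             → 'blog_posts'
// slugToTableName('blogPosts', '_locales') → 'blog_posts_locales'
// slugToTableName('blogPosts', '_rels')    → 'blog_posts_rels'
```

When in doubt, prefer inspecting the generated schema file over relying on any conversion rule.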

4. beforeSchemaInit tables do not appear in the generated schema unless you mutate adapter.rawTables.

This is documented but easy to miss. If you add a custom table via beforeSchemaInit and then run npx payload generate:db-schema, that table will not be in the output file. This means you cannot import the TypeScript table definition from the generated schema and use it in typed Drizzle queries. The fix is to mutate adapter.rawTables inside the same hook alongside returning the schema — the raw tables object is what the schema generator reads, not the Drizzle schema object you return from the hook. I showed this in the beforeSchemaInit section above.

5. Setting readReplicasAfterWriteInterval: 0 produces stale reads after bulk writes.

The default value of 2000 ms exists for a reason. After a write to the primary, Postgres replication to a read replica is not instant — it typically takes tens to hundreds of milliseconds depending on network and load. If you set readReplicasAfterWriteInterval to 0 to reduce latency, reads immediately after a write can land on a replica that has not yet received the written data. The most visible symptom is an API endpoint that creates a record and then immediately fetches it returning an empty or stale result. Unless you have measured your replication lag and have a specific reason to reduce the interval, leave the default.

6. blocksAsJSON cannot be enabled on a collection that already has relational block data.

Enabling blocksAsJSON on an existing collection does not migrate the existing rows from the relational tables into the JSON column. The existing data simply becomes invisible to Payload's query layer while the relational tables remain in the database. If you want to switch an existing collection to blocksAsJSON, you need a migration that reads the current relational data, serialises it to JSON, writes it to the new column, and then drops the old relational tables. This is non-trivial. The safe time to enable blocksAsJSON is when a collection is first created before it has data, or after a tested migration.

FAQ

Do I need to install drizzle-orm separately?

No. Everything you need — eq, sql, and, or, and the full operator library — re-exports from @payloadcms/db-postgres/drizzle. Adding drizzle-orm as a separate dependency in the same project can cause version conflicts, so prefer the re-export path.

Can I use drizzle-kit alongside Payload?

You should not. Payload manages Drizzle's migration system internally — payload migrate:create generates the migration files, and payload migrate runs them. Introducing drizzle-kit into the same project creates two independent migration histories against the same database, which quickly produces conflicts. Use Payload's migration commands exclusively.

Does querying via payload.db.drizzle bypass Payload hooks?

Yes, completely. payload.db.drizzle is the raw database client. Reads and writes through it skip the full Payload document lifecycle — no beforeOperation, afterOperation, beforeChange, afterChange, or access control. For reads, this is usually fine and often desirable. For writes, be deliberate: any denormalised data your hooks normally maintain will not be updated automatically.

Should my team use push or migrations locally?

Push is faster for solo development — it keeps your local database in sync without any manual steps. In a team, push works well as long as every developer runs against their own local database instance. The migration files live in source control and are the source of truth for staging and production. Commit migration files alongside the Payload config changes that generated them, and run payload migrate when pulling someone else's migration.

How do I run a raw query inside a migration's transaction?

Use the db argument from MigrateUpArgs together with the sql template:

import { type MigrateUpArgs, sql } from '@payloadcms/db-postgres'

export async function up({ db }: MigrateUpArgs) {
  const { rows } = await db.execute(sql`SELECT id FROM posts WHERE status = 'draft'`)
  // rows is typed as Record<string, unknown>[]
}

The db client is already connected to the migration's transaction. Anything you execute through it participates in the same atomic operation. If the migration fails after this point, this query rolls back too.

Conclusion

@payloadcms/db-postgres is a lot more than a connection string wrapper. The adapter exposes a complete Drizzle client at payload.db.drizzle the moment Payload initialises, which you can use for custom reads, raw SQL, relational joins, and queries that the Payload API cannot express. Schema hooks give you a way to extend the database with tables and columns outside of Payload's collection model, without touching migration files. And migrations — generated in TypeScript, each wrapping your up function in a transaction automatically — are the only supported path for production schema changes.

The gotchas section exists because these are the things that cost real time in real projects. The import path matters. Column renames in push mode are destructive. payload.db.tables uses snake_case with adapter suffixes. beforeSchemaInit tables need adapter.rawTables to appear in the generated schema. readReplicasAfterWriteInterval: 0 is a trap. And blocksAsJSON needs a migration to retrofit onto existing data.

The practical starting point: reach for payload.db.drizzle when you need a query Payload's Local API cannot express, use schema hooks when you need custom tables alongside your collections, and keep push strictly local while migrations own everything that ships to production.

Let me know in the comments if you have questions, and subscribe for more practical development guides.

Thanks, Matija


