Sanity to Payload CMS: The Complete 5-Step Migration Guide

30th March 2026 · Updated on 14th March 2026 · Matija Žiberna
Migrating from Sanity to Payload CMS involves five concrete steps: exporting your Sanity dataset as NDJSON, mapping your schemas from Sanity's document model to Payload Collections and Globals, replacing GROQ queries with Payload's Local API, converting Portable Text to Lexical JSON, and downloading and reimporting your media assets. Each step has specific tooling and a few non-obvious gotchas — especially the Portable Text conversion, which is the step that trips most developers up. This guide walks through all five in TypeScript, with working code you can adapt directly.

I migrated two client projects off Sanity over the past year, and neither was particularly painful once I understood the structural differences. The GROQ-to-Payload mental shift is the biggest adjustment. After that, the rest mostly falls into place.

If you're still weighing whether to move at all, the Payload CMS vs Sanity comparison covers the architecture and trade-offs in detail. This guide assumes you've already made the call.


Why Sanity Developers Leave

Being honest about this matters, because Sanity is a genuinely well-built product. The real-time collaboration model is excellent. The Studio customization API is flexible. GROQ is expressive once you know it.

The reasons teams migrate are specific and architectural, not quality failures.

The most common one is pricing. Sanity's per-seat model for the hosted Studio adds up quickly on teams with more than a handful of content editors, and asset storage on the Content Lake compounds the cost as a project grows. Developers who want their data in Postgres — for compliance, for cost control, or simply because they already have existing database infrastructure — hit a structural ceiling with Sanity's hosted data model.

The second common driver is GROQ itself. It's powerful, but it's a proprietary query language that's hard to onboard junior developers onto, hard to type fully without Sanity TypeGen, and tightly coupled to Sanity's API layer. Developers who've grown comfortable with standard SQL semantics or TypeScript query builders often find GROQ's "magic" frustrating rather than elegant.

The third driver is the Studio separation. Sanity Studio is a separate frontend you deploy independently. Payload's admin UI lives inside your application. For teams that want a single deployment, a single auth model, and a single codebase, that difference matters more than it sounds.

None of these are reasons to call Sanity a bad CMS. They're reasons to consider Payload if your constraints are different.


Conceptual Mapping: Sanity Primitives to Payload Equivalents

Before touching any code, the most important thing to do is map your mental model. Sanity and Payload use different vocabulary for similar ideas, and getting comfortable with the mapping makes every subsequent step easier.

Document Types and Collections

| Sanity concept | Payload equivalent | Notes |
| --- | --- | --- |
| _type: 'document' (multi-instance) | Collection | The direct equivalent. Each document type becomes a Payload Collection. |
| Singleton document | Global | Payload Globals are designed for single-instance documents like site settings or nav config. |
| object (embedded, no _id) | Blocks / nested fields | Payload handles inline objects as field groups or block arrays depending on structure. |
| reference | Relationship field | relationTo in Payload. Supports single and multiple collections. |
| image with hotspot | Upload field with focalPoint | Direct 1:1 map — hotspot.x → focalPoint.x, hotspot.y → focalPoint.y. |
| portableText / array of blocks | Lexical rich text field | Structural difference — not a 1:1 map. See the converter section below. |

Field Types

| Sanity field type | Payload field type | Notes |
| --- | --- | --- |
| string | text | Direct equivalent. |
| text (multiline) | textarea | Direct equivalent. |
| number | number | Direct equivalent. |
| boolean | checkbox | Direct equivalent. |
| slug | text with custom validation and a unique constraint | Payload doesn't have a native slug field type, but a text field with unique: true replicates the behavior cleanly. |
| array of strings | array field with a text row | |
| array of objects | array field with named fields | |
| reference | relationship | relationTo: 'collection-slug' |
| image | upload | Wire to your upload collection. |

The Studio vs Embedded Admin

This is worth flagging explicitly before you brief content editors. Sanity Studio is a separate React application that you deploy to its own URL (or host on Sanity's *.sanity.studio domain). It has its own auth, its own configuration, and its own deployment pipeline.

Payload's admin UI is embedded in your Next.js application. It runs on the same server, under the same domain, typically at /admin. There is no separate Studio to deploy. Content editors log in at your application URL. For most teams this simplifies things considerably — one deployment, one auth system, one codebase.

The implications for editors are minor: same browser, different URL. The implications for developers are significant: you configure the admin UI through your Payload config in TypeScript, not through a separate Studio config file.


Replacing GROQ: Three Real Query Examples

This is the section most Sanity developers spend the most time on, and it's the biggest mental shift in the migration. GROQ feels like JavaScript array filtering — compact, expression-based, with a functional style. Payload's query API feels more like a typed SQL query builder — explicit, composable, and more verbose but easier to reason about in a team setting.

One quick note on API choice before the examples: Payload has two query surfaces. The Local API (payload.find(...)) runs server-side without HTTP overhead and is ideal for migration scripts, Next.js Server Components, and any server-side data fetching. The REST API (GET /api/collection-slug) is for external clients. In migration scripts, always use the Local API.

For more background on GROQ specifically, the GROQ vs GraphQL guide covers the query language in depth.

Example 1: Fetching a List with a Populated Reference

In Sanity, you expand a reference with the -> operator:

// GROQ
*[_type == "post"]{ title, author->{ name } }

In Payload, relationship population is controlled by the depth parameter. Setting depth: 1 tells Payload to populate one level of relationships — the equivalent of a single -> in GROQ:

// File: scripts/migrate/fetch-posts.ts
import { getPayloadClient } from '@/lib/payload'

const payload = await getPayloadClient({ seed: false })

const posts = await payload.find({
  collection: 'posts',
  depth: 1, // Populates the author relationship one level deep
})

// posts.docs[0].author is now a fully populated object: { id, name, ... }
// Without depth: 1, author would be just a string ID

For the REST API equivalent: GET /api/posts?depth=1

The depth parameter cascades — depth: 2 would populate relationships inside the populated relationship, equivalent to author->{ name, team->{ name } } in GROQ.

Example 2: Filtering by Field Value

In Sanity, you filter inline in the query expression:

// GROQ
*[_type == "post" && published == true][0..9]

In Payload, filtering happens through a where object with an explicit operator:

// File: scripts/migrate/fetch-published.ts
const publishedPosts = await payload.find({
  collection: 'posts',
  where: {
    published: {
      equals: true,
    },
  },
  limit: 10,
})

For the REST API equivalent: GET /api/posts?where[published][equals]=true&limit=10

Payload supports a full set of operators in where: equals, not_equals, greater_than, less_than, like, contains, exists, in, not_in. For compound conditions, use and and or arrays:

// File: scripts/migrate/fetch-filtered.ts
const posts = await payload.find({
  collection: 'posts',
  where: {
    and: [
      { published: { equals: true } },
      { category: { equals: 'engineering' } },
    ],
  },
})

This maps to GROQ's && operator. Payload's form is more explicit, but it's fully type-safe and composable — you can build the where object dynamically without string interpolation.
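Because the where object is plain data, you can assemble it from optional inputs instead of interpolating strings. A minimal sketch (the published and category field names are illustrative):

```typescript
// Sketch: building a Payload where clause from optional filters.
// Field names (published, category) are illustrative; the object shape
// follows Payload's query syntax.
type Where = Record<string, unknown>

export function buildPostFilter(opts: {
  publishedOnly?: boolean
  category?: string
}): Where {
  const and: Where[] = []
  if (opts.publishedOnly) and.push({ published: { equals: true } })
  if (opts.category) and.push({ category: { equals: opts.category } })
  return and.length > 0 ? { and } : {}
}
```

The result drops straight into payload.find({ collection: 'posts', where: buildPostFilter(opts) }).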

Example 3: Ordering and Pagination

In Sanity, ordering and slicing use a pipe syntax:

// GROQ
*[_type == "post"] | order(publishedAt desc)[0..4]

In Payload, sort direction is controlled by a - prefix, and pagination uses limit and page:

// File: scripts/migrate/fetch-paginated.ts
const latestPosts = await payload.find({
  collection: 'posts',
  sort: '-publishedAt', // '-' prefix means descending
  limit: 5,
  page: 1,
})

// Response shape:
// { docs: [...], totalDocs: 42, totalPages: 9, page: 1, hasNextPage: true }

For the REST API equivalent: GET /api/posts?sort=-publishedAt&limit=5&page=1

The key difference from GROQ's slice syntax ([0..4]) is that Payload thinks in pages, not offsets. If you're used to building cursor-based pagination from GROQ's slice arithmetic, you'll want to switch to page-based navigation in Payload. The response always includes totalDocs, totalPages, hasNextPage, and hasPrevPage, which makes UI pagination straightforward without extra queries.
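Since every response reports hasNextPage, draining an entire collection is a short loop. A sketch where find stands in for a call like (page) => payload.find({ collection: 'posts', limit: 100, page }), so the paging logic is visible on its own:

```typescript
// Sketch: fetch all documents page by page using Payload's pagination
// metadata. `find` is any function returning the paginated shape.
type Paginated<T> = { docs: T[]; hasNextPage: boolean }

export async function fetchAllPages<T>(
  find: (page: number) => Promise<Paginated<T>>,
): Promise<T[]> {
  const all: T[] = []
  let page = 1
  for (;;) {
    const res = await find(page)
    all.push(...res.docs)
    if (!res.hasNextPage) break
    page++
  }
  return all
}
```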

You can find the full query operator reference in Payload's queries documentation and the Local API reference at payloadcms.com/docs/local-api/overview.


Exporting Your Sanity Dataset

Sanity's CLI provides a first-class export command that produces a full NDJSON dump of your content:

# Terminal
sanity dataset export [dataset-name] ./export --token YOUR_TOKEN

Replace [dataset-name] with production (or whatever your dataset is named). The --token flag accepts a Sanity API token with read access. The command produces a .tar.gz archive containing one NDJSON file — one JSON document per line.

After extracting the archive, each line of the NDJSON file looks something like this:

// export/production.ndjson (one line, expanded for readability)
{
  "_id": "abc123",
  "_type": "post",
  "_createdAt": "2025-01-15T10:00:00Z",
  "_updatedAt": "2025-01-20T14:30:00Z",
  "title": "My First Post",
  "slug": { "_type": "slug", "current": "my-first-post" },
  "published": true,
  "author": { "_type": "reference", "_ref": "authorDocId456" },
  "body": [
    {
      "_key": "abc",
      "_type": "block",
      "style": "normal",
      "children": [{ "_key": "xyz", "_type": "span", "text": "Hello world", "marks": [] }],
      "markDefs": []
    }
  ]
}

Two things to note about the export format. First, reference fields are stored as objects with a _ref property pointing to the _id of the referenced document — not the full document. This means when you import into Payload, you need to maintain an ID mapping (old Sanity _id → new Payload document ID) and resolve references after importing all documents. Import in dependency order: authors before posts, categories before articles, parents before children.
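That resolution pass can be sketched as a small helper; the map is filled in as each document is created (names here are illustrative):

```typescript
// Sketch: resolve a Sanity reference field ({ _type: 'reference', _ref })
// to the new Payload document ID via the old-ID → new-ID map built
// during import.
type SanityRef = { _type: 'reference'; _ref: string }

export function resolveRef(
  field: SanityRef | undefined,
  idMap: Map<string, string>,
): string | null {
  if (!field || field._type !== 'reference') return null
  return idMap.get(field._ref) ?? null
}
```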

Second, the export contains content documents only, not binary assets. Images and files are referenced by their asset _id, but the binaries live on Sanity's CDN. The asset migration section below covers downloading and reimporting those.


Portable Text to Lexical: A Working TypeScript Converter

This is the most technically involved step in the migration, and the one with the least existing tooling. Portable Text and Lexical are both JSON representations of rich text, but they use fundamentally different structures.

Portable Text is a flat array of block objects. Each block represents a paragraph, heading, or other element. Inline formatting (bold, italic, links) is stored separately in markDefs, and each span references its marks by key:

// Portable Text structure
[
  {
    "_type": "block",
    "style": "h2",
    "children": [
      { "_type": "span", "text": "Hello ", "marks": [] },
      { "_type": "span", "text": "world", "marks": ["strong"] }
    ],
    "markDefs": []
  },
  {
    "_type": "block",
    "style": "normal",
    "children": [
      { "_type": "span", "text": "Click ", "marks": [] },
      { "_type": "span", "text": "here", "marks": ["link1"] }
    ],
    "markDefs": [{ "_key": "link1", "_type": "link", "href": "https://example.com" }]
  }
]

Lexical is a recursive tree. Formatting is expressed as properties directly on text nodes, and marks are not separated from content:

// Lexical structure (Payload v3)
{
  "root": {
    "type": "root",
    "children": [
      {
        "type": "heading",
        "tag": "h2",
        "children": [
          { "type": "text", "text": "Hello ", "format": 0 },
          { "type": "text", "text": "world", "format": 1 }
        ]
      },
      {
        "type": "paragraph",
        "children": [
          { "type": "text", "text": "Click ", "format": 0 },
          {
            "type": "link",
            "url": "https://example.com",
            "children": [{ "type": "text", "text": "here", "format": 0 }]
          }
        ]
      }
    ]
  }
}

The format field on text nodes is a bitmask: 0 is plain, 1 is bold, 2 is italic, 3 is bold + italic. This is the key structural difference from Portable Text's array-of-marks approach.

Here is a working TypeScript converter that handles paragraphs, headings, bold, italic, links, and unordered lists. Custom block types (Sanity _type values that aren't block) are logged and skipped — you'll need to add handlers for any project-specific types:

// File: scripts/migrate/portable-text-to-lexical.ts

type PortableTextSpan = {
  _type: 'span'
  _key: string
  text: string
  marks: string[]
}

type PortableTextMarkDef = {
  _key: string
  _type: string
  href?: string
}

type PortableTextBlock = {
  _type: 'block'
  _key: string
  style: string
  children: PortableTextSpan[]
  markDefs: PortableTextMarkDef[]
  listItem?: 'bullet' | 'number'
  level?: number
}

type LexicalTextNode = {
  type: 'text'
  text: string
  format: number
  version: 1
}

type LexicalLinkNode = {
  type: 'link'
  url: string
  children: LexicalTextNode[]
  version: 1
}

type LexicalParagraphNode = {
  type: 'paragraph'
  children: (LexicalTextNode | LexicalLinkNode)[]
  version: 1
}

type LexicalHeadingNode = {
  type: 'heading'
  tag: 'h1' | 'h2' | 'h3' | 'h4' | 'h5' | 'h6'
  children: (LexicalTextNode | LexicalLinkNode)[]
  version: 1
}

type LexicalListItemNode = {
  type: 'listitem'
  children: LexicalTextNode[]
  version: 1
  value: number
}

type LexicalListNode = {
  type: 'list'
  listType: 'bullet' | 'number'
  children: LexicalListItemNode[]
  version: 1
  start: 1
  tag: 'ul' | 'ol'
}

type LexicalNode =
  | LexicalParagraphNode
  | LexicalHeadingNode
  | LexicalListNode

// Format bitmask values used by Lexical
const FORMAT = {
  PLAIN: 0,
  BOLD: 1,
  ITALIC: 2,
  BOLD_ITALIC: 3,
  UNDERLINE: 8,
  STRIKETHROUGH: 4,
  CODE: 16,
} as const

function getTextFormat(marks: string[]): number {
  let format = FORMAT.PLAIN
  if (marks.includes('strong')) format |= FORMAT.BOLD
  if (marks.includes('em')) format |= FORMAT.ITALIC
  if (marks.includes('underline')) format |= FORMAT.UNDERLINE
  if (marks.includes('strike-through')) format |= FORMAT.STRIKETHROUGH
  if (marks.includes('code')) format |= FORMAT.CODE
  return format
}

function convertSpans(
  spans: PortableTextSpan[],
  markDefs: PortableTextMarkDef[],
): (LexicalTextNode | LexicalLinkNode)[] {
  const nodes: (LexicalTextNode | LexicalLinkNode)[] = []

  for (const span of spans) {
    // Find any link mark in this span
    const linkMark = span.marks.find((mark) =>
      markDefs.some((def) => def._key === mark && def._type === 'link'),
    )

    if (linkMark) {
      const def = markDefs.find((d) => d._key === linkMark)
      const nonLinkMarks = span.marks.filter((m) => m !== linkMark)
      nodes.push({
        type: 'link',
        url: def?.href ?? '#',
        children: [
          {
            type: 'text',
            text: span.text,
            format: getTextFormat(nonLinkMarks),
            version: 1,
          },
        ],
        version: 1,
      })
    } else {
      nodes.push({
        type: 'text',
        text: span.text,
        format: getTextFormat(span.marks),
        version: 1,
      })
    }
  }

  return nodes
}

function convertBlock(block: PortableTextBlock): LexicalNode | null {
  const headingStyles = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6']

  // List items — caller is responsible for grouping these
  if (block.listItem === 'bullet' || block.listItem === 'number') {
    // Handled by grouping logic in the main converter
    return null
  }

  if (headingStyles.includes(block.style)) {
    return {
      type: 'heading',
      tag: block.style as 'h1' | 'h2' | 'h3' | 'h4' | 'h5' | 'h6',
      children: convertSpans(block.children, block.markDefs),
      version: 1,
    }
  }

  // Default: paragraph (includes style: 'normal', 'blockquote', etc.)
  return {
    type: 'paragraph',
    children: convertSpans(block.children, block.markDefs),
    version: 1,
  }
}

export function portableTextToLexical(
  portableText: PortableTextBlock[],
): object {
  const children: LexicalNode[] = []
  let i = 0

  while (i < portableText.length) {
    const block = portableText[i]

    if (block._type !== 'block') {
      // Custom block type — add a handler here for project-specific types
      console.warn(`Skipping unsupported block type: ${block._type}`)
      i++
      continue
    }

    // Group consecutive list items into a single list node
    if (block.listItem) {
      const listType = block.listItem
      const listItems: LexicalListItemNode[] = []
      let itemValue = 1

      while (
        i < portableText.length &&
        portableText[i]._type === 'block' &&
        portableText[i].listItem === listType
      ) {
        const listBlock = portableText[i] as PortableTextBlock
        listItems.push({
          type: 'listitem',
          children: convertSpans(listBlock.children, listBlock.markDefs).filter(
            (n): n is LexicalTextNode => n.type === 'text',
          ),
          version: 1,
          value: itemValue++,
        })
        i++
      }

      children.push({
        type: 'list',
        listType,
        children: listItems,
        version: 1,
        start: 1,
        tag: listType === 'bullet' ? 'ul' : 'ol',
      })
      continue
    }

    const converted = convertBlock(block)
    if (converted) children.push(converted)
    i++
  }

  return {
    root: {
      type: 'root',
      children,
      direction: 'ltr',
      format: '',
      indent: 0,
      version: 1,
    },
  }
}

To use this during import, call it on the body (or whatever your Portable Text field is named) of each exported document before writing to Payload:

// File: scripts/migrate/import-posts.ts
import fs from 'fs'
import { getPayloadClient } from '@/lib/payload'
import { portableTextToLexical } from './portable-text-to-lexical'

const payload = await getPayloadClient({ seed: false })
const lines = fs.readFileSync('./export/production.ndjson', 'utf-8').trim().split('\n')

for (const line of lines) {
  const doc = JSON.parse(line)
  if (doc._type !== 'post') continue

  const lexicalBody = portableTextToLexical(doc.body ?? [])

  await payload.create({
    collection: 'posts',
    data: {
      title: doc.title,
      slug: doc.slug?.current,
      published: doc.published ?? false,
      body: lexicalBody,
      // ... other fields
    },
  })
}

Test this converter on a representative sample of your content before running the full import. Portable Text is a generic spec, but every Sanity project tends to accumulate custom block types and marks over time. The console.warn on unknown types will tell you exactly what custom handlers you need to add.


Image and Asset Migration

The Sanity NDJSON export contains image references but not the image binaries themselves. A reference in the export looks like this:

{
  "mainImage": {
    "_type": "image",
    "asset": {
      "_type": "reference",
      "_ref": "image-abc123def456-1200x800-jpg"
    },
    "hotspot": { "x": 0.5, "y": 0.3, "width": 1, "height": 1 }
  }
}

The asset _ref encodes the full filename in the format image-{assetId}-{dimensions}-{format}. You can construct the Sanity CDN URL from this:

https://cdn.sanity.io/images/{projectId}/{dataset}/{assetId}-{dimensions}.{format}

For example, image-abc123def456-1200x800-jpg downloads from: https://cdn.sanity.io/images/yourProjectId/production/abc123def456-1200x800.jpg

The following script downloads each asset from Sanity's CDN and uploads it to Payload's upload collection. It also carries the hotspot over to Payload's focalPoint. Both express x and y as relative positions, but verify on a sample image whether your Payload version expects a 0–1 fraction or a percentage before running the bulk import:

// File: scripts/migrate/migrate-assets.ts
import fs from 'fs'
import path from 'path'
import { getPayloadClient } from '@/lib/payload'

const SANITY_PROJECT_ID = process.env.SANITY_PROJECT_ID!
const SANITY_DATASET = process.env.SANITY_DATASET ?? 'production'

// Parses a Sanity asset _ref into its components
function parseAssetRef(ref: string): { assetId: string; dimensions: string; format: string } | null {
  const match = ref.match(/^image-([a-f0-9]+)-(\d+x\d+)-(\w+)$/)
  if (!match) return null
  return { assetId: match[1], dimensions: match[2], format: match[3] }
}

function buildSanityCdnUrl(ref: string): string | null {
  const parsed = parseAssetRef(ref)
  if (!parsed) return null
  return `https://cdn.sanity.io/images/${SANITY_PROJECT_ID}/${SANITY_DATASET}/${parsed.assetId}-${parsed.dimensions}.${parsed.format}`
}

export async function migrateAssets(ndjsonPath: string): Promise<Map<string, string>> {
  // Returns a map of Sanity asset _ref → Payload media document ID
  const payload = await getPayloadClient({ seed: false })
  const idMap = new Map<string, string>()

  const lines = fs.readFileSync(ndjsonPath, 'utf-8').trim().split('\n')
  const assetDocs = lines
    .map((line) => JSON.parse(line))
    .filter((doc) => doc._type === 'sanity.imageAsset')

  for (const asset of assetDocs) {
    const cdnUrl = buildSanityCdnUrl(asset._id)
    if (!cdnUrl) {
      console.warn(`Could not parse asset ref: ${asset._id}`)
      continue
    }

    try {
      // Global fetch (Node 18+) — no node-fetch dependency needed
      const response = await fetch(cdnUrl)
      if (!response.ok) {
        console.error(`Failed to fetch asset ${asset._id}: ${response.status}`)
        continue
      }

      const buffer = Buffer.from(await response.arrayBuffer())
      const filename = path.basename(cdnUrl)
      const mimeType = `image/${asset.extension === 'jpg' ? 'jpeg' : asset.extension}`

      const created = await payload.create({
        collection: 'media',
        data: {
          alt: asset.altText ?? '',
          focalPoint: asset.hotspot
            ? { x: asset.hotspot.x, y: asset.hotspot.y }
            : { x: 0.5, y: 0.5 },
        },
        file: {
          data: buffer,
          mimetype: mimeType,
          name: filename,
          size: buffer.length,
        },
      })

      idMap.set(asset._id, created.id as string)
      console.log(`Migrated: ${asset._id} → ${created.id}`)
    } catch (err) {
      console.error(`Error migrating ${asset._id}:`, err)
    }
  }

  return idMap
}

Run migrateAssets first and save the returned ID map. You'll use it when importing content documents to resolve image references from Sanity _ref strings to Payload media IDs.


Rebuilding the Studio in Payload

The Sanity Studio is a separate React application with its own routing, component system, and configuration. When you move to Payload, that application goes away entirely. The admin UI is built into your Payload application and runs at /admin.

For content editors, the practical change is minimal — they get a new URL and a slightly different interface. For developers, the configuration shifts from Studio-specific APIs to Payload's TypeScript config.

Custom Input Components

Sanity lets you replace any field's input with a custom React component. Payload does the same through its admin.components system. The Payload CMS admin UI custom components guide covers this in detail, including the @payloadcms/ui component library you can use to match the native admin styling.

Desk Structure → Collection Groups

Sanity Studio's Structure Builder lets you organize documents into custom navigation trees with S.documentTypeList, S.list, and S.listItem. Payload handles navigation organization through collection admin.group config:

// File: src/collections/Posts.ts
import { CollectionConfig } from 'payload'

export const Posts: CollectionConfig = {
  slug: 'posts',
  admin: {
    group: 'Content', // Groups collections under a "Content" nav heading
    useAsTitle: 'title',
    defaultColumns: ['title', 'published', 'updatedAt'],
  },
  fields: [
    // ... your fields
  ],
}

The group property collapses multiple collections under a shared nav label in the sidebar, which approximates the Desk Structure grouping pattern. For more complex custom sidebar navigation, Payload's admin.components.Nav override lets you replace the navigation entirely.

Field-Level Validation and Hooks

Sanity's custom validation functions (rule => rule.required().custom(...)) map to Payload's validate function on each field:

// File: src/collections/Posts.ts
{
  name: 'slug',
  type: 'text',
  required: true,
  validate: (val: string | undefined) => {
    if (!val) return 'Slug is required'
    if (!/^[a-z0-9-]+$/.test(val)) return 'Slug must be lowercase alphanumeric with hyphens only'
    return true
  },
}

Sanity's initialValue maps to Payload's defaultValue. Sanity's document-level hooks (prepare, onChange) map to Payload's collection-level hooks (beforeChange, afterChange, beforeRead). The Payload CMS SDK and CLI toolkit covers the hooks API in more detail for scripting and automation use cases.
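As a sketch of that mapping, logic that might have lived in a Sanity onChange handler becomes a beforeChange hook. The publishedAt behavior below is illustrative, and the hook argument is trimmed to the one property used here (Payload passes more, such as req and operation):

```typescript
// Sketch: a collection beforeChange hook that stamps publishedAt the
// first time a post is published. Field names are illustrative.
type PostData = { published?: boolean; publishedAt?: string }

export const stampPublishedAt = ({ data }: { data: PostData }): PostData => {
  if (data.published && !data.publishedAt) {
    data.publishedAt = new Date().toISOString()
  }
  return data
}
```

In the collection config this would be registered as hooks: { beforeChange: [stampPublishedAt] }.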


Full Migration Checklist

Before you run the import scripts, run through this sequence:

  1. Export dataset: sanity dataset export production ./export --token YOUR_TOKEN
  2. Extract the archive and inspect the NDJSON to confirm field names match your schema expectations
  3. Create Payload collections that mirror each Sanity _type you're migrating
  4. Run the asset migration script and save the ID map
  5. Import documents in dependency order (no references first, then references)
  6. Resolve Sanity _ref strings to Payload IDs using your saved maps
  7. Verify Portable Text conversion on a sample of rich text documents before bulk import
  8. Test admin UI access and confirm editors can find their content

FAQ

Can I run Payload and Sanity in parallel during the migration?

Yes, and it's the approach I'd recommend for any project where content is actively being edited. Migrate the historical data first, then set up a brief freeze window where editors stop publishing in Sanity while you run the final import and cut over the frontend. Trying to sync live edits across both systems simultaneously adds significant complexity.

What happens to Sanity's _id field in Payload?

Sanity documents use string IDs. Payload (on Postgres or MongoDB) uses its own ID scheme, so you cannot preserve Sanity IDs in Payload directly. The migration script needs to maintain a map of old Sanity _id → new Payload ID and resolve all _ref fields after the initial import.

Do I need to handle _rev and _createdAt fields from the export?

_rev is Sanity's internal revision field — you can discard it. _createdAt and _updatedAt can be mapped to Payload's createdAt and updatedAt fields if you need to preserve original timestamps. Pass them in the data object when creating documents via the Local API.
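A sketch of that timestamp carry-over, applied to each document's data before calling payload.create:

```typescript
// Sketch: carry Sanity's _createdAt/_updatedAt into Payload's createdAt/
// updatedAt fields. The result is spread into the data object passed to
// payload.create.
export function withSanityTimestamps<T extends object>(
  data: T,
  doc: { _createdAt?: string; _updatedAt?: string },
): T & { createdAt?: string; updatedAt?: string } {
  return {
    ...data,
    ...(doc._createdAt ? { createdAt: doc._createdAt } : {}),
    ...(doc._updatedAt ? { updatedAt: doc._updatedAt } : {}),
  }
}
```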

How do I handle Sanity's array of block types that mix standard blocks and custom types?

The converter above logs and skips unknown _type values. For each custom block type in your project (image galleries, callouts, video embeds, etc.), add a branch to portableTextToLexical that converts them to the appropriate Lexical node structure. If there's no direct Lexical equivalent, a custom Lexical node defined in your Payload Lexical config is the cleanest path.
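As one example, a Sanity image block can become a Lexical upload node. This is a hedged sketch: the node shape and the 'media' slug are assumptions to verify against your Lexical config, and mediaIdMap is the map returned by the asset migration step:

```typescript
// Sketch: convert a custom Sanity 'image' block into a Lexical upload
// node. The node shape and 'media' collection slug are assumptions;
// check them against your configured Lexical editor.
type SanityImageBlock = { _type: 'image'; asset?: { _ref: string } }

export function convertImageBlock(
  block: SanityImageBlock,
  mediaIdMap: Map<string, string>,
): { type: 'upload'; relationTo: string; value: string; version: 1 } | null {
  const mediaId = block.asset ? mediaIdMap.get(block.asset._ref) : undefined
  if (!mediaId) return null
  return { type: 'upload', relationTo: 'media', value: mediaId, version: 1 }
}
```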

Is Payload's Local API safe to use in a long-running migration script?

Yes. The Local API bypasses HTTP entirely and runs in the same Node process as Payload. For large datasets, add a small delay between creates (await new Promise(r => setTimeout(r, 50))) to avoid overwhelming your database connection pool, and run the script with a reasonable --max-old-space-size Node flag if you're processing thousands of documents.


Conclusion

Moving from Sanity to Payload is a structured process once you understand the five pieces: exporting the dataset, mapping schemas, replacing GROQ queries, converting Portable Text to Lexical, and migrating assets. The GROQ replacement is the most conceptual shift, but Payload's query API is fully typed and composable in ways GROQ isn't. The Portable Text converter is the most technically involved piece, and the one most likely to need customization for your specific project's block types.

The /payload-cms-migration hub has more context on migration paths from other CMSs if you're coordinating a larger platform consolidation.

Let me know in the comments if you run into edge cases with your specific Portable Text schema, and subscribe for more practical Payload guides.

Thanks, Matija
