
Payload CMS Concurrency Keys: Prevent Race Conditions

How to enable concurrency control, use exclusive vs. supersedes modes, and scope keys for multi-tenant Payload jobs.

28th March 2026 · Updated on: 25th March 2026 · Matija Žiberna

Payload CMS Concurrency Keys: Preventing Race Conditions Across Multiple Workers

When you run multiple Payload workers polling the same queue, they can pick up and execute different jobs in parallel — that is the whole point of multiple workers. The problem appears when two workers pick up two jobs that both touch the same resource at the same time. A concurrency key is the mechanism Payload gives you to prevent that. It serializes jobs that share a key while letting jobs for different resources keep running in parallel. This article covers how concurrency keys work, when to use them, and the two modes — exclusive and supersedes — that control what happens to jobs waiting in line.

In Payload CMS Jobs: Separate Web & Worker Roles for Safe Scale, I covered why separating your web and worker runtimes is the right architecture for handling background work at scale. That article introduced concurrency keys briefly — what they are and why they matter with multiple workers. This article goes into the implementation: the config flag you need to enable, the migration it requires, how exclusive and supersedes modes behave, and how to scope keys correctly in a multi-tenant setup.

I built a multi-tenant import pipeline where each tenant had a dedicated import queue and a shared worker pool. Everything worked in development with a single worker. In staging with three workers, imports for the same tenant were occasionally running in parallel, and rows were being written twice. The fix was a concurrency key scoped to the tenant ID, which forced same-tenant jobs to serialize while leaving cross-tenant jobs fully parallel.

Why multiple workers create this problem

A single worker processes jobs one at a time by definition. Race conditions cannot happen because there is no parallelism. When you add a second worker, both workers poll the queue independently. Payload does not automatically know that job A and job B are related — it just sees two pending jobs and assigns each to an available worker.

If both jobs touch the same document, the same tenant's database rows, or the same external API account, they are now running in parallel against shared state. Depending on what they do, you get silent data corruption, duplicate writes, or conflicting updates where the last writer wins and earlier work is lost.

Concurrency keys solve this at the job coordination layer, before execution starts.
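To make the failure mode concrete, here is a minimal sketch — not Payload code, just plain TypeScript simulating two workers doing an uncoordinated read-modify-write on the same record:

```typescript
// Hypothetical illustration: two "workers" each read a counter,
// do some I/O, then write back. With no coordination, both read the
// same starting value and one increment is silently lost.
const db = { rowCount: 0 }

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

async function importRows(): Promise<void> {
  const seen = db.rowCount // both workers read the same starting value
  await delay(10)          // simulate I/O between the read and the write
  db.rowCount = seen + 1   // last writer wins; the other increment is lost
}

async function main(): Promise<void> {
  // Both jobs run in parallel, exactly like two workers polling one queue.
  await Promise.all([importRows(), importRows()])
  // db.rowCount is now 1, not 2: one write was silently lost.
}
```

Running main() leaves rowCount at 1 instead of 2. A concurrency key on the shared record forces the second job to wait until the first has written.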

Enabling concurrency controls

Concurrency control is opt-in and requires a config flag and a database migration:

// File: payload.config.ts
import { buildConfig } from 'payload'

export default buildConfig({
  jobs: {
    enableConcurrencyControl: true,
    tasks: [...],
  },
})

Setting enableConcurrencyControl: true adds an indexed concurrencyKey field to the payload-jobs collection schema. If you have an existing jobs collection, Payload will require a migration to add this field. Run your migrations before deploying workers that rely on concurrency keys — a worker running without the field indexed will not enforce concurrency correctly.

How a concurrency key works

A concurrency key is a string you derive from a job's input. Jobs that produce the same key are guaranteed to run one at a time. Jobs that produce different keys are not affected by each other and can still run in parallel.

When you queue a job, Payload computes the key from the job's input and stores it on the job document. When a worker polls for pending jobs, Payload excludes any job whose key is already being processed by another worker. If the same batch of pending jobs contains multiple jobs with identical keys — which can happen when the queue is backed up — only the first one by creation order runs. The rest are released back to a pending state (processing: false) and picked up on the next poll.

The result is that same-key jobs are serialized across the entire worker pool, regardless of how many workers are running.
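As a queuing-side sketch, assume the runImport task defined in the next section and two import runs for the same tenant (the IDs are made up). Both jobs derive the same key, so they serialize no matter how many workers poll:

```typescript
// Hypothetical example using Payload's standard jobs.queue API.
// Both jobs derive the key `import:tenant-123` from their input,
// so the second will not start until the first has finished.
await payload.jobs.queue({
  task: 'runImport',
  input: { tenantId: 'tenant-123', importRunId: 'run-1' },
})

await payload.jobs.queue({
  task: 'runImport',
  input: { tenantId: 'tenant-123', importRunId: 'run-2' },
})

// A job for a different tenant derives a different key
// (`import:tenant-456`) and can run in parallel with the two above.
await payload.jobs.queue({
  task: 'runImport',
  input: { tenantId: 'tenant-456', importRunId: 'run-3' },
})
```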

Defining a concurrency key

The concurrency option goes on your task or workflow definition, not on the individual queue call:

// File: src/jobs/tasks/run-import.ts
import { TaskConfig } from 'payload'

export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  inputSchema: [
    { name: 'tenantId', type: 'text', required: true },
    { name: 'importRunId', type: 'text', required: true },
  ],
  concurrency: ({ input }) => `import:${input.tenantId}`,
  handler: async ({ req, input }) => {
    // This runs exclusively per tenantId.
    // Two workers will never process runImport jobs
    // for the same tenant at the same time.
  },
}

The function receives the job's input and returns a string. The string is the key. For a multi-tenant import, import:${input.tenantId} means all import jobs for tenant A serialize, all import jobs for tenant B serialize, and tenant A and tenant B imports can run in parallel across different workers.

For more granular control — one import at a time per tenant per external source — extend the key:

concurrency: ({ input }) => `import:${input.tenantId}:${input.sourceId}`,

Exclusive and supersedes

The shorthand function syntax sets exclusive: true and supersedes: false by default. The full configuration object exposes both options:

concurrency: {
  key: ({ input }) => `import:${input.tenantId}`,
  exclusive: true,
  supersedes: false,
}

exclusive: true means jobs with the same key run one at a time. All jobs are preserved and will eventually execute — they just wait their turn. This is the right mode for imports, syncs, or any operation where every job represents real work that must complete.

supersedes: true means that when a new job is queued with the same key, Payload deletes any older pending (not yet running) jobs with that key. Only the newest job runs. Jobs that are already running complete normally.

concurrency: {
  key: ({ input }) => `generate-embeddings:${input.documentId}`,
  exclusive: true,
  supersedes: true,
}

The supersedes pattern fits regeneration jobs — embeddings, thumbnails, search index updates — where intermediate states are irrelevant and only the latest version of the document needs to be processed. If a document is edited five times in quick succession and five regeneration jobs are queued, you only need the last one to run. The four intermediate jobs represent work that would be immediately overwritten anyway.

Here is what the two modes look like in practice:

Mode                                 | All jobs eventually run         | Same-key parallelism blocked | Use case
exclusive: true, supersedes: false   | Yes                             | Yes                          | Imports, syncs, ordered processing
exclusive: true, supersedes: true    | No (older pending jobs deleted) | Yes                          | Regeneration, refresh, last-state-wins
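To show where the supersedes jobs come from, here is one way to wire the regeneration pattern from a collection hook. This is a sketch, not a prescribed setup: the documents collection and the generateEmbeddings task slug are hypothetical, while afterChange and req.payload.jobs.queue are standard Payload APIs.

```typescript
// File: src/collections/Documents.ts (hypothetical wiring sketch)
import { CollectionConfig } from 'payload'

export const Documents: CollectionConfig = {
  slug: 'documents',
  fields: [{ name: 'content', type: 'textarea' }],
  hooks: {
    afterChange: [
      async ({ doc, req }) => {
        // Each save queues a regeneration job. With supersedes: true on
        // the task's concurrency config, five rapid edits collapse to
        // one run: older pending jobs for this documentId are deleted.
        await req.payload.jobs.queue({
          task: 'generateEmbeddings',
          input: { documentId: doc.id },
        })
      },
    ],
  },
}
```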

Queue-scoped keys

By default, concurrency is global across all queues. A job with key import:tenant-123 in the imports queue blocks a job with the same key in the default queue. If you want concurrency to be queue-specific — so the same tenant can have one import per queue without them blocking each other — include the queue name in the key:

concurrency: {
  key: ({ input, queue }) => `${queue}:import:${input.tenantId}`,
}

The queue argument is passed into the key function alongside input. Including it scopes the key to the queue, so a job in imports and a job in priority-imports for the same tenant no longer block each other.

What a concurrency key does not do

A concurrency key serializes jobs that share a key. It does not limit total throughput or cap how many jobs run across the system. If you have 1,000 tenants and 1,000 workers, all 1,000 imports can run simultaneously — one per tenant, which is exactly right. The key scopes the restriction to the resource dimension you care about.

If you need a global throughput limit — for example, to protect a database that cannot handle more than five concurrent writes regardless of tenant — combine concurrency keys with worker count limits and queue design. A concurrency key alone is not a global rate limiter.

Also note that supersedes only removes pending jobs, never running ones. If a job is mid-execution when a new one is queued with the same key and supersedes: true, the running job completes normally and the new one waits for it.

Full example: multi-tenant import with concurrency

// File: src/jobs/tasks/run-import.ts
import { TaskConfig } from 'payload'

export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  inputSchema: [
    { name: 'tenantId', type: 'text', required: true },
    { name: 'importRunId', type: 'text', required: true },
  ],
  concurrency: {
    key: ({ input }) => `import:${input.tenantId}`,
    exclusive: true,
    supersedes: false, // every import run must complete
  },
  handler: async ({ req, input }) => {
    const { tenantId, importRunId } = input

    await req.payload.update({
      collection: 'import-runs',
      id: importRunId,
      data: { status: 'running' },
      req,
    })

    try {
      // batched transactional import logic here
      // (see the previous article on payload.db.* and Drizzle)

      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: { status: 'completed' },
        req,
      })
    } catch (err) {
      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: {
          status: 'failed',
          errorLog: err instanceof Error ? err.message : String(err),
        },
        req,
      })

      throw err
    }
  },
}

With this configuration, five workers running simultaneously will each pick up import jobs for different tenants. If two jobs for the same tenant are in the queue, one runs and the other waits — regardless of which worker picked them up.

FAQ

Do I need a migration to enable concurrency keys on an existing project?

Yes. Setting enableConcurrencyControl: true adds a concurrencyKey field to the jobs collection schema. If you have an existing payload-jobs table in PostgreSQL, Payload will require a migration to add and index this column. Run the migration before deploying workers that depend on concurrency enforcement.

Can I set different concurrency configs on different tasks in the same workflow?

Concurrency is set at the task or workflow level, not on individual inline tasks within a workflow. For fine-grained control inside a workflow, split the logic into separate tasks with their own concurrency keys and chain them.

Does supersedes delete jobs that are currently running?

No. Only pending jobs (not yet running) are deleted when a newer job with the same key is queued and supersedes: true. A running job always completes normally. The new job then waits for the running job to finish before starting.

What happens if the key function throws an error?

If the key function throws, the job will not be queued with a concurrency key. Depending on Payload's error handling at queue time, this may surface as a queuing error or silently fall back to no concurrency enforcement. Keep key functions simple — string interpolation from validated input fields rather than logic that can fail.
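A minimal sketch of that advice (the type and names are hypothetical): mark the fields the key depends on as required in inputSchema, then keep the key function a pure template string with no failure path.

```typescript
// With tenantId declared required in the task's inputSchema, the key
// function can stay pure string interpolation that cannot throw.
type RunImportInput = { tenantId: string; importRunId: string }

const importKey = ({ input }: { input: RunImportInput }): string =>
  `import:${input.tenantId}`

console.log(importKey({ input: { tenantId: 'tenant-123', importRunId: 'run-1' } }))
// → import:tenant-123
```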

How do concurrency keys interact with the retry system?

When a job fails and is retried, it is re-queued with the same concurrency key. The retry job goes through the same concurrency enforcement as any new job. If another job with the same key is already running, the retry waits its turn.

Conclusion

Concurrency keys are the right tool when multiple workers can pick up jobs that touch the same resource and parallel execution of those jobs would cause race conditions, duplicate writes, or conflicting updates. The exclusive mode guarantees sequential execution for all jobs sharing a key. The supersedes mode additionally clears the backlog and runs only the newest job — useful when intermediate states are irrelevant and only the latest matters. Including the queue name in the key makes concurrency queue-scoped rather than global.

For multi-worker Payload deployments, concurrency keys are the difference between "imports occasionally corrupt each other" and "same-resource jobs always serialize, everything else runs in parallel."

Let me know in the comments if you have questions, and subscribe for more practical development guides.

Thanks, Matija
