---
title: "Payload CMS Concurrency Keys: Prevent Race Conditions"
slug: "payload-cms-concurrency-keys"
published: "2026-04-17"
updated: "2026-04-06"
categories:
  - "Payload"
tags:
  - "Payload CMS concurrency keys"
  - "concurrency control"
  - "concurrency key"
  - "exclusive mode"
  - "supersedes mode"
  - "Payload jobs"
  - "job serialization"
  - "queue-scoped keys"
  - "multi-tenant import"
  - "jobs migration"
llm-intent: "reference"
audience-level: "intermediate"
framework-versions:
  - "payload cms"
  - "typescript"
  - "postgresql"
  - "drizzle"
  - "node.js"
status: "stable"
llm-purpose: "Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…"
llm-prereqs:
  - "Access to Payload CMS"
  - "Access to TypeScript"
  - "Access to PostgreSQL"
  - "Access to Drizzle"
  - "Access to Node.js"
llm-outputs:
  - "Completed outcome: Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…"
---

**Summary Triples**
- (Concurrency key, serializes, jobs that share the same key to prevent parallel execution on the same resource across multiple workers)
- (Enabling concurrency keys, requires, flipping the Payload Jobs concurrency config flag and running a migration to update the job storage schema)
- (exclusive mode, behavior, ensures only one job with the same concurrency key runs at a time; other jobs with that key wait/queue until the running job finishes)
- (supersedes mode, behavior, ensures newer jobs with the same key replace older queued jobs so only the latest queued job will run)
- (Multi-tenant scoping, recommendation, include the tenant identifier in the concurrency key (or use queue-scoped keys) so jobs serialize per-tenant but remain parallel across tenants)
- (Common failure mode before keys, resulted in, duplicate writes and race conditions when multiple workers processed resource-related jobs concurrently)
- (Testing and rollout, required steps, stop workers, apply migration, enable concurrency config, restart workers, and validate with multi-worker staging tests)

### {GOAL}
Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…

### {PREREQS}
- Access to Payload CMS
- Access to TypeScript
- Access to PostgreSQL
- Access to Drizzle
- Access to Node.js

### {STEPS}
1. Enable concurrency control flag
2. Run database migration
3. Define a concurrency key function
4. Choose exclusive or supersedes
5. Scope keys by queue when needed
6. Test with multiple workers
7. Deploy and monitor

<!-- llm:goal="Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…" -->
<!-- llm:prereq="Access to Payload CMS" -->
<!-- llm:prereq="Access to TypeScript" -->
<!-- llm:prereq="Access to PostgreSQL" -->
<!-- llm:prereq="Access to Drizzle" -->
<!-- llm:prereq="Access to Node.js" -->
<!-- llm:output="Completed outcome: Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…" -->

# Payload CMS Concurrency Keys: Prevent Race Conditions
> Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…

Matija Žiberna · 2026-04-17

# Payload CMS Concurrency Keys: Preventing Race Conditions Across Multiple Workers

When you run multiple Payload workers polling the same queue, they can pick up and execute different jobs in parallel — that is the whole point of multiple workers. The problem appears when two workers pick up two jobs that both touch the same resource at the same time. A concurrency key is the mechanism Payload gives you to prevent that. It serializes jobs that share a key while letting jobs for different resources keep running in parallel. This article covers how concurrency keys work, when to use them, and the two modes — `exclusive` and `supersedes` — that control what happens to jobs waiting in line.

In [Payload CMS Jobs: Separate Web & Worker Roles for Safe Scale](/blog/payload-cms-jobs-separate-web-worker-roles-safe-scale), I covered why separating your web and worker runtimes is the right architecture for handling background work at scale. That article introduced concurrency keys briefly — what they are and why they matter with multiple workers. This article goes into the implementation: the config flag you need to enable, the migration it requires, how `exclusive` and `supersedes` modes behave, and how to scope keys correctly in a multi-tenant setup.

I built a multi-tenant import pipeline where each tenant had a dedicated import queue and a shared worker pool. Everything worked in development with a single worker. In staging with three workers, imports for the same tenant were occasionally running in parallel, and rows were being written twice. The fix was a concurrency key scoped to the tenant ID, which forced same-tenant jobs to serialize while leaving cross-tenant jobs fully parallel.

## Why multiple workers create this problem

A single worker processes jobs one at a time by definition. Race conditions cannot happen because there is no parallelism. When you add a second worker, both workers poll the queue independently. Payload does not automatically know that job A and job B are related — it just sees two pending jobs and assigns each to an available worker.

If both jobs touch the same document, the same tenant's database rows, or the same external API account, they are now running in parallel against shared state. Depending on what they do, you get silent data corruption, duplicate writes, or conflicting updates where the last writer wins and earlier work is lost.
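The duplicate-write case is easy to reproduce without any framework at all. Here is a minimal plain-TypeScript sketch of the check-then-insert race, with the interleaving written out by hand — the names (`table`, `exists`, `tenantId`) are illustrative, not Payload APIs:

```typescript
// Illustration of the check-then-insert race between two workers.
// Both existence checks run before either insert happens.
type Row = { tenantId: string; externalId: string }

const table: Row[] = []

const exists = (row: Row): boolean =>
  table.some(
    (r) => r.tenantId === row.tenantId && r.externalId === row.externalId,
  )

const row: Row = { tenantId: 'tenant-123', externalId: 'sku-1' }

// Interleaved by hand: with parallel workers, both checks can
// complete before either worker writes.
const workerASawIt = exists(row) // false
const workerBSawIt = exists(row) // false
if (!workerASawIt) table.push(row) // worker A inserts
if (!workerBSawIt) table.push({ ...row }) // worker B inserts the same row

console.log(table.length) // 2 — a duplicate write
```

With a single worker the two checks can never interleave, which is why this class of bug only shows up once you scale past one worker.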

Concurrency keys solve this at the job coordination layer, before execution starts.

## Enabling concurrency controls

Concurrency control is opt-in and requires a config flag and a database migration:

```typescript
// File: payload.config.ts
import { buildConfig } from 'payload'

export default buildConfig({
  jobs: {
    enableConcurrencyControl: true,
    tasks: [...],
  },
})
```

Setting `enableConcurrencyControl: true` adds an indexed `concurrencyKey` field to the `payload-jobs` collection schema. If you have an existing jobs collection, Payload will require a migration to add this field. Run your migrations before deploying workers that rely on concurrency keys — a worker running without the field indexed will not enforce concurrency correctly.

## How a concurrency key works

A concurrency key is a string you derive from a job's input. Jobs that produce the same key are guaranteed to run one at a time. Jobs that produce different keys are not affected by each other and can still run in parallel.

When you queue a job, Payload computes the key from the job's input and stores it on the job document. When a worker polls for pending jobs, Payload excludes any job whose key is already being processed by another worker. If the same batch of pending jobs contains multiple jobs with identical keys — which can happen when the queue is backed up — only the first one by creation order runs. The rest are released back to `processing: false` and picked up on the next poll.

The result is that same-key jobs are serialized across the entire worker pool, regardless of how many workers are running.
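The selection behavior described above can be modeled in a few lines of framework-free TypeScript. This is my own simplified model of the rule — skip keys that are already running, and take only the oldest job per key within a poll batch — not Payload's actual query:

```typescript
// Simplified model of concurrency-key enforcement during a worker poll.
type Job = { id: number; key: string; createdAt: number }

function selectRunnable(pending: Job[], runningKeys: Set<string>): Job[] {
  const claimed = new Set(runningKeys)
  return [...pending]
    .sort((a, b) => a.createdAt - b.createdAt)
    .filter((job) => {
      if (claimed.has(job.key)) return false // key busy: wait for next poll
      claimed.add(job.key) // oldest job wins the key for this batch
      return true
    })
}

const pending: Job[] = [
  { id: 1, key: 'import:tenant-a', createdAt: 1 },
  { id: 2, key: 'import:tenant-a', createdAt: 2 }, // same key, must wait
  { id: 3, key: 'import:tenant-b', createdAt: 3 },
]

// 'import:tenant-c' is mid-run on another worker and blocks nothing here.
const runnable = selectRunnable(pending, new Set(['import:tenant-c']))
console.log(runnable.map((j) => j.id)) // [1, 3]
```

Job 2 is not dropped — it simply stays pending and becomes runnable on a later poll, once job 1 has finished and released the `import:tenant-a` key.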

## Defining a concurrency key

The `concurrency` option goes on your task or workflow definition, not on the individual `queue` call:

```typescript
// File: src/jobs/tasks/run-import.ts
import { TaskConfig } from 'payload'

export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  inputSchema: [
    { name: 'tenantId', type: 'text', required: true },
    { name: 'importRunId', type: 'text', required: true },
  ],
  concurrency: ({ input }) => `import:${input.tenantId}`,
  handler: async ({ req, input }) => {
    // This runs exclusively per tenantId.
    // Two workers will never process runImport jobs
    // for the same tenant at the same time.
  },
}
```

The function receives the job's input and returns a string. The string is the key. For a multi-tenant import, `import:${input.tenantId}` means all import jobs for tenant A serialize, all import jobs for tenant B serialize, and tenant A and tenant B imports can run in parallel across different workers.

For more granular control — one import at a time per tenant per external source — extend the key:

```typescript
concurrency: ({ input }) => `import:${input.tenantId}:${input.sourceId}`,
```

## Exclusive and supersedes

The shorthand function syntax sets `exclusive: true` and `supersedes: false` by default. The full configuration object exposes both options:

```typescript
concurrency: {
  key: ({ input }) => `import:${input.tenantId}`,
  exclusive: true,
  supersedes: false,
}
```

**`exclusive: true`** means jobs with the same key run one at a time. All jobs are preserved and will eventually execute — they just wait their turn. This is the right mode for imports, syncs, or any operation where every job represents real work that must complete.

**`supersedes: true`** means that when a new job is queued with the same key, Payload deletes any older pending (not yet running) jobs with that key. Only the newest job runs. Jobs that are already running complete normally.

```typescript
concurrency: {
  key: ({ input }) => `generate-embeddings:${input.documentId}`,
  exclusive: true,
  supersedes: true,
}
```

The supersedes pattern fits regeneration jobs — embeddings, thumbnails, search index updates — where intermediate states are irrelevant and only the latest version of the document needs to be processed. If a document is edited five times in quick succession and five regeneration jobs are queued, you only need the last one to run. The four intermediate jobs represent work that would be immediately overwritten anyway.
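The last-state-wins behavior is easy to model in plain TypeScript. This is an illustrative sketch of the semantics, not Payload internals — queuing a job removes older pending jobs that share its key, while running jobs are left alone:

```typescript
// Illustrative model of supersedes semantics.
type QueuedJob = { id: number; key: string; running: boolean }

const jobs: QueuedJob[] = []
let nextId = 1

function enqueueSuperseding(key: string): QueuedJob {
  // Delete pending (not running) jobs with the same key.
  for (let i = jobs.length - 1; i >= 0; i--) {
    if (jobs[i].key === key && !jobs[i].running) jobs.splice(i, 1)
  }
  const job = { id: nextId++, key, running: false }
  jobs.push(job)
  return job
}

// A document edited five times in quick succession queues five jobs:
for (let i = 0; i < 5; i++) {
  enqueueSuperseding('generate-embeddings:doc-42')
}

console.log(jobs.length) // 1 — only the newest pending job survives
console.log(jobs[0].id) // 5
```

The four intermediate jobs never run, which is exactly the point: each one would have computed embeddings that the next edit immediately invalidated.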

Here is what the two modes look like in practice:

| Mode | All jobs run | Parallel blocked | Use case |
|---|---|---|---|
| `exclusive: true, supersedes: false` | Yes | Yes | Imports, syncs, ordered processing |
| `exclusive: true, supersedes: true` | No — older pending jobs deleted | Yes | Regeneration, refresh, last-state-wins |

## Queue-scoped keys

By default, concurrency is global across all queues. A job with key `import:tenant-123` in the `imports` queue blocks a job with the same key in the `default` queue. If you want concurrency to be queue-specific — so the same tenant can have one import per queue without them blocking each other — include the queue name in the key:

```typescript
concurrency: {
  key: ({ input, queue }) => `${queue}:import:${input.tenantId}`,
}
```

The `queue` argument is passed into the key function alongside `input`. Including it scopes the key to the queue, so a job in `imports` and a job in `priority-imports` for the same tenant no longer block each other.

## What a concurrency key does not do

A concurrency key serializes jobs that share a key. It does not limit total throughput or cap how many jobs run across the system. If you have 1,000 tenants and 1,000 workers, all 1,000 imports can run simultaneously — one per tenant, which is exactly right. The key scopes the restriction to the resource dimension you care about.

If you need a global throughput limit — for example, to protect a database that cannot handle more than five concurrent writes regardless of tenant — combine concurrency keys with worker count limits and queue design. A concurrency key alone is not a global rate limiter.

Also note that supersedes only removes pending jobs, never running ones. If a job is mid-execution when a new one is queued with the same key and `supersedes: true`, the running job completes normally and the new one waits for it.

## Full example: multi-tenant import with concurrency

```typescript
// File: src/jobs/tasks/run-import.ts
import { TaskConfig } from 'payload'

export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  inputSchema: [
    { name: 'tenantId', type: 'text', required: true },
    { name: 'importRunId', type: 'text', required: true },
  ],
  concurrency: {
    key: ({ input }) => `import:${input.tenantId}`,
    exclusive: true,
    supersedes: false, // every import run must complete
  },
  handler: async ({ req, input }) => {
    const { tenantId, importRunId } = input

    await req.payload.update({
      collection: 'import-runs',
      id: importRunId,
      data: { status: 'running' },
      req,
    })

    try {
      // batched transactional import logic here
      // (see the previous article on payload.db.* and Drizzle)

      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: { status: 'completed' },
        req,
      })
    } catch (err) {
      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: {
          status: 'failed',
          errorLog: err instanceof Error ? err.message : String(err),
        },
        req,
      })

      throw err
    }
  },
}
```

With this configuration, five workers running simultaneously will each pick up import jobs for different tenants. If two jobs for the same tenant are in the queue, one runs and the other waits — regardless of which worker picked them up.

## FAQ

**Do I need a migration to enable concurrency keys on an existing project?**

Yes. Setting `enableConcurrencyControl: true` adds a `concurrencyKey` field to the jobs collection schema. If you have an existing `payload-jobs` table in PostgreSQL, Payload will require a migration to add and index this column. Run the migration before deploying workers that depend on concurrency enforcement.

**Can I set different concurrency configs on different tasks in the same workflow?**

Concurrency is set at the task or workflow level, not on individual inline tasks within a workflow. For fine-grained control inside a workflow, split the logic into separate tasks with their own concurrency keys and chain them.

**Does supersedes delete jobs that are currently running?**

No. Only pending jobs (not yet running) are deleted when a newer job with the same key is queued and `supersedes: true`. A running job always completes normally. The new job then waits for the running job to finish before starting.

**What happens if the key function throws an error?**

If the key function throws at queue time, the outcome depends on Payload's error handling: it may surface as a queuing error, or the job may be queued without a key and silently lose concurrency enforcement. Keep key functions simple: string interpolation from validated input fields, not logic that can fail.

**How do concurrency keys interact with the retry system?**

When a job fails and is retried, it is re-queued with the same concurrency key. The retry job goes through the same concurrency enforcement as any new job. If another job with the same key is already running, the retry waits its turn.

## Conclusion

Concurrency keys are the right tool when multiple workers can pick up jobs that touch the same resource and parallel execution of those jobs would cause race conditions, duplicate writes, or conflicting updates. The `exclusive` mode guarantees sequential execution for all jobs sharing a key. The `supersedes` mode additionally clears the backlog and runs only the newest job — useful when intermediate states are irrelevant and only the latest matters. Including the queue name in the key makes concurrency queue-scoped rather than global.

For multi-worker Payload deployments, concurrency keys are the difference between "imports occasionally corrupt each other" and "same-resource jobs always serialize, everything else runs in parallel."

Let me know in the comments if you have questions, and subscribe for more practical development guides.

Thanks,
Matija

## LLM Response Snippet
```json
{
  "goal": "Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…",
  "responses": [
    {
      "question": "What does the article \"Payload CMS Concurrency Keys: Prevent Race Conditions\" cover?",
      "answer": "Payload CMS concurrency keys serialize same-resource jobs across workers. Get setup steps, migration notes, and when to use exclusive vs supersedes. Read…"
    }
  ]
}
```