---
title: "Payload Jobs Queue on Vercel: Complete Production Setup"
slug: "payload-jobs-queue-vercel-complete-production-setup-2026"
published: "2026-02-11"
updated: "2026-04-06"
validated: "2026-02-15"
categories:
  - "Payload"
tags:
  - "Payload Jobs Queue"
  - "Vercel Cron"
  - "afterChange hook"
  - "payload hooks"
  - "CRON_SECRET"
  - "task concurrency"
  - "idempotency"
  - "supersedes option"
  - "waitUntil scheduling"
  - "payload-jobs"
  - "job retries"
llm-intent: "reference"
audience-level: "intermediate"
framework-versions:
  - "payload@3.70+"
  - "payload@3.76"
  - "node@20"
  - "vercel@platform"
  - "vercel-cron@stable"
status: "stable"
llm-purpose: "Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…"
llm-prereqs:
  - "Access to Payload CMS"
  - "Access to Vercel"
  - "Access to Vercel Cron"
  - "Access to Node.js"
  - "Access to vercel.json"
llm-outputs:
  - "Completed outcome: Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…"
---

**Summary Triples**
- (afterChange hook, should-enqueue, a job to payload-jobs (fast DB insert) instead of performing long-running work inline)
- (/api/payload-jobs/run, is-used-by, Vercel Cron to execute pending jobs on serverless deployments)
- (CRON_SECRET, must-be-validated, by the /api/payload-jobs/run endpoint to prevent unauthorized job execution)
- (Jobs, are-stored-in, the payload-jobs collection (including status, attempts, logs, scheduled timestamps))
- (Task hardening, uses, retries, backoff, concurrency limits, idempotency, supersedes, and waitUntil scheduling)
- (Vercel Cron, triggers, GET requests to /api/payload-jobs/run on a schedule; Vercel attaches CRON_SECRET as a Bearer token in the Authorization header)
- (supersedes option, prevents, duplicate or obsolete pending jobs by replacing them when a new job supersedes an older one)
- (waitUntil scheduling, allows, delayed execution by setting a future timestamp when creating a job)
- (Concurrency controls (v3.76), reduce, overlap and contention by limiting how many task handlers run simultaneously)
- (Non-blocking hooks (fire-and-forget), are-not-durable, on serverless because work can be interrupted; use Jobs Queue for durability)

### {GOAL}
Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…

### {PREREQS}
- Access to Payload CMS
- Access to Vercel
- Access to Vercel Cron
- Access to Node.js
- Access to vercel.json

### {STEPS}
1. Define a durable Task handler
2. Enqueue jobs from afterChange hooks
3. Add Vercel Cron to run jobs
4. Secure the run endpoint with CRON_SECRET
5. Harden tasks with retries and idempotency
6. Control concurrency and use supersedes
7. Monitor payload-jobs and alerts

<!-- llm:goal="Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…" -->
<!-- llm:prereq="Access to Payload CMS" -->
<!-- llm:prereq="Access to Vercel" -->
<!-- llm:prereq="Access to Vercel Cron" -->
<!-- llm:prereq="Access to Node.js" -->
<!-- llm:prereq="Access to vercel.json" -->
<!-- llm:output="Completed outcome: Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…" -->

# Payload Jobs Queue on Vercel: Complete Production Setup
> Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…
Matija Žiberna · 2026-02-11

If you are doing long-running work inside Payload hooks, you are making every request slower and less reliable. That is true even if you try the “non-blocking hook” pattern, because fire-and-forget is not durability. It is simply “Payload does not wait”, and on serverless it is especially easy for that work to be interrupted.

Payload v3.70+ includes a first-party Jobs Queue with Tasks, Jobs, Queues, and Workflows, plus multiple execution methods including a built-in `/api/payload-jobs/run` endpoint. This article shows the production-grade pattern for Vercel: enqueue jobs from hooks, execute them via Vercel Cron hitting `/api/payload-jobs/run`, secure the endpoint, and harden tasks with retries, concurrency, and observability.

Everything below is based on the current Jobs Queue and Hooks documentation, plus the v3.76 concurrency updates.

---

## The mental model: hooks enqueue, workers execute

Hooks are part of your request lifecycle. They should stay fast and predictable.

Jobs are your durable background work. They live in your database (in the `payload-jobs` collection), can retry, can be scheduled, and have status and logs you can inspect.

So the “correct” architecture is:

1. A request comes in
2. An `afterChange` hook runs
3. The hook queues a job (fast DB insert)
4. A runner executes jobs later (cron-triggered on serverless)

On Vercel, the runner is your cron calling `/api/payload-jobs/run`.

---

## Why “non-blocking hooks” are not enough

Payload supports non-blocking hooks in the sense that if a hook does not return a Promise, Payload will not await it.

That does not give you:

- Durability (work can be lost if the process ends)
- Retries
- Backpressure and concurrency control
- Visibility into failures
- A unified place to inspect “what happened”

If you care about reliability, you want a record of the work to exist even if your server restarts. That is exactly what the Jobs Queue gives you.

---

## The moving pieces in Payload Jobs Queue

Payload’s Jobs Queue is made up of:

- **Tasks**: definitions of background work (slug, handler, retries, schedule, concurrency)
- **Jobs**: individual queued instances of a task or workflow, stored in `payload-jobs`
- **Queues**: named lanes for jobs (default is `default`)
- **Workflows**: multi-step sequences of tasks (optional)

For most apps, you will start with Tasks, Jobs, and a couple of Queues.

---

## The Vercel production pattern

On Vercel you generally do not have a long-running process, so you do not use `autoRun`. Instead:

- Enqueue jobs from hooks or endpoints using `req.payload.jobs.queue(...)`
- Add a Vercel Cron that calls `/api/payload-jobs/run`
- Secure `/api/payload-jobs/run` using `CRON_SECRET` in `jobs.access.run`

### Step 1: define a task

Example: send a welcome email after a user is created.

Create a task definition (structure may vary slightly depending on how you organize config, but the core idea is consistent):

```ts
// src/tasks/sendWelcomeEmail.ts
import type { TaskConfig } from 'payload'

export const sendWelcomeEmail: TaskConfig = {
  slug: 'sendWelcomeEmail',
  retries: 3,
  handler: async ({ input, req }) => {
    const { userId } = input as { userId: string }

    const user = await req.payload.findByID({
      collection: 'users',
      id: userId,
    })

    // Call your email provider here
    // Keep this handler idempotent if possible
    // Example: only send if user.welcomeEmailSentAt is not set

    await req.payload.update({
      collection: 'users',
      id: userId,
      data: { welcomeEmailSentAt: new Date().toISOString() },
    })

    return { output: { ok: true } }
  },
}
```

Notes:

* Keep tasks **idempotent**. Retries mean the handler can run more than once.
* Prefer writing a “sentAt” marker or using an idempotency key with your email provider.

### Step 2: enqueue the task from an `afterChange` hook

In your `users` collection:

```ts
// src/collections/Users.ts
import type { CollectionConfig } from 'payload'

export const Users: CollectionConfig = {
  slug: 'users',
  hooks: {
    afterChange: [
      async ({ doc, operation, req }) => {
        if (operation !== 'create') return

        // Queue job and wait for the DB insert
        // This keeps the request fast but durable
        await req.payload.jobs.queue({
          task: 'sendWelcomeEmail',
          input: { userId: doc.id },
          queue: 'emails',
          req,
        })
      },
    ],
  },
  fields: [
    // ...
  ],
}
```

This is the sweet spot:

* The request waits only long enough to insert a job record
* The expensive work happens later
* You get retries, logs, and status

### Step 3: add a Vercel Cron to run jobs

Create `vercel.json`:

```json
{
  "crons": [
    { "path": "/api/payload-jobs/run?queue=emails&limit=25", "schedule": "*/1 * * * *" },
    { "path": "/api/payload-jobs/run?queue=default&limit=50", "schedule": "*/5 * * * *" }
  ]
}
```

This runs the `emails` queue every minute (small batch) and the default queue every 5 minutes (larger batch).

You can tune:

* `limit` to control runtime per invocation
* schedule frequency to control latency and cost
* separate queues to isolate workloads
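
Before settling on a schedule, it helps to sanity-check throughput. A rough, illustrative helper (my own sketch, not a Payload API; it assumes every invocation drains a full `limit` batch):

```typescript
// Upper bound on jobs processed per hour for a cron-driven queue.
// Assumes every cron invocation picks up a full `limit` batch.
function queueCapacityPerHour(scheduleEveryMinutes: number, limitPerRun: number): number {
  const runsPerHour = 60 / scheduleEveryMinutes
  return runsPerHour * limitPerRun
}

// emails queue from the vercel.json above: every minute, 25 per run
queueCapacityPerHour(1, 25) // 1500 jobs/hour ceiling
// default queue: every 5 minutes, 50 per run
queueCapacityPerHour(5, 50) // 600 jobs/hour ceiling
```

If your enqueue rate regularly exceeds these ceilings, the backlog grows; either raise `limit`, tighten the schedule, or split workloads across more queues.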

### Step 4: secure `/api/payload-jobs/run`

Set a `CRON_SECRET` environment variable in Vercel. Then lock down job running in your Payload config:

```ts
// payload.config.ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...other config
  jobs: {
    access: {
      run: ({ req }): boolean => {
        // Allow authenticated admins to manually run jobs if you want
        if (req.user) return true

        const secret = process.env.CRON_SECRET
        if (!secret) return false

        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${secret}`
      },
    },
  },
})
```

This gives you:

* Cron can run jobs
* Random internet traffic cannot
* Admins can optionally run jobs manually when logged in (if you keep that clause)
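
Note that when `CRON_SECRET` is set, Vercel automatically attaches it to cron invocations as `Authorization: Bearer <CRON_SECRET>`, so no extra configuration is needed on the cron side. The comparison itself can be pulled into a plain function and unit-tested outside of Payload; a minimal sketch (the function name is mine, not a Payload API):

```typescript
// Mirrors the access check: fail closed if the secret is unset,
// otherwise require an exact Bearer-token match.
function isAuthorizedCronRequest(
  authHeader: string | null,
  secret: string | undefined,
): boolean {
  if (!secret) return false // no CRON_SECRET configured -> deny everything
  return authHeader === `Bearer ${secret}`
}
```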

---

## Hardening: retries, idempotency, concurrency, supersedes, waitUntil

### Retries and idempotency

Retries are great, but only if the task can safely re-run.

Practical idempotency strategies:

* Write a “completed marker” to your document (`welcomeEmailSentAt`, `indexedAt`, etc.)
* Include an idempotency key in job input and enforce uniqueness in your domain logic
* Use provider-level idempotency keys where available
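
One way to build such a key is to hash the stable identifiers of the work. A sketch (helper name and shape are mine; adapt the inputs to whatever uniquely identifies the work in your domain):

```typescript
import { createHash } from 'node:crypto'

// Deterministic key: the same logical work always yields the same key,
// so a retried handler can detect "already done" via your own uniqueness
// check or a provider-level idempotency header.
function idempotencyKey(task: string, docId: string): string {
  return createHash('sha256').update(`${task}:${docId}`).digest('hex')
}
```

Pass the key in the job `input` at enqueue time so every retry of that job sees the same value.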

### Concurrency control

If multiple jobs target the same resource (for example, re-index a post after each edit), you want to prevent parallel work.

Use a concurrency key that groups jobs by a stable identifier like `collection:docId`.

Conceptual example (the exact option shape may differ across Payload versions; check the Jobs Queue docs for yours):

```ts
export const reindexPost: TaskConfig = {
  slug: 'reindexPost',
  retries: 5,
  // concurrency is typically configured so jobs sharing the same key do not run in parallel
  concurrency: ({ input }) => `posts:${(input as any).postId}`,
  handler: async ({ input, req }) => {
    const { postId } = input as { postId: string }
    // do indexing work
    return { output: { postId } }
  },
}
```

### Supersedes: “last queued wins”

In v3.76, Payload adds a “supersedes” option for concurrency control. The intent is: if a new job arrives with the same concurrency key, older pending jobs can be removed so only the latest runs.

This is perfect for:

* search indexing
* image reprocessing
* cache rebuilding per document

Use it when running every intermediate job would be wasted work.
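
To make the semantics concrete, here is a toy model of "last queued wins" (my own simulation of the behavior, not Payload's implementation):

```typescript
type PendingJob = { id: string; key: string; queuedAt: number }

// An incoming job replaces any pending job that shares its concurrency key.
function applySupersedes(pending: PendingJob[], incoming: PendingJob): PendingJob[] {
  return [...pending.filter((job) => job.key !== incoming.key), incoming]
}

// Three rapid edits to post 1 leave only the latest reindex pending.
let queue: PendingJob[] = []
queue = applySupersedes(queue, { id: 'a', key: 'posts:1', queuedAt: 1 })
queue = applySupersedes(queue, { id: 'b', key: 'posts:1', queuedAt: 2 })
queue = applySupersedes(queue, { id: 'c', key: 'posts:1', queuedAt: 3 })
// queue now contains only job 'c'
```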

### Delayed execution with `waitUntil`

If you need “run this later” without inventing your own scheduler, queue a job with `waitUntil`.

Example use cases:

* “Send follow-up email 2 days after signup”
* “Recheck payment status in 15 minutes”
* “Run cleanup tonight”

Conceptual enqueue:

```ts
await req.payload.jobs.queue({
  task: 'sendFollowUp',
  input: { userId: doc.id },
  waitUntil: new Date(Date.now() + 2 * 24 * 60 * 60 * 1000),
  queue: 'emails',
  req,
})
```

---

## Scheduling recurring work

For “nightly sync” or “every hour cleanup”, use task scheduling (cron expressions) and ensure you have a runner invoking job execution.

On Vercel, the runner is still your Vercel cron calling `/api/payload-jobs/run`. Scheduled tasks create jobs, but something still needs to execute them.
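
A scheduled task sketch, assuming the `schedule` property from the Jobs Queue docs (verify the exact shape against your Payload version; the task name and queue are hypothetical):

```typescript
import type { TaskConfig } from 'payload'

// Hypothetical nightly cleanup: the schedule creates a job at 03:00 daily,
// but the job still only runs when your Vercel cron hits the run endpoint.
export const nightlyCleanup: TaskConfig = {
  slug: 'nightlyCleanup',
  schedule: [{ cron: '0 3 * * *', queue: 'nightly' }],
  handler: async ({ req }) => {
    // delete stale records, prune temp files, etc.
    return { output: {} }
  },
}
```

Pair this with a `vercel.json` cron entry that runs the `nightly` queue, otherwise the scheduled jobs will sit pending.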

---

## Observability: your dashboard is `payload-jobs`

Because jobs are stored in your database, you can monitor them with normal queries.

Useful fields to pay attention to:

* `hasError`
* `totalTried`
* `processing`
* `completedAt`
* `log`

Simple operational patterns:

* Count pending jobs by queue to detect backlog
* Alert on jobs with `hasError = true`
* Alert on “processing too long” if you see jobs stuck processing
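
The "processing too long" check is easy to express over rows from `payload-jobs`. A sketch (type and threshold are mine; feed it the results of a normal `find` on the collection):

```typescript
type JobRecord = {
  id: string
  processing: boolean
  hasError: boolean
  updatedAt: string // ISO timestamp
}

// Flag jobs that have been marked processing longer than the threshold,
// e.g. because an invocation was killed mid-run.
function findStuckJobs(jobs: JobRecord[], now: Date, maxProcessingMs: number): JobRecord[] {
  return jobs.filter(
    (job) =>
      job.processing &&
      now.getTime() - new Date(job.updatedAt).getTime() > maxProcessingMs,
  )
}
```

Run it from a separate monitoring cron and alert when the result is non-empty.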

Add structured log messages when queuing and inside tasks so you can trace a job back to:

* the document ID
* the user action that triggered it
* a correlation ID from headers (optional)

---

## Common failure modes and quick fixes

### Jobs are not running at all

* Your Vercel cron is not configured, disabled, or pointing at the wrong path
* The endpoint is blocked by access control because `CRON_SECRET` is missing or mismatched

### You queued jobs but nothing processes them

* Queue name mismatch: you are enqueueing to `emails` but your cron is running `default`
* Your cron calls `/api/payload-jobs/run` without the correct query params

### You used `autoRun` on Vercel

* That is for dedicated servers, not serverless. Use the endpoint method.

### Jobs are “pending” but should be delayed

* Check `waitUntil`. Jobs scheduled into the future will not run until that time.

---

## LLM Response Snippet
```json
{
  "goal": "Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…",
  "responses": [
    {
      "question": "What does the article \"Payload Jobs Queue on Vercel: Complete Production Setup\" cover?",
      "answer": "Payload Jobs Queue on Vercel provides durable, retryable background jobs: enqueue from hooks, run via Vercel Cron, secure with CRON_SECRET, and follow…"
    }
  ]
}
```