Payload Import Triggers: 4 Essential Ways to Queue Imports
How admin UI buttons, webhooks, scheduled jobs, and scripts all queue the same Payload import job with progress…
This article goes live on March 27, 2026 at 7:00 AM.

The previous article covered how to write data into Payload without breaking things — short transactions, payload.db.* direct writes, raw Drizzle for set-based operations. This article covers the other half: how do you actually kick that import off in the first place?
There are four common entry points for a Payload import: a button in the admin UI, a scheduled job that fires automatically, a webhook from an external system, and a standalone script you run directly. Each one fits a different operational context. The pattern that ties them together is the same in all four cases — the trigger itself does not run the import. It queues a job, returns immediately, and lets a worker handle the heavy work separately. That one principle keeps your imports observable, retryable, and decoupled from whatever surface started them.
I ran into this problem while building a recurring product sync for a client. The import logic worked fine in a script during development. The problem was that every stakeholder had a different idea of how it should be triggered: the operations team wanted a button in Payload's admin UI, the product feed vendor wanted to push a webhook, and the original requirement was a nightly scheduled run. Rather than writing three separate import pipelines, the answer was one import job and three entry points that all queue the same task.
The queue is the handoff point
A queue in Payload is a named grouping of jobs that get executed in the order they were added. When you queue a job, Payload stores it in the payload-jobs collection. A worker later picks it up and runs it. The trigger — whether that's an admin button, a webhook, or a cron — only creates the job entry and returns. The actual import runs on the worker's schedule.
That separation gives you three things for long imports. First, the trigger surface gets an immediate response — the admin UI doesn't freeze, the webhook returns a 200, the script exits. Second, if the import fails, the job record stays in the database and can be retried. Third, you can add a progress tracking collection alongside the job so you have visibility into where a long import is and what failed.
The job queue structure for an import looks like this:
trigger (admin / webhook / schedule / script)
→ payload.jobs.queue({ task: 'runImport', queue: 'imports' })
→ job stored in payload-jobs
→ worker picks it up
→ import runs in batched transactions
Every trigger in this article ends at the same second step.
Tracking progress with an import-runs collection
Before getting into the triggers, there is one collection worth defining upfront. Payload's built-in job fields track execution state — queued, processing, succeeded, failed — but they do not give you row-level progress, chunk counts, or dead-letter captures. For any serious import you want your own tracking collection.
```typescript
// File: src/collections/ImportRuns.ts
import type { CollectionConfig } from 'payload'

export const ImportRuns: CollectionConfig = {
  slug: 'import-runs',
  fields: [
    { name: 'status', type: 'select', options: ['queued', 'running', 'completed', 'failed'] },
    { name: 'source', type: 'text' },
    { name: 'processedCount', type: 'number', defaultValue: 0 },
    { name: 'failedCount', type: 'number', defaultValue: 0 },
    { name: 'lastCursor', type: 'text' },
    { name: 'errorLog', type: 'textarea' },
  ],
}
```
When a trigger fires, it creates an import-runs document first, then passes the document ID into the queued job. The job handler updates the record as it processes chunks. This gives you a real-time view of import progress that does not depend on polling Payload's job internals.
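To make that bookkeeping concrete, the per-chunk arithmetic can live in a small pure function that the handler calls before each update. `buildProgressUpdate` and its `ChunkResult` shape are assumptions of this sketch, not Payload APIs:

```typescript
// Hypothetical helper: given the counts from one processed chunk, build the
// partial update for the import-runs document. Pure function, easy to test.
type ChunkResult = { processed: number; failed: number; cursor: string }

export function buildProgressUpdate(
  current: { processedCount: number; failedCount: number },
  chunk: ChunkResult,
) {
  return {
    processedCount: current.processedCount + chunk.processed,
    failedCount: current.failedCount + chunk.failed,
    lastCursor: chunk.cursor, // lets a retry resume instead of starting over
    // keep status as 'running'; the handler flips it to completed/failed at the end
    status: 'running' as const,
  }
}
```

The job handler would then call `req.payload.update({ collection: 'import-runs', id: importRunId, data: buildProgressUpdate(current, chunk), req })` after each committed batch.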
Trigger 1: Admin UI button
This is the right trigger when a human initiates the import manually. A custom component in Payload's admin UI calls a secured endpoint, which creates the tracking record and queues the job.
Start with the endpoint:
```typescript
// File: src/endpoints/start-import.ts
import type { PayloadHandler } from 'payload'

export const startImportHandler: PayloadHandler = async (req) => {
  const user = req.user
  if (!user || user.role !== 'admin') {
    return Response.json({ error: 'Unauthorized' }, { status: 401 })
  }

  const importRun = await req.payload.create({
    collection: 'import-runs',
    data: {
      status: 'queued',
      source: 'admin-ui',
      processedCount: 0,
      failedCount: 0,
    },
    req,
  })

  await req.payload.jobs.queue({
    task: 'runImport',
    input: { importRunId: importRun.id },
    queue: 'imports',
    req,
  })

  return Response.json({ ok: true, importRunId: importRun.id })
}
```
Then the admin component that calls it:
```typescript
// File: src/components/TriggerImportButton.tsx
'use client'
import { useState } from 'react'
import { useConfig } from '@payloadcms/ui'

export function TriggerImportButton() {
  const [status, setStatus] = useState<'idle' | 'queuing' | 'queued'>('idle')
  const { serverURL } = useConfig()

  const handleClick = async () => {
    setStatus('queuing')
    const res = await fetch(`${serverURL}/api/start-import`, {
      method: 'POST',
      credentials: 'include', // Payload admin auth is cookie-based
    })
    const data = await res.json()
    if (data.ok) {
      setStatus('queued')
    } else {
      setStatus('idle') // reset so the admin can retry after a failure
    }
  }

  return (
    <button onClick={handleClick} disabled={status !== 'idle'}>
      {status === 'idle' && 'Start Import'}
      {status === 'queuing' && 'Queuing...'}
      {status === 'queued' && 'Import queued'}
    </button>
  )
}
```
Register the endpoint in your Payload config under endpoints and the component in admin.components. The button gives immediate feedback because the endpoint only creates records and queues a job — it does not block on the import itself.
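As a rough sketch, that registration might look like the following. The `beforeDashboard` injection point and the component path string are assumptions; pick whatever admin slot fits your layout and check the path syntax against your Payload version:

```typescript
// File: payload.config.ts (sketch: wiring up the endpoint and the button)
import { buildConfig } from 'payload'
import { startImportHandler } from './src/endpoints/start-import'

export default buildConfig({
  // ...db, secret, collections, jobs, etc.
  endpoints: [
    // exposed as POST /api/start-import
    { path: '/start-import', method: 'post', handler: startImportHandler },
  ],
  admin: {
    components: {
      // one possible slot; the '#' suffix selects the named export
      beforeDashboard: ['/src/components/TriggerImportButton#TriggerImportButton'],
    },
  },
})
```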
Trigger 2: Scheduled recurring import
This is the right trigger when the import runs on a fixed cadence — nightly sync, hourly feed refresh, whatever the external system requires. Payload's jobs system supports scheduled tasks natively.
Define the schedule in your Payload config:
```typescript
// File: payload.config.ts (jobs section)
jobs: {
  tasks: [runImportTask],
  schedules: [
    {
      task: 'runImport',
      cron: '0 2 * * *', // every night at 2am
      queue: 'imports',
      input: { source: 'scheduled' },
    },
  ],
}
```
The cron field uses standard cron syntax. Payload queues the job automatically at each scheduled time. The job then runs on the same imports queue as every other trigger — the worker does not need to know or care that this particular run was scheduled rather than admin-initiated.
For the worker itself, you have two options. autoRun runs inside the Next.js process, which is simpler to set up. On a dedicated server, the separate runner command is cleaner and easier to scale independently:
payload jobs:run --cron "* * * * *" --queue imports
This polls the queue every minute and executes any pending jobs. Running it as a separate process keeps import work off the web server's event loop.
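For the autoRun option, the equivalent in-process setup is a jobs config entry. This is a sketch; confirm the exact option names against the jobs documentation for your Payload version:

```typescript
// payload.config.ts (jobs section): in-process worker via autoRun
jobs: {
  tasks: [runImportTask],
  autoRun: [
    {
      cron: '* * * * *', // poll every minute, same cadence as the CLI runner
      queue: 'imports',
      limit: 10, // max jobs to pick up per poll
    },
  ],
}
```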
Trigger 3: Webhook from an external system
This is the right trigger when an external service pushes data or signals that a sync is needed. The webhook endpoint validates the incoming request, creates the tracking record, queues the job, and returns a 200 immediately. The external system does not wait for the import to finish.
```typescript
// File: src/endpoints/webhook-import.ts
import type { PayloadHandler } from 'payload'
import crypto from 'crypto'

const WEBHOOK_SECRET = process.env.IMPORT_WEBHOOK_SECRET!

export const webhookImportHandler: PayloadHandler = async (req) => {
  const signature = req.headers.get('x-webhook-signature')
  const body = await req.text()

  const expected = crypto
    .createHmac('sha256', WEBHOOK_SECRET)
    .update(body)
    .digest('hex')

  if (signature !== `sha256=${expected}`) {
    return Response.json({ error: 'Invalid signature' }, { status: 401 })
  }

  // named `event` to avoid shadowing req.payload below
  const event = JSON.parse(body)

  const importRun = await req.payload.create({
    collection: 'import-runs',
    data: {
      status: 'queued',
      source: event.source ?? 'webhook',
      processedCount: 0,
      failedCount: 0,
    },
    req,
  })

  await req.payload.jobs.queue({
    task: 'runImport',
    input: { importRunId: importRun.id },
    queue: 'imports',
    req,
  })

  return Response.json({ ok: true, importRunId: importRun.id })
}
```
The HMAC signature check here is the minimal webhook security pattern — the external system signs the request body with a shared secret and you verify it on arrival. Adapt the signing approach to match whatever the external system actually sends. The important thing is that validation happens before any database writes, and the response goes out immediately after queuing.
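For local testing, the sender side is the mirror image of that check. A minimal signer sketch; the endpoint URL in the usage comment is an assumption:

```typescript
import crypto from 'crypto'

// Builds the x-webhook-signature value the handler expects:
// an HMAC-SHA256 of the raw body, hex-encoded, with a 'sha256=' prefix.
export function signBody(body: string, secret: string): string {
  const digest = crypto.createHmac('sha256', secret).update(body).digest('hex')
  return `sha256=${digest}`
}

// Usage against a local dev server (URL is an assumption):
// const json = JSON.stringify({ source: 'feed' })
// await fetch('http://localhost:3000/api/webhook-import', {
//   method: 'POST',
//   headers: { 'x-webhook-signature': signBody(json, secret) },
//   body: json,
// })
```

Signing the exact raw body string matters: re-serializing the JSON on either side can change whitespace or key order and break verification.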
Trigger 4: Standalone script
This is the right trigger for one-off imports, backfills, ops-run maintenance jobs, or CI pipeline steps. Payload supports running scripts directly with payload run, which initializes the full Payload config outside the Next.js process.
```typescript
// File: src/scripts/queue-import.ts
import { getPayload } from 'payload'
import config from '@payload-config'

async function main() {
  const payload = await getPayload({ config })

  const importRun = await payload.create({
    collection: 'import-runs',
    data: {
      status: 'queued',
      source: 'script',
      processedCount: 0,
      failedCount: 0,
    },
  })

  await payload.jobs.queue({
    task: 'runImport',
    input: { importRunId: importRun.id },
    queue: 'imports',
  })

  console.log(`Queued import run ${importRun.id}`)
  process.exit(0)
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```
Run it with:
npx payload run src/scripts/queue-import.ts
For a one-off import where you want to run the logic directly without queuing — a local migration, a development seed — you can call the shared import function directly instead of going through the job system. The queue is valuable for production workloads where retries and observability matter. For a script you are running once and watching in a terminal, calling the function directly is fine.
The shared task handler
All four triggers queue the same task. The handler lives in one place, not four.
```typescript
// File: src/jobs/tasks/run-import.ts
import type { TaskConfig } from 'payload'

export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  inputSchema: [
    { name: 'importRunId', type: 'text', required: true },
  ],
  handler: async ({ req, input }) => {
    const { importRunId } = input

    await req.payload.update({
      collection: 'import-runs',
      id: importRunId,
      data: { status: 'running' },
      req,
    })

    try {
      // fetch source data
      // for each batch:
      //   begin transaction
      //   write with payload.db.* or Drizzle
      //   commit
      //   update import-runs processedCount

      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: { status: 'completed' },
        req,
      })
    } catch (err) {
      await req.payload.update({
        collection: 'import-runs',
        id: importRunId,
        data: {
          status: 'failed',
          errorLog: err instanceof Error ? err.message : String(err),
        },
        req,
      })
      throw err
    }

    // task handlers return an output object; empty is fine here
    return { output: {} }
  },
}
```
The handler updates import-runs at the start, after each committed batch, and at completion or failure. Payload's job system handles retries at the job level — if the handler throws, the job is retried according to your retry config. The import-runs record gives you the human-readable view of what happened.
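Retry behavior lives on the task definition. A sketch, assuming Payload's numeric `retries` option; check your version's jobs documentation for backoff configuration:

```typescript
// File: src/jobs/tasks/run-import.ts (retry setting sketch)
export const runImportTask: TaskConfig<'runImport'> = {
  slug: 'runImport',
  retries: 2, // a failed handler run is re-attempted up to twice
  // ...inputSchema and handler as shown above
}
```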
Trigger comparison
| Trigger | When to use | Response model |
|---|---|---|
| Admin UI button | Human-initiated, manual imports | Immediate UI feedback, job queued |
| Scheduled job | Periodic sync, recurring feeds | Auto-queued by Payload on cron |
| Webhook | External system pushes data or signals | 200 returned immediately, job queued |
| Standalone script | One-off imports, backfills, CI | Exits after queuing (or runs directly) |
FAQ
Do all four triggers need to go through the queue, or can they call the import function directly?
For production workloads the queue is the right model — you get retries, durability, and progress tracking. For a one-off local script you are supervising directly, calling the import function without queuing is fine. The queue adds the most value when a failure needs to be recoverable without you being present.
How do I prevent two imports from running at the same time?
Payload Jobs supports concurrency keys. Set a concurrencyKey on the task and Payload will not run two instances of the same key simultaneously. For imports you typically want concurrencyKey: 'import-run' so a second admin click or webhook does not start a second parallel import while one is already running.
Can the admin UI show live progress during the import?
Not natively. Payload's job fields have status and log fields, but not a streaming progress model. The practical approach is to poll your import-runs collection from the admin component and display processedCount and failedCount as the job updates them. A simple interval poll every 2–3 seconds works well for this.
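The polling itself can be kept framework-agnostic so it is easy to test. `pollImportRun` is a hypothetical helper hitting Payload's standard REST route for the import-runs collection; the interval default is arbitrary:

```typescript
// Sketch: fetch the import-runs document on an interval and hand the
// counts to a callback. fetchImpl is injectable so tests can fake it.
export type ImportRunProgress = {
  status: string
  processedCount: number
  failedCount: number
}

export function pollImportRun(
  importRunId: string,
  onUpdate: (run: ImportRunProgress) => void,
  fetchImpl: typeof fetch = fetch,
  intervalMs = 2500,
): () => void {
  const timer = setInterval(async () => {
    const res = await fetchImpl(`/api/import-runs/${importRunId}`)
    if (res.ok) onUpdate(await res.json())
  }, intervalMs)
  // caller invokes the returned function to stop polling (e.g. on unmount)
  return () => clearInterval(timer)
}
```

An admin component would call this in a `useEffect`, push updates into state, render `processedCount` and `failedCount`, and invoke the returned stop function on unmount.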
What happens if the worker goes down mid-import?
The job status in payload-jobs stays as processing. When the worker restarts, Payload's retry logic can pick it back up depending on your retry configuration. For imports that use a cursor or chunk offset stored in import-runs, the handler can resume from lastCursor rather than starting over. That is the main reason to store progress in your own collection rather than relying entirely on Payload's job state.
Should the standalone script queue a job or run the import directly?
Either works depending on context. Queuing is better when you want the same retry and observability behavior as the other triggers, or when you want to queue work and let the normal worker process it. Running directly is better for local development imports or one-time migrations where you want to watch the output inline and do not need the job infrastructure.
Conclusion
A Payload import job only needs to be written once. The four triggers — admin UI button, scheduled cron, webhook, and standalone script — are all entry points that queue the same task and return immediately. The import itself runs on a dedicated queue, tracked in your own import-runs collection, with the batch transaction logic from the previous article handling the actual database writes.
That structure keeps your import pipeline observable from day one and easy to extend when requirements change — a new trigger is a new endpoint or schedule entry, not a new import engine.
Let me know in the comments if you have questions, and subscribe for more practical development guides.
Thanks, Matija
📚 Comprehensive Payload CMS Guides
Detailed Payload guides with field configuration examples, custom components, and workflow optimization tips to speed up your CMS development process.