How to Build a CSV Product Import System with Payload Queues
Create a reliable bulk import system using Payload's built-in job queues for products and variants

I was working on an e-commerce project last month when my client handed me a spreadsheet with 200 products, each having multiple color and size variants. "Can you just upload this?" they asked casually. What seemed like a simple request turned into a nightmare of manual data entry - until I discovered Payload's queue system.
After spending a weekend building a proper CSV import system, I can now handle hundreds of products with variants in minutes instead of days. The key insight was using Payload's built-in job queues, which provide reliability, progress tracking, and error handling without any external dependencies.
This guide walks you through building the complete system I wish I had from the start. By the end, you'll have a robust CSV import solution that handles products with variants, manages duplicates intelligently, and provides a clean admin interface for non-technical users.
The Problem We're Solving
Most Payload CMS projects eventually face the same challenge: bulk data import. You've built your collections, configured your fields, and launched your application. Then reality hits - you need to populate it with real data, and lots of it.
Manual entry through the admin interface works for a handful of records, but becomes impractical at scale. For our e-commerce example, imagine creating 50 products manually, then adding 5-10 variants for each product. That's potentially 500+ individual records to create through the UI.
The alternative approaches all have significant drawbacks. Direct database scripts bypass Payload's validation and hooks. API endpoints require authentication setup and error handling. Third-party ETL tools add complexity and dependencies.
Payload's queue system solves this elegantly. It provides built-in retry mechanisms, comprehensive logging, progress tracking, and admin UI integration. Most importantly, it uses Payload's Local API, ensuring all your validation rules and hooks execute properly.
What We're Building
Our CSV import system will handle a realistic e-commerce scenario: products with multiple variants. Here's the data structure we'll work with:
name,variantColors,variantSizes,category,sourceUrl
"Classic Terra Stone","Grey,Red,Black","10x10,20x10,20x20","Paving","https://example.com/terra"
"Modern Brick","White,Brown","15x15,30x30","Tiles","https://example.com/brick"
Each row represents a base product with comma-separated variant options. Our system will automatically generate all possible combinations - so the first row creates one product with 9 variants (3 colors × 3 sizes).
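In code terms, generating the combinations is just a nested iteration over the two option lists. Here's a standalone sketch of the idea (the handler later implements it with plain nested loops):

// Sketch: all color/size combinations via flatMap
const combinations = (colors: string[], sizes: string[]) =>
  colors.flatMap((color) => sizes.map((size) => ({ color, size })))

combinations(['Grey', 'Red', 'Black'], ['10x10', '20x10', '20x20']).length // => 9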
The complete system includes five key components:
- Queue Handler: Processes CSV files and creates database records
- Database Migration: Updates PostgreSQL enums for the new job type
- Server Action: Bridges the admin UI with the queue system
- Admin Component: Provides a user-friendly import interface
- Collection Integration: Embeds the import controls in Payload admin
Let's build each piece step by step.
Step 1: Create the Queue Handler
The queue handler is the core of our import system. It reads CSV files, validates data, and creates records through Payload's API.
// File: src/queues/handlers/importProductsHandler.ts
import fs from 'fs'
import path from 'path'
import { parse } from 'csv-parse/sync'

interface CSVRow {
  name: string
  variantColors: string
  variantSizes: string
  category: string
  sourceUrl: string
}

interface ImportStats {
  productsCreated: number
  variantsCreated: number
  errors: string[]
  totalProcessed: number
}

// Generate SKU from product name and variant details
function generateSku(productName: string, color?: string, size?: string): string {
  const normalize = (text: string) =>
    text
      .toUpperCase()
      .replace(/[^A-Z0-9]/g, '_')
      .replace(/_+/g, '_')
      .replace(/^_|_$/g, '')

  const parts = [normalize(productName)]
  if (color) parts.push(normalize(color))
  if (size) parts.push(normalize(size))
  return parts.join('-')
}

// Parse comma-separated values and clean them
function parseVariants(variantString: string): string[] {
  if (!variantString?.trim()) return []
  return variantString
    .split(',')
    .map((v) => v.trim())
    .filter((v) => v.length > 0)
}

const importProductsHandler = async ({ req }: {
  input: { importAll?: boolean }
  req: any
}) => {
  req.payload.logger.info('=== PRODUCT IMPORT JOB STARTED ===')

  const stats: ImportStats = {
    productsCreated: 0,
    variantsCreated: 0,
    errors: [],
    totalProcessed: 0,
  }

  try {
    // Read and parse CSV file
    const csvPath = path.resolve(process.cwd(), 'data/products.csv')
    req.payload.logger.info(`Reading CSV from: ${csvPath}`)

    if (!fs.existsSync(csvPath)) {
      throw new Error(`CSV file not found at: ${csvPath}`)
    }

    const csvContent = fs.readFileSync(csvPath, 'utf-8')
    const rows: CSVRow[] = parse(csvContent, {
      columns: true,
      skip_empty_lines: true,
    })

    req.payload.logger.info(`Parsed ${rows.length} rows from CSV`)
    stats.totalProcessed = rows.length

    // Process each product
    for (let i = 0; i < rows.length; i++) {
      const row = rows[i]
      req.payload.logger.info(`Processing product ${i + 1}/${rows.length}: ${row.name}`)

      try {
        // Check if product already exists (deduplication)
        const existingProducts = await req.payload.find({
          collection: 'products',
          where: {
            title: { equals: row.name },
          },
        })

        if (existingProducts.docs.length > 0) {
          req.payload.logger.info(`Product "${row.name}" already exists, skipping`)
          continue
        }

        // Parse variants
        const colors = parseVariants(row.variantColors)
        const sizes = parseVariants(row.variantSizes)
        req.payload.logger.info(`Colors: ${colors.length}, Sizes: ${sizes.length}`)

        const hasVariants = colors.length > 0 && sizes.length > 0
        const baseSku = generateSku(row.name)

        // Create the base product
        const productData = {
          title: row.name,
          sku: baseSku,
          category: row.category,
          sourceUrl: row.sourceUrl,
          hasVariants: hasVariants,
          inStock: true,
          shortDescription: `${row.name} - imported from CSV`,
        }

        const createdProduct = await req.payload.create({
          collection: 'products',
          data: productData,
        })

        stats.productsCreated++
        req.payload.logger.info(`Created product: ${createdProduct.id} - ${row.name}`)

        // Create variants if both colors and sizes exist
        if (hasVariants) {
          for (const color of colors) {
            for (const size of sizes) {
              try {
                const variantSku = generateSku(row.name, color, size)
                const displayName = `${row.name} - ${color} - ${size}`

                // Check for duplicate variants
                const existingVariants = await req.payload.find({
                  collection: 'product-variants',
                  where: {
                    variantSku: { equals: variantSku },
                  },
                })

                if (existingVariants.docs.length > 0) {
                  req.payload.logger.info(`Variant "${variantSku}" already exists, skipping`)
                  continue
                }

                const variantData = {
                  product: createdProduct.id,
                  displayName: displayName,
                  color: color,
                  size: size,
                  variantSku: variantSku,
                  inStock: true,
                }

                const createdVariant = await req.payload.create({
                  collection: 'product-variants',
                  data: variantData,
                })

                stats.variantsCreated++
                req.payload.logger.info(`Created variant: ${createdVariant.id} - ${variantSku}`)
              } catch (variantError) {
                const errorMsg = `Failed to create variant ${color}/${size} for ${row.name}: ${(variantError as Error).message}`
                stats.errors.push(errorMsg)
                req.payload.logger.error(errorMsg)
              }
            }
          }
        }
      } catch (productError) {
        const errorMsg = `Failed to process product "${row.name}": ${(productError as Error).message}`
        stats.errors.push(errorMsg)
        req.payload.logger.error(errorMsg)
      }
    }

    req.payload.logger.info('=== PRODUCT IMPORT JOB COMPLETED ===')
    req.payload.logger.info(`Products created: ${stats.productsCreated}`)
    req.payload.logger.info(`Variants created: ${stats.variantsCreated}`)
    req.payload.logger.info(`Errors: ${stats.errors.length}`)

    return {
      output: {
        success: true,
        message: 'Import completed successfully',
        stats: stats,
      },
    }
  } catch (error) {
    req.payload.logger.error('=== PRODUCT IMPORT JOB ERROR ===')
    req.payload.logger.error(`Error: ${(error as Error).message}`)

    return {
      output: {
        success: false,
        message: `Import failed: ${(error as Error).message}`,
        stats: stats,
      },
    }
  }
}

export default importProductsHandler
This handler demonstrates several important concepts. The CSV parsing uses a well-established library that handles edge cases like quoted fields containing commas. The SKU generation function normalizes text consistently, ensuring we create valid database identifiers from any product name or variant combination.
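A quick way to sanity-check the normalization, assuming you also export generateSku from the handler module for testing (it's only a local function above):

// Assumes generateSku is exported from the handler module
import { generateSku } from '@/queues/handlers/importProductsHandler'

console.log(generateSku('Classic Terra Stone'))                  // CLASSIC_TERRA_STONE
console.log(generateSku('Classic Terra Stone', 'Grey', '10x10')) // CLASSIC_TERRA_STONE-GREY-10X10
console.log(generateSku('Modern Brick!', 'Off-White'))           // MODERN_BRICK-OFF_WHITE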
The deduplication logic checks for existing products by name and variants by SKU. This allows safe re-runs of the import process - if you need to add new products to an existing CSV, the system won't create duplicates.
Error isolation is crucial here. Each product and variant creation is wrapped in individual try-catch blocks. If one product fails validation, the import continues processing the remaining rows. All errors are collected and reported in the final summary.
Step 2: Register the Queue Task
Payload needs to know about our new job type before it can queue and execute it. This happens in the main configuration file.
// File: payload.config.ts
import { buildConfig } from 'payload'

import importProductsHandler from '@/queues/handlers/importProductsHandler'

export default buildConfig({
  // ... other config
  jobs: {
    tasks: [
      // ... existing tasks
      {
        slug: 'importProducts',
        label: 'Import Products from CSV',
        inputSchema: [
          {
            name: 'importAll',
            type: 'checkbox',
            required: false,
          },
        ],
        handler: importProductsHandler,
      },
    ],
  },
  // ... rest of config
})
The configuration is straightforward, but the slug value is critical. Payload uses this internally to identify job types, and PostgreSQL stores it in an enum column. The inputSchema defines what parameters the job accepts, though for our use case we're keeping it simple.
After adding this configuration, you'll need a database migration to update the PostgreSQL enum that tracks job types. Without this step, you'll get enum constraint violations when trying to queue jobs.
Step 3: Create Database Migration
PostgreSQL uses enum types to constrain the values allowed in certain columns. When we add a new job task, we need to update these enums.
pnpm payload migrate:create
This generates a migration file that looks like this:
// File: src/migrations/20240815_143022.ts
import { MigrateUpArgs, MigrateDownArgs, sql } from '@payloadcms/db-postgres'

export async function up({ db, payload, req }: MigrateUpArgs): Promise<void> {
  await db.execute(sql`
    ALTER TYPE "public"."enum_payload_jobs_log_task_slug" ADD VALUE 'importProducts';
    ALTER TYPE "public"."enum_payload_jobs_task_slug" ADD VALUE 'importProducts';`)
}

export async function down({ db, payload, req }: MigrateDownArgs): Promise<void> {
  // Note: PostgreSQL doesn't support removing enum values easily
  // In practice, you'd need to recreate the enum type entirely
}
Run the migration to update your database:
pnpm payload migrate
This step is essential and often overlooked. Payload uses these enum types to maintain data integrity in the jobs system. Skipping the migration results in cryptic database errors that can be frustrating to debug.
Step 4: Create Server Actions
Server Actions provide the bridge between our admin UI and the queue system. They handle job creation and provide user feedback.
// File: src/actions/admin/product-import-actions.ts
'use server'

import { getPayload } from 'payload'
import config from '@payload-config'

export interface ProductImportResult {
  message: string
  status: 'success' | 'error' | 'pending'
}

export async function triggerProductImport(
  prevState: ProductImportResult,
  formData: FormData,
): Promise<ProductImportResult> {
  try {
    const payload = await getPayload({ config })

    // Queue the import job
    const job = await payload.jobs.queue({
      task: 'importProducts',
      input: { importAll: true },
      queue: 'default',
    })

    // Execute the job immediately using the Local API
    const results = await payload.jobs.runByID({
      id: job.id,
    })

    if (results && results.length > 0) {
      const result = results[0]
      if (result.output?.success) {
        const stats = result.output.stats
        return {
          message: `Import completed! Created ${stats.productsCreated} products and ${stats.variantsCreated} variants. ${stats.errors.length > 0 ? `${stats.errors.length} errors occurred.` : ''}`,
          status: 'success',
        }
      } else {
        return {
          message: result.output?.message || 'Import failed with unknown error',
          status: 'error',
        }
      }
    }

    return {
      message: `Import job queued successfully (ID: ${job.id})`,
      status: 'success',
    }
  } catch (error: any) {
    console.error('Product import error:', error)
    return {
      message: error.message || 'Failed to trigger import',
      status: 'error',
    }
  }
}
This Server Action follows the Next.js App Router pattern with the 'use server' directive. The key insight is that we both queue the job and execute it immediately. This might seem redundant, but it provides several benefits.
Queueing the job first creates a record in Payload's job management system, giving us tracking and logging capabilities. Running it immediately provides instant feedback to the user. If we only queued it, users would have to wait for a background processor to pick it up.
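If you'd rather rely on true background processing, recent Payload versions can pick queued jobs up on a schedule instead. A hedged sketch, assuming your release supports jobs.autoRun and the jobs.run signature shown (check the jobs docs for your version):

// Option A (config): process the queue on a cron schedule.
// Assumes your Payload version supports jobs.autoRun.
jobs: {
  tasks: [/* ... */],
  autoRun: [
    {
      cron: '*/5 * * * *', // check the queue every five minutes
      queue: 'default',
      limit: 10,
    },
  ],
},

// Option B (code): run whatever is waiting in a queue yourself
await payload.jobs.run({ queue: 'default', limit: 10 })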
The error handling extracts meaningful information from the job results and formats it for user display. This transforms technical errors into actionable feedback that non-technical users can understand.
Step 5: Build the Admin UI Component
The admin component provides a clean interface for triggering imports directly from the Payload admin panel.
// File: src/components/admin/ProductImportTrigger.tsx
'use client'

import { Button } from '@payloadcms/ui'
import React, { useActionState } from 'react'
import { triggerProductImport, ProductImportResult } from '@/actions/admin/product-import-actions'

const initialState: ProductImportResult = { message: '', status: 'pending' }

function ProductImportTrigger() {
  const [importState, importAction, isPending] = useActionState(triggerProductImport, initialState)

  return (
    <div className="p-4 border rounded-lg bg-gray-50 mb-6">
      <h3 className="text-lg font-semibold mb-2">Bulk Product Import</h3>
      <p className="text-sm text-gray-600 mb-4">
        Import products and variants from CSV file at{' '}
        <code className="bg-gray-200 px-1 py-0.5 rounded">/data/products.csv</code>
      </p>
      <form action={importAction} className="flex gap-3 items-center mb-4">
        <Button type="submit" disabled={isPending}>
          {isPending ? 'Importing...' : 'Start Import'}
        </Button>
        {isPending && (
          <span className="text-sm text-gray-500">Processing CSV file...</span>
        )}
      </form>
      {importState.message && (
        <div
          className={`p-3 rounded-md text-sm ${
            importState.status === 'success'
              ? 'bg-green-50 text-green-800 border border-green-200'
              : 'bg-red-50 text-red-800 border border-red-200'
          }`}
        >
          <strong>{importState.status === 'success' ? 'Success:' : 'Error:'}</strong> {importState.message}
        </div>
      )}
      <div className="mt-4 pt-3 border-t border-gray-200 text-xs text-gray-500">
        <p><strong>CSV Format:</strong> name, variantColors, variantSizes, category, sourceUrl</p>
        <p><strong>Alternative:</strong> Use <code>pnpm payload jobs:run --task importProducts --limit 1</code> from the command line</p>
      </div>
    </div>
  )
}

export default ProductImportTrigger
This component uses React 19's useActionState hook to handle the server action with proper loading states. The interface is deliberately simple - one button to trigger the import, with clear feedback on what's happening.
The loading state prevents multiple simultaneous imports, which could cause database conflicts or resource exhaustion. The status messages use different styling for success and error states, making the outcome immediately clear to users.
The footer provides helpful context about the expected CSV format and mentions the CLI alternative for technical users who prefer command-line operations.
Step 6: Integrate with Collection Admin
Finally, we embed our import component into the Products collection admin interface, making it discoverable where users need it most.
// File: src/collections/Products.ts
import type { CollectionConfig } from 'payload'

export const Products: CollectionConfig = {
  slug: 'products',
  admin: {
    useAsTitle: 'title',
    defaultColumns: ['title', 'sku', 'category', 'hasVariants'],
    description: 'Product management with bulk import capabilities',
    group: 'Catalog',
    listSearchableFields: ['title', 'sku', 'category'],
    components: {
      beforeList: ['/components/admin/ProductImportTrigger'],
    },
  },
  fields: [
    {
      name: 'title',
      type: 'text',
      required: true,
    },
    {
      name: 'sku',
      type: 'text',
      required: true,
      unique: true,
    },
    {
      name: 'category',
      type: 'text',
    },
    {
      name: 'sourceUrl',
      type: 'text',
    },
    {
      name: 'hasVariants',
      type: 'checkbox',
      defaultValue: false,
    },
    {
      name: 'inStock',
      type: 'checkbox',
      defaultValue: true,
    },
    {
      name: 'shortDescription',
      type: 'textarea',
    },
  ],
}
The beforeList component placement is strategic. It appears at the top of the products list view, making bulk import immediately visible when managing products. This follows the principle of putting functionality where users expect to find it.
The collection configuration also shows the field structure our import handler expects. Notice how the CSV columns map directly to these field names - this alignment makes the import logic straightforward and predictable.
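One piece this guide hasn't shown is the variants collection itself. The handler assumes one shaped roughly like this - a minimal sketch, with field names matching exactly what the handler writes:

// File: src/collections/ProductVariants.ts
// A minimal sketch - adapt to your project; the handler only requires
// these fields to exist on the 'product-variants' collection.
import type { CollectionConfig } from 'payload'

export const ProductVariants: CollectionConfig = {
  slug: 'product-variants',
  admin: {
    useAsTitle: 'displayName',
    group: 'Catalog',
  },
  fields: [
    { name: 'product', type: 'relationship', relationTo: 'products', required: true },
    { name: 'displayName', type: 'text', required: true },
    { name: 'color', type: 'text' },
    { name: 'size', type: 'text' },
    { name: 'variantSku', type: 'text', required: true, unique: true },
    { name: 'inStock', type: 'checkbox', defaultValue: true },
  ],
}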
Testing the Complete System
With all components in place, let's create a test CSV file to verify everything works:
name,variantColors,variantSizes,category,sourceUrl
"Classic Terra Stone","Grey,Red,Black","10x10,20x10,20x20","Paving","https://example.com/terra"
"Modern Brick","White,Brown","15x15,30x30","Tiles","https://example.com/brick"
"Simple Paver","Blue","25x25","Decorative","https://example.com/paver"
Save this as /data/products.csv in your project root. The first two rows will create products with multiple variants (9 and 4 respectively), while the third creates a simple product with one variant.
Navigate to your Products collection in the Payload admin. You should see the import component at the top of the page. Click "Start Import" and watch the progress feedback. Within seconds, you should see a success message with the exact counts of products and variants created.
Check the Products and Product Variants collections to verify the records were created correctly. Notice how the SKUs follow the normalized format: CLASSIC_TERRA_STONE-GREY-10X10 for variants and CLASSIC_TERRA_STONE for base products.
If you run the import again, the deduplication logic will skip existing products, demonstrating that the system handles re-runs safely.
Key Concepts and Patterns
This implementation demonstrates several important patterns for Payload development. The queue-first approach provides reliability and user feedback that direct API calls can't match. Using the Local API ensures all validation rules and hooks execute properly, maintaining data integrity.
The error isolation pattern is crucial for bulk operations. By wrapping individual record creation in try-catch blocks, we ensure that one bad row doesn't stop the entire import. The comprehensive logging helps identify and fix data quality issues.
The deduplication strategy using unique identifiers (product names and variant SKUs) allows safe re-runs. This is essential for production systems where imports might need to be repeated or updated.
The admin UI integration makes the system accessible to non-technical users. By embedding import controls directly in the relevant collection views, we create an intuitive workflow that doesn't require technical knowledge or command-line access.
Extending the System
This foundation supports several natural extensions. You could add file upload capabilities to replace the hardcoded CSV path. Progress tracking could provide real-time updates for very large datasets. Validation rules could be enhanced to catch more data quality issues before processing.
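As a starting point for the file-upload extension, the task input could carry the path instead of hardcoding it. A sketch - csvPath is a hypothetical parameter, not part of the code above:

// Sketch: 'csvPath' is a hypothetical input added for illustration
inputSchema: [
  { name: 'importAll', type: 'checkbox', required: false },
  { name: 'csvPath', type: 'text', required: false },
],

// ...and in the handler (after destructuring `input` from its arguments),
// prefer the provided path over the default:
const csvPath = path.resolve(process.cwd(), input?.csvPath ?? 'data/products.csv')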
The queue handler pattern works for any bulk operation - user imports, content migration, data synchronization, or batch processing tasks. The same architecture scales from dozens to thousands of records while maintaining reliability and user experience.
You now have a production-ready CSV import system that handles complex data relationships, provides excellent user experience, and integrates seamlessly with Payload's architecture. The combination of queue reliability, Local API integration, and thoughtful error handling creates a robust solution that will serve your projects well.
Let me know in the comments if you have questions about any part of the implementation, and subscribe for more practical Payload development guides.
Thanks, Matija