How to Upload Files to Cloudflare R2 in a Next.js App

A step-by-step guide to setting up Cloudflare R2 object storage in your Next.js app using presigned URLs and the AWS SDK.

Matija Žiberna

While working on an AI-powered document parser for one of my side projects, I needed a way to let users upload images from their phones or desktops and store them securely in the cloud. At first, I considered using AWS S3, but I wanted something more cost-effective for outbound bandwidth, especially since the files were often accessed publicly. That's when I came across Cloudflare R2.

R2 is a drop-in S3-compatible object storage service that offers zero egress fees, which is ideal for applications that need to serve user-uploaded content. In this guide, I’ll walk you through how I integrated R2 into my Next.js application. The final setup uses presigned URLs for uploads, works well with server components, and is production-ready.

If you're building anything that lets users upload files—like a document scanner, CMS, or file manager—this guide should help you integrate Cloudflare R2 cleanly with your existing Next.js codebase.


Initial Setup

To get started, you need a few key packages. We'll use AWS's official SDK to interact with R2 since R2 is fully S3-compatible. We're also going to use nanoid to generate unique file names and optionally sonner for toast notifications during the upload process.

Run the following command to install everything:

pnpm add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
pnpm add @aws-sdk/lib-storage nanoid sonner

Once that’s done, go into your Cloudflare dashboard and create a new R2 bucket. After that:

  1. Generate an API token with Object Read & Write permissions.
  2. Note your Account ID, Access Key ID, and Secret Access Key. You’ll need them soon.
  3. Optionally, set up a custom domain for serving public files from R2, but this is not required for this guide.

[Screenshot: the Cloudflare R2 dashboard]


R2 Client Configuration

To talk to R2 from your server, we’ll configure an S3 client using AWS SDK. This is where most people run into trouble—especially with region-specific endpoints.

Create a new file: src/lib/r2-client.ts.

This file does three things:

  • Initializes the S3Client with the correct endpoint and credentials.
  • Exports configuration like bucket name and upload limits.
  • Provides utility functions for health checks and error handling.

Here's the key part to understand: R2 has two different endpoint formats, depending on your bucket region. If your R2 bucket is in the EU region, you need to use the .eu.r2.cloudflarestorage.com endpoint. Otherwise, it’s just .r2.cloudflarestorage.com.

The code below sets that automatically based on your environment variables:

import { S3Client, HeadBucketCommand } from '@aws-sdk/client-s3';

const getR2Endpoint = () => {
  const accountId = process.env.R2_ACCOUNT_ID!;
  const isEuRegion = process.env.R2_REGION === 'eu' || process.env.R2_BUCKET_REGION === 'eu';
  return isEuRegion 
    ? `https://${accountId}.eu.r2.cloudflarestorage.com`
    : `https://${accountId}.r2.cloudflarestorage.com`;
};

export const r2Client = new S3Client({
  region: 'auto',
  endpoint: getR2Endpoint(),
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
  forcePathStyle: true,
});

export const R2_CONFIG = {
  bucketName: process.env.R2_BUCKET_NAME!,
  publicUrl: process.env.R2_PUBLIC_URL,
  maxFileSize: 10 * 1024 * 1024, // 10 MB
  allowedMimeTypes: ['image/jpeg', 'image/png', 'image/webp'],
  presignedUrlExpiry: 300, // seconds (5 minutes)
};

// Optional utility to test connection
export async function testR2Connection(): Promise<boolean> {
  try {
    await r2Client.send(new HeadBucketCommand({
      Bucket: R2_CONFIG.bucketName,
    }));
    return true;
  } catch (error) {
    console.error('R2 connection test failed:', error);
    return false;
  }
}

// Convert R2 SDK errors into user-friendly messages
export function handleR2Error(error: any): string {
  if (error.name === 'NoSuchBucket') return 'Bucket does not exist.';
  if (error.name === 'InvalidAccessKeyId') return 'Invalid access key.';
  if (error.name === 'SignatureDoesNotMatch') return 'Authentication failed.';
  if (error.name === 'AccessDenied') return 'Access denied. Check your token permissions.';
  return `R2 operation failed: ${error.message || 'Unknown error'}`;
}

Environment Configuration

Next.js loads environment variables differently on the server and client. You’ll need to set them up carefully to avoid subtle bugs like broken image URLs or failed uploads.

In your .env.local file, add:

# Server-side variables
R2_ACCOUNT_ID=your_account_id
R2_ACCESS_KEY_ID=your_access_key_id
R2_SECRET_ACCESS_KEY=your_secret
R2_BUCKET_NAME=your_bucket
R2_PUBLIC_URL=https://your-domain.com

# Client-side (browser-safe)
NEXT_PUBLIC_R2_ACCOUNT_ID=your_account_id
NEXT_PUBLIC_R2_BUCKET_NAME=your_bucket
NEXT_PUBLIC_R2_PUBLIC_URL=https://your-domain.com

Make sure your .env.example also includes these so other developers don’t miss them. And if you update .env, remember that you need to restart your dev server for changes to take effect.
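To catch a missing variable before it turns into a cryptic SDK error, you could also add a small guard near the top of r2-client.ts. A minimal sketch:

// Fail fast on startup if a required server-side variable is missing
const requiredEnv = [
  'R2_ACCOUNT_ID',
  'R2_ACCESS_KEY_ID',
  'R2_SECRET_ACCESS_KEY',
  'R2_BUCKET_NAME',
] as const;

for (const name of requiredEnv) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}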


File Upload Implementation

The upload system has two parts:

  1. An API route that generates a presigned URL for secure uploading.
  2. A React hook that sends the file to that URL and tracks upload progress.

We’re using presigned URLs here, which means the actual file never touches your server. The server just signs the request. This works well in most modern apps and avoids CORS issues if configured correctly.

Create src/app/api/upload/presigned-url/route.ts and define the POST handler. The server will validate the request, generate a file path, and return a one-time upload URL:

import { NextRequest, NextResponse } from 'next/server';
import { auth } from '@clerk/nextjs/server';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { r2Client, R2_CONFIG } from '@/lib/r2-client';
import { nanoid } from 'nanoid';

export async function POST(request: NextRequest) {
  try {
    // Authentication check (using Clerk in this example).
    // Note: in Clerk v6+, auth() is async; use `const { userId } = await auth();`
    const { userId } = auth();
    
    if (!userId) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }
    
    // Parse request body
    const { fileName, fileType, fileSize, documentType } = await request.json();
    
    // Validate file type
    if (!R2_CONFIG.allowedMimeTypes.includes(fileType)) {
      return NextResponse.json(
        { error: 'Invalid file type' },
        { status: 400 }
      );
    }
    
    // Validate file size
    if (fileSize > R2_CONFIG.maxFileSize) {
      return NextResponse.json(
        { error: 'File too large' },
        { status: 400 }
      );
    }
    
    // Generate unique file key
    const timestamp = new Date().toISOString().split('T')[0]; // YYYY-MM-DD
    const fileId = nanoid(10);
    const fileExtension = fileName.split('.').pop();
    const fileKey = `${userId}/${timestamp}/${documentType}_${fileId}.${fileExtension}`;
    
    // Create presigned URL for PUT operation
    const command = new PutObjectCommand({
      Bucket: R2_CONFIG.bucketName,
      Key: fileKey,
      ContentType: fileType,
      ContentLength: fileSize,
      Metadata: {
        userId,
        originalFileName: fileName,
        documentType,
        uploadedAt: new Date().toISOString(),
      },
    });
    
    const presignedUrl = await getSignedUrl(r2Client, command, {
      expiresIn: R2_CONFIG.presignedUrlExpiry,
    });
    
    return NextResponse.json({
      presignedUrl,
      fileKey,
      filePath: fileKey,
      expiresAt: Date.now() + R2_CONFIG.presignedUrlExpiry * 1000,
    });
    
  } catch (error) {
    console.error('Error generating presigned URL:', error);
    return NextResponse.json(
      { error: 'Failed to generate upload URL' },
      { status: 500 }
    );
  }
}

Next, we’ll write a React hook that you can use in any component where you want to upload files.

Create src/hooks/useFileUpload.ts. This hook handles:

  • Fetching the presigned URL
  • Uploading the file
  • Updating progress state
  • Returning results or errors

import { useState, useCallback } from 'react';
import { toast } from 'sonner';

interface UploadProgress {
  loaded: number;
  total: number;
  percentage: number;
}

interface UploadResult {
  success: boolean;
  fileUrl?: string;
  fileKey?: string;
  error?: string;
  documentId?: string;
}

interface UploadState {
  isUploading: boolean;
  progress: UploadProgress | null;
  error: string | null;
  result: UploadResult | null;
}

export function useFileUpload() {
  const [uploadState, setUploadState] = useState<UploadState>({
    isUploading: false,
    progress: null,
    error: null,
    result: null,
  });
  
  const uploadFile = useCallback(async (
    file: File,
    documentType: string,
    documentId?: string
  ): Promise<UploadResult> => {
    try {
      // Reset state
      setUploadState({
        isUploading: true,
        progress: null,
        error: null,
        result: null,
      });
      
      // Get presigned URL
      toast.info('Preparing upload...');
      const presignedResponse = await fetch('/api/upload/presigned-url', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          fileName: file.name,
          fileType: file.type,
          fileSize: file.size,
          documentType,
        }),
      });
      
      if (!presignedResponse.ok) {
        const errorData = await presignedResponse.json();
        throw new Error(errorData.error || 'Failed to get upload URL');
      }
      
      const { presignedUrl, fileKey } = await presignedResponse.json();
      
      // Upload to R2 with progress tracking
      toast.info('Uploading to cloud storage...');
      await uploadToR2(file, presignedUrl, (progress) => {
        setUploadState(prev => ({
          ...prev,
          progress,
        }));
      });
      
      // Generate the file URL. This code runs in the browser, so only
      // NEXT_PUBLIC_-prefixed variables are available here.
      const publicUrl = process.env.NEXT_PUBLIC_R2_PUBLIC_URL;
      const fileUrl = publicUrl
        ? `${publicUrl}/${fileKey}`
        : `https://${process.env.NEXT_PUBLIC_R2_ACCOUNT_ID}.r2.cloudflarestorage.com/${process.env.NEXT_PUBLIC_R2_BUCKET_NAME}/${fileKey}`;
      
      const result: UploadResult = {
        success: true,
        fileUrl,
        fileKey,
        documentId,
      };
      
      setUploadState({
        isUploading: false,
        progress: null,
        error: null,
        result,
      });
      
      toast.success('Upload completed successfully!');
      return result;
      
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Upload failed';
      
      setUploadState({
        isUploading: false,
        progress: null,
        error: errorMessage,
        result: null,
      });
      
      toast.error(errorMessage);
      
      return {
        success: false,
        error: errorMessage,
      };
    }
  }, []);
  
  return {
    uploadState,
    uploadFile,
  };
}

// Helper function to upload file to R2 using presigned URL
async function uploadToR2(
  file: File,
  presignedUrl: string,
  onProgress: (progress: UploadProgress) => void
): Promise<void> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    
    xhr.upload.addEventListener('progress', (event) => {
      if (event.lengthComputable) {
        const progress = {
          loaded: event.loaded,
          total: event.total,
          percentage: Math.round((event.loaded / event.total) * 100),
        };
        onProgress(progress);
      }
    });
    
    xhr.addEventListener('load', () => {
      if (xhr.status === 200) {
        resolve();
      } else {
        reject(new Error(`Upload failed with status: ${xhr.status}`));
      }
    });
    
    xhr.addEventListener('error', () => {
      reject(new Error('Upload failed'));
    });
    
    xhr.open('PUT', presignedUrl);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.send(file);
  });
}

Integration with Frontend Components

To make this practical, here’s how I used the upload hook in a real component: a camera-based document capture UI.

After a user takes a photo or selects one from their device, the app:

  1. Creates a document entry in the database
  2. Uploads the image to R2
  3. Kicks off background AI processing

Here's the simplified version of that flow:

'use client';

import { useState } from 'react';
import { useRouter } from 'next/navigation';
import { toast } from 'sonner';
import { useFileUpload } from '@/hooks/useFileUpload';

// Minimal shape of the captured image object (adapt to your camera component)
interface CapturedImage {
  file: File;
}

export default function CameraPage() {
  const [isProcessing, setIsProcessing] = useState(false);
  const [processingStep, setProcessingStep] = useState<string>('');
  const { uploadFile, uploadState } = useFileUpload();
  const router = useRouter();

  const handleImageConfirmed = async (image: CapturedImage) => {
    try {
      setIsProcessing(true);
      setProcessingStep('Preparing upload...');
      
      // First, create a document entry in the database
      const documentResponse = await fetch('/api/documents', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          documentType: 'personal-document',
          retentionDays: null, // Keep forever by default
        }),
      });
      
      if (!documentResponse.ok) {
        throw new Error('Failed to create document entry');
      }
      
      const { documentId } = await documentResponse.json();
      
      // Upload the image to R2
      setProcessingStep('Uploading to cloud storage...');
      const uploadResult = await uploadFile(
        image.file,
        'personal-document',
        documentId
      );
      
      if (uploadResult.success) {
        setProcessingStep('Upload completed successfully!');
        
        // Show success message
        toast.success('Document uploaded successfully!', {
          description: `Document ID: ${documentId}. Processing will begin shortly.`
        });
        
        // Trigger background AI processing
        await triggerBackgroundProcessing(documentId);
        
        // Redirect to dashboard
        setTimeout(() => {
          router.push('/dashboard');
          router.refresh();
        }, 1500);
        
      } else {
        throw new Error(uploadResult.error || 'Upload failed');
      }
      
    } catch (error) {
      console.error('Failed to process image:', error);
      toast.error('Failed to process image. Please try again.');
    } finally {
      setIsProcessing(false);
    }
  };

  const triggerBackgroundProcessing = async (documentId: string) => {
    try {
      setProcessingStep('Scheduling AI processing...');
      
      // Trigger processing but don't wait for completion
      fetch('/api/documents/process', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          documentId,
          enableDualVerification: false,
        }),
      }).catch((error) => {
        console.warn('Background processing request failed:', error);
      });
      
      setProcessingStep('Upload completed! Processing in background...');
      
    } catch (error) {
      console.error('Failed to schedule processing:', error);
      toast.error('Failed to schedule processing');
    }
  };

  // ... rest of component
}

You can adapt this flow to any use case: user avatars, document storage, receipts, etc.


Error Handling and Debugging

When things go wrong, it’s usually due to one of the following:

  • Environment variables missing
  • CORS misconfiguration
  • Wrong region endpoint
  • Incorrect credentials

To make debugging easier, you can add:

  • A test endpoint (/api/test/r2-connection) that runs a health check
  • A utility (debugR2Setup()) that runs a full set of validation checks for env and permissions

These are great to run locally before you deploy anything.
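
For reference, the test endpoint can be as small as this, reusing the testR2Connection helper from the client setup (the route path is just a convention):

// src/app/api/test/r2-connection/route.ts
import { NextResponse } from 'next/server';
import { testR2Connection } from '@/lib/r2-client';

export async function GET() {
  const connected = await testR2Connection();
  return NextResponse.json({ connected }, { status: connected ? 200 : 500 });
}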


Critical Gotchas and Solutions

This integration works great, but only if everything is configured properly. Here are the most common issues I’ve run into and how to fix them:

Problem: “NoSuchBucket”

  • Cause: You're using the wrong endpoint for your bucket region.
  • Fix: Make sure your endpoint matches the URL in your Cloudflare dashboard (.eu.r2. for EU region buckets).

Problem: CORS issues with presigned URLs

  • Cause: Cloudflare dashboard CORS settings only apply to public URLs, not API endpoints.
  • Fix: Use server-side uploads instead of presigned URLs, or configure CORS using Wrangler CLI.
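
Wrangler aside, you can also apply a CORS policy to the bucket through the S3 API itself, since R2 supports PutBucketCors. A minimal sketch, assuming the r2Client and R2_CONFIG from earlier (adjust the origins to your domains):

// scripts/set-r2-cors.ts (run once, server-side)
import { PutBucketCorsCommand } from '@aws-sdk/client-s3';
import { r2Client, R2_CONFIG } from '@/lib/r2-client';

async function setCors() {
  await r2Client.send(new PutBucketCorsCommand({
    Bucket: R2_CONFIG.bucketName,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ['http://localhost:3000', 'https://your-app.com'],
          AllowedMethods: ['PUT', 'GET'],
          AllowedHeaders: ['Content-Type'],
          MaxAgeSeconds: 3600,
        },
      ],
    },
  }));
  console.log('CORS policy applied');
}

setCors().catch(console.error);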

Problem: Image URLs start with undefined/...

  • Cause: You're missing the NEXT_PUBLIC_ prefix on client environment variables.
  • Fix: Double-check all client-exposed values use NEXT_PUBLIC_.

Best Practices

Here are a few things I recommend based on working with this setup:

Organize your file paths

Use unique, predictable paths based on user ID and date. This keeps your bucket tidy and easy to clean up later.

function generateFileKey(userId: string, documentType: string, fileName: string): string {
  const timestamp = new Date().toISOString().split('T')[0];
  const fileId = nanoid(10);
  const fileExtension = fileName.split('.').pop();
  return `${userId}/${timestamp}/${documentType}/${fileId}.${fileExtension}`;
}

Enforce file type and size limits

Don't rely only on the client to enforce limits. Check on the server too, as the presigned-URL route above already does with allowedMimeTypes and maxFileSize.

Use multipart uploads for large files

For large files, use multipart uploads via the Upload class from @aws-sdk/lib-storage (installed earlier). Note that 5 MB is the minimum part size; AWS's general guidance is to consider multipart once objects approach around 100 MB. A server-side sketch follows.
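
A minimal server-side sketch, assuming the r2Client from earlier (uploadLargeFile is a hypothetical helper name; the Upload class splits the object into parts and retries them individually):

import { Upload } from '@aws-sdk/lib-storage';
import { r2Client, R2_CONFIG } from '@/lib/r2-client';

// Hypothetical helper: streams a large file to R2 in parts
export async function uploadLargeFile(key: string, body: Buffer, contentType: string) {
  const upload = new Upload({
    client: r2Client,
    params: {
      Bucket: R2_CONFIG.bucketName,
      Key: key,
      Body: body,
      ContentType: contentType,
    },
    partSize: 10 * 1024 * 1024, // 10 MB per part (minimum is 5 MB)
    queueSize: 4, // upload up to 4 parts concurrently
  });

  upload.on('httpUploadProgress', ({ loaded, total }) => {
    console.log(`Uploaded ${loaded} of ${total ?? '?'} bytes`);
  });

  await upload.done();
}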

Separate dev and prod buckets

Use process.env.NODE_ENV to append -dev to your bucket in development.
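
Something like this in r2-client.ts would do it, assuming you've created both buckets:

// Use a separate bucket outside production (assumes a "<name>-dev" bucket exists)
const bucketName = process.env.NODE_ENV === 'production'
  ? process.env.R2_BUCKET_NAME!
  : `${process.env.R2_BUCKET_NAME}-dev`;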


Testing Your Implementation

Before going live, I recommend the following steps:

  1. Test connection at /api/test/r2-connection
  2. Upload a small image and verify the file appears in your R2 dashboard
  3. Try accessing the file using both public URL and presigned URL
  4. Watch the Network tab in dev tools for any failed requests

TL;DR – Minimum Setup

If you're skimming, here's what you absolutely need:

  1. Install the AWS SDK and configure an S3Client with your R2 endpoint
  2. Add .env.local with credentials and public URL
  3. Create an API route to return presigned upload URLs
  4. Upload files via client-side fetch(presignedUrl, { method: 'PUT', body: file })
  5. Use either a public URL or a proxy route to access uploaded files (both sketched below)
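
For step 4, the simplest possible client upload (no progress tracking) is a plain fetch. A minimal sketch:

// Minimal presigned upload; the Content-Type must match what the URL was signed with
async function uploadWithFetch(file: File, presignedUrl: string): Promise<void> {
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
}

And for step 5, if your bucket isn't public, a proxy route can stream objects through your server. A hedged sketch (the route path and catch-all segment are assumptions; adjust to your app):

// src/app/api/files/[...key]/route.ts (hypothetical proxy route)
import { NextResponse } from 'next/server';
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { r2Client, R2_CONFIG } from '@/lib/r2-client';

export async function GET(
  _request: Request,
  { params }: { params: { key: string[] } }
) {
  // NOTE: add your own auth check here before serving private files
  try {
    const object = await r2Client.send(new GetObjectCommand({
      Bucket: R2_CONFIG.bucketName,
      Key: params.key.join('/'),
    }));
    return new Response(object.Body!.transformToWebStream(), {
      headers: { 'Content-Type': object.ContentType ?? 'application/octet-stream' },
    });
  } catch {
    return NextResponse.json({ error: 'File not found' }, { status: 404 });
  }
}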

Conclusion

Cloudflare R2 is a solid choice for storing user-uploaded files, especially when you're looking to avoid egress fees and keep things simple with S3-compatible tooling. Integrating it into a Next.js app takes a bit of careful setup, especially around region-specific endpoints, CORS behavior, and environment variables—but once it's in place, it's fast, reliable, and easy to maintain.

I wrote this guide based on the exact steps I followed when building a document uploader for an AI parser app. If you're building anything similar, I hope this helps you skip the usual gotchas and get straight to a working integration.

If you run into anything unexpected or have improvements, feel free to reach out or suggest changes. I'm always happy to hear how others approach this.

Thanks, Matija
