How to Reduce Vercel Fast Origin Traffic by 95% Using ISR
Convert dynamic pages to static with ISR, client-side data loading, and selective revalidation

Last month, I was shocked to discover my simple Next.js blog was consuming 280MB of Fast Origin Transfer data on Vercel's Hobby tier with just 90 clicks per day. That's over 3MB per visitor on what should be a lightweight static site. After investigating the issue, I realized I had fundamentally misunderstood how Vercel's edge caching works and was forcing unnecessary origin requests. This guide shows you exactly what Fast Origin Transfer means, why even static sites can have high usage, and how I reduced my blog's origin traffic by 95% using Incremental Static Regeneration.
Understanding Fast Origin Transfer
The confusion around Fast Origin Transfer stems from a common misconception about how CDNs work. When you see 280MB of Fast Origin Transfer data, that's not your total bandwidth usage - it's specifically the traffic between Vercel's edge locations and your application's origin server.
Here's how Vercel's edge network actually operates: when you deploy a Next.js application, your static assets (HTML files, JavaScript bundles, CSS, images, and fonts) are distributed to Vercel's global edge network. However, these files aren't automatically cached at every edge location worldwide. Instead, edge servers only cache content when visitors from their region actually request it.
When a visitor from Vienna accesses your site for the first time, the Vienna edge server doesn't have your assets cached yet. It must fetch everything from your origin server - your complete site bundle. This origin-to-edge transfer is what counts against your Fast Origin Transfer quota. Once cached, subsequent visitors from that region are served directly from the Vienna edge server without hitting your origin.
The key insight is that Fast Origin Transfer measures cache misses, not total traffic to users. A site with many repeat visitors from established regions will have low Fast Origin usage, while a site attracting new visitors from diverse geographic locations will see higher origin traffic as edge caches are warmed up globally.
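You can observe this behavior directly: Vercel reports the edge cache status of each response in the x-vercel-cache header. A quick check in TypeScript (the URL is a placeholder):

// Inspect Vercel's edge cache status for a page.
// "MISS" means the edge had to pull the response from origin - that transfer
// counts toward Fast Origin Transfer. "HIT" means it was served from the edge cache.
const res = await fetch('https://your-site.vercel.app/blog/some-post')
console.log(res.headers.get('x-vercel-cache')) // "MISS" on the first regional request, "HIT" afterwards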
Why My Static Blog Had High Origin Traffic
My blog appeared to be a perfect candidate for static generation. It's built with Next.js, serves markdown content from Sanity CMS, and updates maybe once or twice per week. In theory, every page should be cached indefinitely at edge locations, resulting in minimal Fast Origin Transfer.
The reality was quite different. Despite generating static HTML at build time, my blog was consuming 3MB per visitor in origin traffic. After analyzing my Vercel deployment configuration, I discovered several critical issues that were preventing proper edge caching.
The primary culprit was a single line in my blog post page component:
// File: src/app/blog/[slug]/page.tsx
export const dynamic = 'force-dynamic'
This configuration forced every blog post request to be processed dynamically on the server, completely bypassing static generation. Instead of serving pre-built HTML files from edge locations, Vercel was routing every request back to my origin server to generate pages on-demand.
The reason I had added force-dynamic was to support server-side comment loading. My blog includes a custom comment system that fetches data during server-side rendering to display existing comments immediately when pages load. While this provided a better user experience by eliminating loading states, it came at the cost of making every page dynamic.
The second issue was related to how frequently my cache was being invalidated. I had implemented a daily cron job that triggered a complete site rebuild through Vercel's build hook system. While this ensured content freshness, it also invalidated cached assets at every edge location worldwide, forcing the next visitor in each region to re-fetch everything from origin.
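For context, the old setup looked roughly like this (the route path, env var name, and schedule are illustrative reconstructions, not my exact code):

// File: src/app/api/daily-rebuild/route.ts (the old approach, reconstructed)
// A cron-invoked route that POSTs to a Vercel Deploy Hook, rebuilding the
// entire site and invalidating every edge cache, even when nothing changed.
export async function GET() {
  await fetch(process.env.VERCEL_DEPLOY_HOOK_URL!, { method: 'POST' })
  return Response.json({ rebuildTriggered: true })
}

// Scheduled via a vercel.json cron entry such as:
// { "crons": [{ "path": "/api/daily-rebuild", "schedule": "0 3 * * *" }] }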
Converting Comments to Client-Side Loading
The first step in reducing Fast Origin Transfer was eliminating the force-dynamic configuration. This required changing how my blog handles comment data loading from server-side to client-side. I had previously built a custom comment system for my Next.js blog using Sanity CMS that stored comments as Sanity documents and displayed them through server-side rendering.
// File: src/app/blog/[slug]/page.tsx (Before)
export const dynamic = 'force-dynamic'

export default async function BlogPostPage({ params }: { params: { slug: string } }) {
  const post = await sanityFetch({ query: POST_QUERY, params })
  const comments = await getAllComments(post._id) // Server-side comment fetching

  return (
    <article>
      <BlogContent post={post} />
      <Comments comments={comments} />
    </article>
  )
}
The server-side approach required dynamic rendering because comment data changes frequently and couldn't be determined at build time. By moving comment loading to the client side, I could make blog posts completely static while preserving the comment functionality.
// File: src/app/blog/[slug]/page.tsx (After)
export const revalidate = 604800 // 7 days in seconds - note: must be a literal number, not 60 * 60 * 24 * 7

export default async function BlogPostPage({ params }: { params: { slug: string } }) {
  const post = await sanityFetch({ query: POST_QUERY, params })

  return (
    <article>
      <BlogContent post={post} />
      <Comments postId={post._id} />
    </article>
  )
}
The new approach enables static generation with a weekly revalidation fallback. Blog posts are generated as static HTML at build time, then cached indefinitely at edge locations. The comment component loads its data client-side after the page renders, preserving the functionality while allowing the main content to be served from the CDN.
// File: src/components/comments/Comments.tsx
'use client'

import { useState, useEffect } from 'react'
// Presentation components (import paths illustrative)
import { CommentList } from './CommentList'
import { CommentsSkeleton } from './CommentsSkeleton'

export function Comments({ postId }: { postId: string }) {
  const [comments, setComments] = useState([])
  const [loading, setLoading] = useState(true)

  useEffect(() => {
    fetch(`/api/comments/${postId}`)
      .then(res => res.json())
      .then(data => {
        setComments(data.comments || [])
        setLoading(false)
      })
      .catch(() => setLoading(false))
  }, [postId])

  if (loading) {
    return <CommentsSkeleton />
  }

  return <CommentList comments={comments} />
}
This client-side loading approach provides progressive enhancement. Users see the main blog content immediately from the edge cache, then comments load asynchronously. The loading skeleton ensures the user experience remains smooth while preserving the performance benefits of static generation.
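For completeness, here is what the API route behind that fetch can look like: a thin wrapper around the same getAllComments helper the old server-side version used (the import path is illustrative). A minimal sketch:

// File: src/app/api/comments/[postId]/route.ts
import { NextResponse } from 'next/server'
import { getAllComments } from '@/lib/comments' // path illustrative

export async function GET(
  _request: Request,
  { params }: { params: { postId: string } }
) {
  // Fetch comments for the given post from Sanity and return them as JSON
  const comments = await getAllComments(params.postId)
  return NextResponse.json({ comments })
}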
Implementing Incremental Static Regeneration
With dynamic rendering eliminated, I could implement proper static generation with ISR. This approach generates all blog posts as static HTML at build time, then serves them from edge locations with minimal origin requests.
// File: src/app/blog/[slug]/page.tsx
export const revalidate = 604800 // Weekly revalidation as fallback

export async function generateStaticParams() {
  const allPosts = await sanityFetch({
    query: POSTS_QUERY,
    tags: ['post']
  })

  return allPosts.map((post) => ({
    slug: post.slug?.current,
  }))
}
The generateStaticParams function tells Next.js to pre-generate static pages for all existing blog posts at build time. Combined with the revalidate configuration, this creates a hybrid approach where pages are static by default but can be updated when necessary.
The weekly revalidation serves as a safety net, ensuring that even if my webhook system fails, content will eventually refresh. However, the real magic happens with on-demand revalidation triggered by content management system webhooks.
Instead of daily full site rebuilds, I implemented selective revalidation that only updates pages when their content actually changes. This required setting up a secure webhook endpoint that Sanity CMS could trigger whenever blog content is modified.
// File: src/app/api/revalidate/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { revalidatePath } from 'next/cache'

export async function POST(request: NextRequest) {
  // Webhook signature verification omitted here for brevity (see below)
  const { _type, slug } = await request.json()

  if (_type === 'post' && slug?.current) {
    // Only revalidate the specific post that changed
    revalidatePath(`/blog/${slug.current}`)
    revalidatePath('/blog') // Update blog listing

    return NextResponse.json({
      revalidated: true,
      paths: [`/blog/${slug.current}`, '/blog']
    })
  }

  return NextResponse.json({ message: 'No revalidation needed' })
}
The webhook is configured in Sanity Studio to trigger only for relevant content types (posts, categories, authors) using GROQ filters, and includes signature verification for security. For the complete webhook implementation including authentication and GROQ optimization, see my detailed guide on creating secure Sanity CMS webhooks with Next.js App Router.
This selective approach means that updating a single blog post only invalidates that specific page and the blog listing, rather than forcing a complete cache refresh across all edge locations.
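One detail worth sketching: the handler above trusts any POST body as-is. The production version validates Sanity's signature header before parsing. A minimal sketch, assuming the @sanity/webhook helper package and a SANITY_WEBHOOK_SECRET environment variable:

// File: src/app/api/revalidate/route.ts (signature check, sketch)
import { NextRequest, NextResponse } from 'next/server'
import { isValidSignature, SIGNATURE_HEADER_NAME } from '@sanity/webhook'

export async function POST(request: NextRequest) {
  // Read the raw body first - the signature is computed over the exact bytes
  const body = await request.text()
  const signature = request.headers.get(SIGNATURE_HEADER_NAME)

  if (!signature || !(await isValidSignature(body, signature, process.env.SANITY_WEBHOOK_SECRET!))) {
    return NextResponse.json({ message: 'Invalid signature' }, { status: 401 })
  }

  const { _type, slug } = JSON.parse(body)
  // ...selective revalidation as shown above
  return NextResponse.json({ revalidated: true })
}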
Measuring the Impact
The results of converting to proper static generation with ISR were dramatic. Before the optimization, my blog was generating approximately 280MB of Fast Origin Transfer with just 90 daily visitors. After implementing static generation and eliminating unnecessary cache invalidation, the same traffic pattern resulted in less than 15MB of origin transfer.
The improvement breaks down as follows: the first visitor to each edge region still triggers the initial cache warming, pulling the complete site bundle from origin. However, subsequent visitors from that region are served entirely from the edge cache, resulting in zero additional Fast Origin Transfer.
With the previous dynamic rendering approach, every single page request required origin processing, regardless of whether the visitor was new or returning. The static generation approach means that popular content regions quickly build up cached assets, dramatically reducing origin requests over time.
The build output clearly shows the transformation:
Route (app)                                           Size      First Load JS
├ ● /blog/[slug]                                      24.8 kB   442 kB
│   ├ /blog/payload-cms-instant-development-workflow
│   ├ /blog/nextjs-builder-io-server-data-components
│   └ [+117 more paths]

●  (SSG)  prerendered as static HTML (uses generateStaticParams)
The ● symbol indicates static site generation, confirming that all 120 blog posts are now pre-generated as static HTML files. These files can be cached indefinitely at edge locations, serving visitors without origin requests.
Long-Term Edge Caching Benefits
The true power of this approach becomes apparent over time as edge caches remain warm across different regions. Since blog content changes infrequently, cached assets stay valid for extended periods. A blog post published months ago continues serving visitors from edge locations without ever hitting the origin server again.
This creates a virtuous cycle where growing traffic actually improves the Fast Origin Transfer ratio. As more visitors access the site from established regions, a higher percentage of requests are served from cache, reducing the average origin transfer per visitor.
The contrast with the previous daily rebuild approach is stark. Those complete cache invalidations meant that every edge location had to re-fetch all assets daily, regardless of whether content had actually changed. With selective revalidation, only pages with actual updates trigger origin requests.
For a blog publishing 1-2 posts per week, this means 95% of the content remains cached at edges indefinitely. Only the new posts and updated blog listing require origin transfer, while the vast majority of page views are served directly from the CDN.
Production Considerations
When implementing ISR for Fast Origin Transfer optimization, several factors can impact your results. Image optimization plays a crucial role - ensure you're using the Next.js Image component rather than standard HTML img tags. The Image component automatically optimizes images through Vercel's edge network, reducing both file sizes and origin requests.
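For example, a cover image component might look like this (the component name and dimensions are illustrative):

// File: src/components/PostCover.tsx (illustrative)
import Image from 'next/image'

// next/image serves resized, optimized variants from Vercel's image CDN
// instead of shipping the original file from origin on every request.
export function PostCover({ src, alt }: { src: string; alt: string }) {
  return <Image src={src} alt={alt} width={1200} height={630} />
}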
Cache headers configuration affects how long assets remain cached at edge locations. While ISR handles page-level caching automatically, custom cache headers on API routes and static assets can further reduce origin transfer by extending cache durations for stable content.
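For instance, a rarely-changing API route can opt into long edge caching explicitly. A sketch with a hypothetical endpoint and values:

// File: src/app/api/site-config/route.ts (hypothetical stable endpoint)
import { NextResponse } from 'next/server'

export async function GET() {
  const config = { theme: 'dark', postsPerPage: 10 } // stable, rarely-changing data

  return NextResponse.json(config, {
    headers: {
      // Cache at the edge for a day; serve stale copies while revalidating
      // in the background instead of blocking on an origin round trip.
      'Cache-Control': 's-maxage=86400, stale-while-revalidate=3600',
    },
  })
}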
Monitoring your Fast Origin Transfer patterns helps optimize the strategy over time. Vercel's analytics dashboard shows which pages and assets are consuming origin bandwidth, allowing you to identify opportunities for further optimization.
Geographic traffic patterns influence caching effectiveness. A site with visitors concentrated in a few regions will see better Fast Origin Transfer ratios than one with globally distributed traffic, since edge caches warm up more efficiently with regional concentration.
Conclusion
Reducing Vercel Fast Origin Transfer by 95% required understanding the fundamental difference between total bandwidth and origin requests. My blog's high usage wasn't due to heavy content or large asset sizes, but rather poor caching strategy that forced every request to hit the origin server.
The combination of proper static generation, client-side data loading for dynamic features, and selective cache invalidation created a system where content is served primarily from edge locations. Instead of 280MB of origin transfer for 90 daily visitors, the same traffic pattern now generates less than 15MB of origin requests.
This optimization approach scales naturally with traffic growth. As more visitors access cached content from established edge locations, the Fast Origin Transfer ratio continues improving. The key insight is that static content should remain static, with dynamic features implemented through client-side loading rather than server-side rendering.
For developers hitting Hobby tier limits or optimizing for cost efficiency, auditing your dynamic rendering usage and implementing proper ISR can provide dramatic improvements in both performance and resource consumption. Let me know in the comments if you have questions about optimizing your Fast Origin Transfer usage, and subscribe for more practical performance guides.
Thanks, Matija