OpenNext on AWS: The Honest 2026 Guide to Self-Hosting
Practical architecture, ISR, image optimization, cold-start strategies, and Amplify comparison for Next.js deployments.

OpenNext lets you deploy a Next.js application to AWS — Lambda for SSR, S3 for static assets, CloudFront for the CDN layer — without Vercel. It works, it is used in production, and the ecosystem around it matured significantly in 2025 and early 2026 thanks to the Next.js Deployment Adapter API initiative. That said, it is not a managed platform. Running Next.js on AWS through OpenNext means owning a distributed serverless architecture, and that comes with real operational weight. This guide covers what OpenNext actually is under the hood, what the AWS architecture looks like in practice, what works well today, what is still genuinely rough, and how it compares to AWS Amplify Gen 2 — so you can make a clear decision rather than discover surprises in production.
I have been tracking this ecosystem closely while evaluating non-Vercel deployment options for Next.js projects. The research below reflects the state of OpenNext as of March 2026, including the Adapter API progress, the AWS and Cloudflare adapter status, and real developer feedback on where the pain actually lives.
What OpenNext Is (and What It Is Not)
OpenNext is not a hosting platform. There is no dashboard, no managed service, no support contract. It is a build-time translation layer. When you run a Next.js build, the output is designed to run on Vercel's infrastructure — it assumes specific primitives for edge functions, ISR storage, image optimization, and caching that AWS does not provide natively. OpenNext takes that build output and reshapes it into components that AWS services can actually run.
The output of an OpenNext build looks roughly like this: a server-side rendering Lambda function, a separate image optimization Lambda, a middleware function, a static assets bundle for S3, and a set of CloudFront configuration requirements. Each piece maps to a Next.js concern. The SSR Lambda handles server components and API routes. The image optimization Lambda runs next/image processing. The middleware function handles route matching and rewrites. S3 holds everything that does not need a runtime. CloudFront sits in front of all of it.
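As a rough sketch, an `npx open-next build` lands this output in a `.open-next/` directory along the following lines. The exact layout varies by OpenNext version, and the names here are illustrative:

```text
.open-next/
├── assets/                       → static files destined for S3
├── cache/                        → pre-rendered ISR cache entries
├── server-functions/
│   └── default/                  → SSR Lambda bundle (handler + Next.js server)
├── image-optimization-function/  → next/image Lambda bundle
└── middleware/                   → middleware bundle
```

Each directory maps directly to one of the components described above; the deployment step (SST, CDK, or Terraform) wires them to their AWS services.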
SST (the Serverless Stack framework) and OpenNext have a close relationship that is worth clarifying. OpenNext is the adapter — the thing that transforms the build. SST is a deployment framework that uses OpenNext under the hood and provisions the AWS infrastructure for you. They are separate projects with separate repositories and separate release cycles. When you read about "SST v3 + Next.js", that is SST using OpenNext to handle the Next.js-specific translation. If you want to deploy OpenNext without SST, you can, but you take on the infrastructure provisioning yourself via CDK, Terraform, or the AWS console.
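For context, here is a minimal sketch of what an SST v3 `sst.config.ts` looks like for this path. The app and component names are arbitrary placeholders:

```typescript
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: "my-next-app", // placeholder name
      home: "aws",
    };
  },
  async run() {
    // This single component drives OpenNext under the hood and provisions
    // the SSR Lambda, image Lambda, S3 bucket, CloudFront distribution,
    // and ISR revalidation resources for you.
    new sst.aws.Nextjs("MyWeb");
  },
});
```

The brevity is the point: SST hides the provisioning that a direct OpenNext deployment makes you own yourself.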
The AWS Architecture in Practice
Understanding the component model matters before you commit to this path. Here is what actually gets deployed.
The SSR Lambda handles any request that requires server-side rendering — React Server Components, dynamic routes, server actions, API routes. This is a standard Node.js Lambda with a custom handler that OpenNext provides. It receives forwarded requests from CloudFront and returns full HTML or JSON responses.
The image optimization Lambda is a separate function that handles next/image requests. Next.js assumes a dedicated image optimization service exists at /_next/image. On Vercel this is built in. On AWS, OpenNext deploys a Lambda that accepts image requests, fetches the source image, resizes and converts it, and returns the optimized result. This function can become a bottleneck under heavy image load if it is under-provisioned.
The middleware function runs your middleware.ts logic. In the OpenNext architecture this runs in the same Lambda environment as SSR or as a lightweight edge function depending on your configuration. It handles authentication checks, redirects, rewrites, and locale detection before requests reach the main renderer.
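To make the middleware's role concrete, here is a sketch of the kind of decision logic a typical `middleware.ts` performs before a request reaches the renderer. The paths and the session check are assumptions for illustration, not part of OpenNext:

```typescript
// Pure decision logic of a typical middleware: auth gate + legacy rewrite.
// In a real middleware.ts you would wrap this with NextRequest/NextResponse
// and export a `config.matcher` limiting which paths invoke it.
type Decision =
  | { action: "next" }
  | { action: "redirect"; to: string };

function routeRequest(pathname: string, hasSessionCookie: boolean): Decision {
  // Auth gate: the protected area requires a session (assumed cookie check).
  if (pathname.startsWith("/dashboard") && !hasSessionCookie) {
    return { action: "redirect", to: "/login" };
  }
  // Legacy rewrite: old blog URLs move to /posts (assumed migration).
  if (pathname.startsWith("/blog/")) {
    return { action: "redirect", to: pathname.replace("/blog/", "/posts/") };
  }
  return { action: "next" };
}
```

Whether this runs at the edge or inside the SSR Lambda changes its latency profile, but not its logic.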
S3 stores all statically generated output — /_next/static/, pre-rendered HTML pages, and any files from your public/ directory. CloudFront serves these directly without hitting Lambda. This is where the cost efficiency of self-hosting comes from: static-heavy apps pay almost nothing for compute.
CloudFront routes traffic across all of these components. Static requests go to S3. Dynamic requests and image optimization requests get forwarded to the appropriate Lambda. The routing configuration — which behaviors hit which origins — is the most complex part of the initial setup and the most common source of misconfiguration errors.
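The routing logic above can be modelled as a simple path-to-origin mapping. The path patterns below are the conventional Next.js ones; the exact CloudFront behaviours depend on your OpenNext version and configuration:

```typescript
// Illustrative model of the CloudFront behaviour → origin mapping.
type Origin = "s3-static" | "image-lambda" | "ssr-lambda";

function selectOrigin(path: string): Origin {
  // next/image requests go to the image optimization Lambda.
  if (path.startsWith("/_next/image")) return "image-lambda";
  // Hashed build assets are served straight from S3.
  if (path.startsWith("/_next/static/")) return "s3-static";
  // Files from public/ (rough extension check) also come from S3.
  if (/\.(ico|png|jpg|svg|txt|xml)$/.test(path)) return "s3-static";
  // Everything else — pages, server actions, API routes — hits the SSR Lambda.
  return "ssr-lambda";
}
```

Getting one of these behaviours wrong (for example, letting `/_next/static/` fall through to the SSR Lambda) is exactly the class of misconfiguration that shows up as surprise Lambda bills or 403s from S3.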
The ISR revalidation queue is the piece that catches people off guard. Incremental Static Regeneration on AWS requires coordinating between the SSR Lambda (which detects that a page needs revalidation), a queue (SQS or similar), and a revalidation worker that regenerates the page and writes the new HTML back to S3, then invalidates the CloudFront cache. On Vercel this is invisible. On AWS you own this coordination layer. OpenNext provides the scaffolding, but you need to understand what it is doing and monitor it.
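The stale-while-revalidate decision at the heart of that pipeline looks roughly like this. The field names are illustrative, not the actual OpenNext internals:

```typescript
// Sketch of the ISR decision the SSR Lambda makes per cached page.
interface CacheEntry {
  generatedAt: number;       // epoch ms when the HTML was last generated
  revalidateSeconds: number; // the `revalidate` value exported by the page
}

type IsrAction = "serve-fresh" | "serve-stale-and-enqueue";

function checkIsr(entry: CacheEntry, now: number): IsrAction {
  const ageSeconds = (now - entry.generatedAt) / 1000;
  // Within the revalidate window: serve the cached copy as-is.
  if (ageSeconds < entry.revalidateSeconds) return "serve-fresh";
  // Past the window: still serve the stale copy immediately, but enqueue a
  // revalidation message so a worker regenerates the page, writes the new
  // HTML to S3, and invalidates the CloudFront cache.
  return "serve-stale-and-enqueue";
}
```

Every arrow in that second branch — enqueue, regenerate, write, invalidate — is a separately failing step you own on AWS.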
What Works Well Today
OpenNext's AWS adapter has broad feature coverage for the most common Next.js patterns. Here is an honest status table:
| Feature | Status | Notes |
|---|---|---|
| App Router (Server Components) | Supported | Primary focus of OpenNext development |
| API Routes | Supported | Handled through the SSR Lambda |
| Static generation (SSG) | Supported | S3 + CloudFront, no Lambda involved |
| ISR — time-based revalidation | Supported | Works via revalidation queue + S3 write |
| ISR — on-demand revalidation | Supported | revalidatePath and revalidateTag work |
| Streaming responses | Supported | Lambda streaming responses supported |
| Image optimization | Supported | Dedicated Lambda handler |
| Middleware | Supported | Runs in Lambda or edge-compatible runtime |
| Partial Prerendering (PPR) | Limited | Requires advanced CDN behaviours not trivial to replicate on CloudFront |
| Pages Router | Supported | Though the ecosystem prioritises App Router |
ISR deserves more attention than a table row. On-demand revalidation using revalidateTag and revalidatePath works, but the cache tagging infrastructure requires DynamoDB for tag-to-page mapping by default. This is an additional AWS resource you need to provision, monitor, and pay for. It is not difficult, but it is not automatic either. Cache invalidation latency is also higher than Vercel's because CloudFront invalidation has a propagation delay measured in seconds to minutes rather than being near-instant.
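Conceptually, the DynamoDB table is a tag-to-path index. Here is a minimal in-memory model of what `revalidateTag` has to resolve — the class and its shape are an illustration, not the real OpenNext schema:

```typescript
// In-memory model of the tag → path mapping kept in DynamoDB.
class TagIndex {
  private byTag = new Map<string, Set<string>>();

  // Called when a page is rendered with cache tags attached.
  record(path: string, tags: string[]): void {
    for (const tag of tags) {
      if (!this.byTag.has(tag)) this.byTag.set(tag, new Set());
      this.byTag.get(tag)!.add(path);
    }
  }

  // revalidateTag("posts") resolves to the set of pages whose cached HTML
  // must be regenerated and whose CloudFront paths must be invalidated.
  pathsFor(tag: string): string[] {
    return [...(this.byTag.get(tag) ?? [])].sort();
  }
}
```

If this index drifts out of sync with what is actually cached, tag-based revalidation silently misses pages — which is why it needs its own monitoring.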
What Is Still Rough
This is the section that is missing from every vendor-maintained resource. Three areas have real operational weight.
Cold starts. Every Lambda function has a cold start problem — the first invocation after a period of inactivity spins up a new execution environment, which takes time. For the SSR Lambda this can be 500ms to 1500ms depending on the bundle size and the Node.js runtime initialisation. There are mitigation strategies: Lambda Provisioned Concurrency keeps instances warm but costs money even when idle; scheduled warmer functions hit the Lambda on a timer to keep it active; splitting the bundle smaller reduces initialisation time. None of these are automatic. On Vercel, warm infrastructure is managed for you. On AWS, you plan for this explicitly.
ISR caching misconfiguration. The most common production failure mode reported by developers is stale content caused by a misconfigured revalidation pipeline. The sequence — detect stale page, queue revalidation, write new HTML to S3, invalidate CloudFront — has several failure points. If the revalidation worker Lambda has insufficient permissions to write to S3, revalidation silently fails. If CloudFront invalidation is not triggered after the S3 write, the old cached response keeps serving. If the DynamoDB tag table gets out of sync, tag-based revalidation stops working. Monitoring this requires CloudWatch alarms across multiple services.
Upgrade and maintenance rhythm. OpenNext tracks Next.js closely, but it is not an official Vercel product. When Next.js ships a breaking internal change, there is a lag before the OpenNext adapter catches up. This has historically caused periods where upgrading Next.js breaks the deployment until a new OpenNext version is released. The Adapter API initiative (described below) is specifically designed to eliminate this problem, but it is not yet fully resolved in 2026.
The Adapter API: What It Means for Production Confidence
The Next.js Deployment Adapter API is an initiative to give platforms a stable, documented integration point with Next.js — so they are no longer reverse-engineering internal build output and fragile implementation details. Before this initiative, OpenNext essentially worked by inspecting the .next build directory and reconstructing what it found into AWS-deployable components. When Vercel changed internals (which they do frequently), OpenNext broke.
The Adapter API creates a formal contract between Next.js and deployment platforms. Platforms implement the adapter interface; Next.js guarantees stability on its side of that interface. This is genuinely important progress.
As of early 2026, this initiative is advancing but not finalized. The API is an evolving specification rather than a fully stable, versioned contract. The AWS adapter through OpenNext is not yet a "verified adapter" in the official Next.js sense — that designation, which implies passing the full Next.js test suite, is expected later in 2026. For teams evaluating OpenNext right now, this means the integration is meaningfully more stable than it was in 2023–2024, but you should still test your specific Next.js version against the current OpenNext release before upgrading either in production.
OpenNext vs AWS Amplify Gen 2
AWS Amplify Gen 2 is Amazon's own managed deployment platform for Next.js, and the most common comparison when you are already AWS-native. Here is how they stack up:
| | OpenNext on AWS | AWS Amplify Gen 2 |
|---|---|---|
| Deployment model | DIY serverless (Lambda + S3 + CloudFront) | Managed (Amplify Console) |
| Infrastructure control | High — full ownership of every component | Low — abstracted by Amplify |
| App Router support | Full | Full |
| ISR — on-demand revalidation | Supported | Limited (primarily time-based) |
| Streaming support | Full | Partial |
| Image optimization | Dedicated Lambda handler, configurable | Built-in but less configurable |
| Middleware | Supported | Partial |
| Cold start behaviour | Variable — requires mitigation strategy | More consistent (managed) |
| Operational complexity | High | Low |
| Setup time | Days to weeks | Minutes to hours |
| Cost at scale | Pay-per-use, typically 10–30% cheaper at high traffic | Tiered, more predictable |
| Debugging | Distributed — requires CloudWatch, X-Ray | Integrated Amplify tooling |
| Vendor lock-in | Medium (AWS-specific but portable across services) | High (Amplify-specific) |
The decision framework is straightforward. Choose OpenNext when your team has strong AWS and DevOps capability, your application uses advanced ISR patterns (tag-based revalidation, on-demand invalidation), and you are optimising cost at meaningful traffic scale. Choose Amplify Gen 2 when you want a managed deployment experience, your ISR requirements are simple, and you do not have bandwidth to own the infrastructure layer.
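To put a rough number on the cost argument, here is back-of-envelope Lambda compute arithmetic for the SSR function. The rates are the published us-east-1 x86 prices at the time of writing ($0.20 per million requests, $0.0000166667 per GB-second) — verify current pricing before relying on this, and note it ignores the free tier, S3, CloudFront, and DynamoDB costs:

```typescript
// Illustrative monthly Lambda compute cost for the SSR function.
const PRICE_PER_MILLION_REQUESTS = 0.2;      // USD, us-east-1 x86 (verify)
const PRICE_PER_GB_SECOND = 0.0000166667;    // USD, us-east-1 x86 (verify)

function monthlySsrCostUSD(
  requestsPerMonth: number,
  avgDurationMs: number,
  memoryGB: number,
): number {
  const requestCost =
    (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const gbSeconds = requestsPerMonth * (avgDurationMs / 1000) * memoryGB;
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}

// Example: 5M SSR requests/month at 120 ms average on a 1 GB function
// → $1.00 in requests + ~$10.00 in compute ≈ $11/month.
```

The corollary is that for static-heavy apps, where most requests never reach Lambda, the compute line stays small regardless of traffic.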
For teams currently using Amplify Gen 2 and running into limitations with on-demand revalidation or middleware, OpenNext is the natural migration path. For teams starting fresh with an MVP, Amplify Gen 2 is almost always the right starting point and you can migrate later if the limitations become real problems.
If you are self-hosting a Payload CMS application alongside your Next.js frontend, the infrastructure considerations expand further — this guide on deploying Payload CMS with Next.js in a self-hosted setup covers the Docker, Nginx, and database configuration that runs alongside whatever Next.js deployment strategy you choose.
OpenNext and Cloudflare: A Note on the Growing Alternative
Searches for opennext cloudflare grew 80% year-over-year in the twelve months to early 2026. It is worth understanding why.
The Cloudflare adapter for OpenNext targets Cloudflare Workers rather than Lambda. The architectural model is different: Workers run at the edge globally with no cold start problem in the Lambda sense, and Cloudflare's caching layer is deeply integrated rather than being a separate CDN configuration. The trade-off is that Workers have a different runtime environment with constraints (no Node.js-specific APIs, different memory model) and the ISR support is less mature than the AWS adapter.
The Cloudflare adapter is maintained with significant Cloudflare input alongside the OpenNext project but is a separate codebase from the AWS adapter. If your priority is edge-first global performance and simpler operational overhead, the Cloudflare path is worth evaluating. If you are AWS-native, already have IAM, VPC, and CloudWatch in place, and need mature ISR support, the AWS adapter is the better-supported choice.
Should You Use It Today?
Here is the honest guidance.
Use OpenNext on AWS today if your team is comfortable with distributed serverless architecture on AWS, your application needs advanced ISR with on-demand revalidation, and you have capacity to invest in monitoring setup across Lambda, S3, CloudFront, and SQS. The functionality is there and it is used in real production deployments. The operational overhead is real but manageable with proper planning.
Wait or use Amplify Gen 2 if you want a deployment that is production-stable without infrastructure investment, your ISR requirements are basic, or your team does not have AWS DevOps depth. The verified adapter milestone expected later in 2026 will be a meaningful signal — once the AWS adapter passes the full Next.js test suite and receives official designation, the upgrade stability concern goes away.
The ecosystem is moving in a clear direction. The Adapter API initiative will formalize what has been informal. Verified adapter status will eliminate the upgrade fragility that is currently the biggest argument against OpenNext. If you start building on OpenNext now with an understanding of its current limitations, you are positioning well for what the ecosystem looks like in twelve months.
FAQ
Does OpenNext support Next.js 15 and 16?
The OpenNext AWS adapter tracks the latest Next.js releases. Check the OpenNext GitHub for the current compatibility matrix before upgrading Next.js in production — there is typically a short lag between a Next.js release and a verified OpenNext update.
Is SST required to use OpenNext?
SST is not required. SST is a deployment framework that uses OpenNext internally and handles the AWS infrastructure provisioning for you. You can use OpenNext directly and provision the infrastructure yourself with CDK, Terraform, or CloudFormation. SST is the easiest path; direct OpenNext gives you more control.
How do I handle cold starts in production?
The practical options are Lambda Provisioned Concurrency (keeps instances warm but costs money at idle), a scheduled EventBridge rule that pings the SSR Lambda every few minutes, and bundle size reduction to minimise initialisation time. For most production apps, a scheduled warmer hitting the Lambda every 5 minutes is the lowest-cost approach that keeps p99 latency acceptable.
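As a sketch of the warmer pattern, the fan-out logic below builds the invocation inputs; the function name and ping payload are assumptions for illustration. In the real scheduled handler you would send each input with `@aws-sdk/client-lambda`'s `InvokeCommand`, and the SSR handler would short-circuit requests carrying the warmer payload:

```typescript
// Fan-out logic for a scheduled warmer (EventBridge rule → handler).
interface InvokeInput {
  FunctionName: string;
  InvocationType: "RequestResponse";
  Payload: string;
}

function buildWarmerInvokes(fnName: string, concurrency: number): InvokeInput[] {
  // Concurrent synchronous pings overlap in time, which pushes Lambda to
  // keep `concurrency` execution environments alive rather than just one.
  return Array.from({ length: concurrency }, () => ({
    FunctionName: fnName,
    InvocationType: "RequestResponse",
    Payload: JSON.stringify({ warmer: true }), // assumed ping payload
  }));
}

// Real handler (sketch):
//   const client = new LambdaClient({});
//   await Promise.all(buildWarmerInvokes(ssrName, 3).map(
//     (i) => client.send(new InvokeCommand(i))));
```

Size `concurrency` to your expected steady-state parallelism — warming one environment does nothing for the second simultaneous request.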
Can I use OpenNext with Payload CMS on the same deployment?
Yes, though Payload CMS adds its own architectural considerations. Payload runs as a Node.js server, so you need a persistent compute layer (ECS, EC2, or a long-running Lambda with reserved concurrency) alongside the serverless Next.js layer. The Payload CMS jobs and worker role separation guide covers the web/worker architecture pattern that translates well to an AWS multi-service setup.
What monitoring should I set up from day one?
At minimum: a CloudWatch alarm on SSR Lambda error rate and duration, an alarm on image optimization Lambda throttling, an S3 event notification confirming revalidation writes are succeeding, and a CloudFront cache hit rate metric. If cache hit rate drops unexpectedly, it usually means your revalidation pipeline is broken and every request is hitting Lambda.
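If you provision with CDK, the Lambda-side alarms from that list look roughly like this. It assumes `scope`, `ssrFn`, and `imageFn` are your construct scope and the two `lambda.Function` constructs; the thresholds are starting points, not recommendations:

```typescript
import { Duration } from "aws-cdk-lib";
import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";

// SSR Lambda error rate: any sustained errors deserve a page.
new cloudwatch.Alarm(scope, "SsrErrors", {
  metric: ssrFn.metricErrors({ period: Duration.minutes(5) }),
  threshold: 5,
  evaluationPeriods: 1,
});

// SSR Lambda p99 duration: catches bundle bloat and cold-start regressions.
new cloudwatch.Alarm(scope, "SsrDurationP99", {
  metric: ssrFn.metricDuration({ period: Duration.minutes(5), statistic: "p99" }),
  threshold: 3000, // ms
  evaluationPeriods: 3,
});

// Image Lambda throttling: the under-provisioning failure mode noted earlier.
new cloudwatch.Alarm(scope, "ImageThrottles", {
  metric: imageFn.metricThrottles({ period: Duration.minutes(5) }),
  threshold: 1,
  evaluationPeriods: 1,
});
```

Wire each alarm to an SNS topic or your paging tool; an alarm nobody receives is the same as no alarm.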
Conclusion
OpenNext gives you genuine control over how Next.js runs on AWS — Lambda for SSR, S3 for static assets, CloudFront for CDN, with full ownership of ISR coordination, image optimization, and middleware routing. That control comes with operational responsibility that managed platforms abstract away. The ecosystem matured significantly with the Adapter API initiative and is moving toward verified adapter status for AWS in 2026, which will resolve the upgrade fragility concern that has been the strongest argument for staying on managed infrastructure.
The article walked through the full architecture, the honest feature status, the three real production failure modes to plan for, and a direct comparison with AWS Amplify Gen 2. If you are evaluating this path seriously, the decision hinges on two things: your team's AWS operational depth, and whether your application actually needs the advanced ISR and infrastructure control that OpenNext provides and Amplify does not.
Let me know in the comments if you have questions, and subscribe for more practical development guides.
Thanks, Matija