Fix PostgreSQL Error 53300 in Payload CMS: Why PgBouncer Transaction Mode is the Real Answer
If you are running Payload CMS on self-hosted PostgreSQL and you hit this error:
```
FATAL: remaining connection slots are reserved for roles with the SUPERUSER attribute
```
the fix is not increasing max_connections. The fix is putting PgBouncer in front of PostgreSQL and running it in transaction mode. App-side connection pooling — which Payload already includes through node-postgres — handles connection reuse inside a single Node process. PgBouncer handles the system-wide connection pressure that app-side pooling cannot see. This article walks through exactly why this error happens on self-hosted setups, why managed providers like Neon rarely show it, and the specific PgBouncer configuration that resolved it in production.
I was running a Payload CMS application backed by self-hosted PostgreSQL when the app started failing under what looked like light load. A small number of concurrent admin sessions were enough to bring it down. The error was always the same: remaining connection slots are reserved for roles with the SUPERUSER attribute.
My first instinct was to look at the Payload pool settings. I lowered max in the pool config. I restarted the app. Things got better for a while, then degraded again. The pool settings were not the root of the problem.
What was actually happening was that PgBouncer was already in the stack — but running in the wrong mode. Once I switched it from session mode to transaction mode, the problem went away completely.
What the error actually means
```
FATAL: remaining connection slots are reserved for roles with the SUPERUSER attribute
```
PostgreSQL maintains a hard ceiling on open connections, set by max_connections. A small number of those slots are always reserved for superuser roles so that database administrators can connect even during incidents. When the remaining non-superuser slots are full, PostgreSQL rejects any new connection attempt with this error.
So the immediate cause is simple: too many live connections, not enough slots, one more request comes in, and Postgres refuses it.
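If you can still open a session (the reserved superuser slots exist for exactly this moment), it is worth confirming the picture before changing anything. A quick check, assuming superuser access:

```sql
-- The configured ceiling and the slots held back for superusers.
SHOW max_connections;
SHOW superuser_reserved_connections;

-- Live client backends grouped by state, to see what is eating the slots.
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```

A large count in the idle state is the classic signature of this problem: connections that are open but doing nothing, each one occupying a slot.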
In a Payload CMS application this can happen during:
- admin panel usage that triggers multiple simultaneous queries
- bursts of API requests
- hot reload cycles during development
- multiple app containers running at once
- build steps or script execution
- stale sessions that were never closed
The instinct is to raise max_connections. That creates more headroom temporarily. The underlying pressure stays, and the error comes back.
Why this often does not appear on Neon — and does appear on self-hosted Postgres
Managed PostgreSQL providers like Neon absorb a lot of the connection management complexity before the problem ever reaches your application layer. Most of them either include a connection pooler as part of the platform or encourage you to use one through their default connection flow. That means the database sees a bounded, stable number of backend connections even when your application is bursty.
When you move to self-hosted PostgreSQL, that layer is gone. You have to build it yourself. If you do not, every application process opens its own pool directly against PostgreSQL, and the total connection count across all processes adds up faster than expected.
That is not a Payload problem. It is an infrastructure gap that managed platforms quietly filled.
Why app-side pooling is not enough on its own
Payload uses @payloadcms/db-postgres, which relies on Drizzle, which in turn relies on node-postgres. And node-postgres does include connection pooling. So there is pooling in the stack already — it just cannot solve the full problem by itself.
App-side pooling through node-postgres works inside a single Node.js process. It prevents creating a brand new PostgreSQL connection on every query, which would be extremely wasteful. That is exactly what it is designed to do.
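A minimal sketch of that per-process reuse (the pool settings mirror the Payload config shown later; the connection string is an assumption):

```ts
import { Pool } from 'pg'

// One pool per process: up to 5 real PostgreSQL connections, reused across
// queries instead of dialing Postgres fresh for each one.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5,
})

// Both queries check a connection out of the same in-process pool and
// return it when done.
await pool.query('SELECT 1')
await pool.query('SELECT 2')
```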
What it cannot control is the connection count across the whole system:
- multiple app containers, each with their own pool
- admin traffic hitting the database while API requests are in flight
- worker processes or background jobs
- build steps executing at deploy time
- idle sessions that linger after a request finishes
A pool of max: 5 in one process is small. That same pool setting across ten containers is fifty real PostgreSQL connections before a single query is executed. Add burst traffic and you hit the ceiling fast.
App-side pooling and database connection brokering are different layers solving different problems. PgBouncer operates at the system level in a way that node-postgres cannot.
The fix: PgBouncer in transaction mode
PgBouncer sits between your application and PostgreSQL. Your app connects to PgBouncer; PgBouncer connects to PostgreSQL. From the application's perspective, it looks like a normal Postgres connection. From PostgreSQL's perspective, it sees only the connections PgBouncer holds on behalf of all clients.
The critical configuration decision is pool_mode. PgBouncer supports three modes: session, transaction, and statement. Session mode is the default. Transaction mode was the fix.
Session mode vs. transaction mode
| Mode | Backend connection held | Good fit for | Downside |
|---|---|---|---|
| Session | Entire client session lifetime | Apps needing persistent session-level state (advisory locks, LISTEN/NOTIFY, SET config) | Backend pool fills quickly under bursty web traffic |
| Transaction | Duration of a single transaction only | Bursty web apps with short DB interactions (Next.js, Payload, API handlers) | Incompatible with session-level Postgres features |
In session mode, each client connection reserves a backend PostgreSQL connection for as long as that client is connected. For a web application where connections come and go quickly, this is a poor fit. The backend pool slots fill up and stay filled even when no queries are actually running.
In transaction mode, a backend connection is borrowed from the pool only while a transaction is executing. As soon as the transaction commits or rolls back, the connection goes back and is immediately available for another client. A small backend pool can serve many more concurrent clients efficiently.
For Payload CMS running alongside Next.js with admin panel traffic and API requests, transaction mode is the correct fit.
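The trade-off in the table above is easy to see in SQL. Under transaction mode, two statements sent outside a transaction may be served by different backend connections, so session-level state silently fails to carry over:

```sql
-- Outside a transaction, these may land on DIFFERENT backends under
-- pool_mode = transaction, so the SET is not reliably visible later:
SET statement_timeout = '5s';
SELECT current_setting('statement_timeout');  -- may show the default

-- Wrapping both statements in one transaction pins them to one backend;
-- SET LOCAL scopes the setting to that transaction:
BEGIN;
SET LOCAL statement_timeout = '5s';
SELECT current_setting('statement_timeout');  -- '5s', same backend
COMMIT;
```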
The PgBouncer configuration that fixed it
Here is the relevant section of a working pgbouncer.ini for this setup:
```ini
# File: pgbouncer.ini
[databases]
app_db = host=localhost port=5432 dbname=app_db

[pgbouncer]
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
```
The key line is pool_mode = transaction. Everything else is standard setup.
max_client_conn controls how many client-side connections PgBouncer will accept. default_pool_size controls how many backend PostgreSQL connections PgBouncer will maintain. With transaction mode, those twenty backend connections can serve many more than twenty concurrent clients, because connections are released back to the pool after each transaction.
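The auth_file referenced above is a plain text file: one quoted username and one quoted password (or an md5 hash of password concatenated with username, prefixed with md5) per line. A placeholder sketch, with the hash value elided:

```
"app_user" "md5<md5-of-password-concatenated-with-username>"
```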
Connecting Payload to PgBouncer
Your DATABASE_URL should point to PgBouncer's port (6432 by default), not directly to PostgreSQL (5432):
```
postgresql://app_user:app_password@localhost:6432/app_db
```
With PgBouncer handling the multiplexing, the Payload pool size can stay small:
```ts
// File: payload.config.ts
import { buildConfig } from 'payload' // 'payload/config' on Payload 2.x
import { postgresAdapter } from '@payloadcms/db-postgres'

export default buildConfig({
  db: postgresAdapter({
    pool: {
      connectionString: process.env.DATABASE_URL || '',
      max: 5,
      idleTimeoutMillis: 10000,
      connectionTimeoutMillis: 2000,
    },
  }),
  // ...
})
```
max: 5 is reasonable here. PgBouncer is the layer absorbing burst traffic, so the app does not need to hold many connections itself. The two layers divide the responsibility cleanly: Payload manages a small stable pool of client connections to PgBouncer, and PgBouncer manages a bounded set of backend PostgreSQL connections on behalf of all processes.
How to verify PgBouncer is actually running the right mode
One thing that caught me during debugging: editing the config file is not enough if the container has not picked it up. Always verify the running configuration directly.
Connect to PgBouncer's admin console:
```bash
psql -p 6432 -U pgbouncer pgbouncer
```
Then run:
```sql
SHOW POOLS;
```
This shows each pool with its mode, active clients, waiting clients, and server connections. If you see clients in the cl_waiting column growing during load, the pool is saturated — either because session mode is still active, the backend pool is too small, or the config change never reached the running process.
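Illustrative output from a healthy transaction-mode pool (columns trimmed for readability; your numbers will differ):

```
 database |   user   | cl_active | cl_waiting | sv_active | sv_idle | pool_mode
----------+----------+-----------+------------+-----------+---------+-------------
 app_db   | app_user |        12 |          0 |         3 |       5 | transaction
```

Twelve clients served by three active backend connections is exactly the multiplexing transaction mode is supposed to provide.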
To confirm the current mode from inside the running container:
```bash
docker exec -it pgbouncer_container cat /etc/pgbouncer/pgbouncer.ini | grep pool_mode
```
After switching to transaction mode and restarting, SHOW POOLS; should show zero waiting clients under normal load, and the site should hold up under concurrent sessions that previously caused exhaustion.
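The sequence that closes the loop, using the container name from the command above:

```bash
# Restart so the edited config is actually loaded...
docker restart pgbouncer_container

# ...then confirm the mode the running process is using, not just the file.
psql -h localhost -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW CONFIG;' | grep pool_mode
```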
PostgreSQL settings that complement PgBouncer
With PgBouncer in front, you do not need a huge max_connections on PostgreSQL. Set it to something moderate that covers your PgBouncer backend pool size plus some headroom for direct superuser connections:
```ini
max_connections = 50
```
Two additional settings help clean up sessions that should not have lingered:
```ini
idle_in_transaction_session_timeout = 30s
idle_session_timeout = 10min
```
idle_in_transaction_session_timeout terminates sessions that opened a transaction and then stopped executing queries. This catches app bugs where a transaction was started but never committed or rolled back. idle_session_timeout (available since PostgreSQL 14) reclaims sessions sitting completely idle. These are defensive settings: they keep Postgres from accumulating dead weight that eats into your connection ceiling.
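Both can be applied without hand-editing postgresql.conf, assuming a superuser session:

```sql
-- Persist the settings and reload; neither requires a server restart.
ALTER SYSTEM SET idle_in_transaction_session_timeout = '30s';
ALTER SYSTEM SET idle_session_timeout = '10min';
SELECT pg_reload_conf();
```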
FAQ
Does Payload CMS work with PgBouncer transaction mode by default?
Generally yes. Payload's database interactions through @payloadcms/db-postgres and Drizzle are transaction-oriented and work correctly in PgBouncer transaction mode. The main risk area is if you use any PostgreSQL session-level features directly — advisory locks, persistent prepared statements, or LISTEN/NOTIFY — which do not survive connection multiplexing (though PgBouncer 1.21+ can track protocol-level prepared statements in transaction mode via its max_prepared_statements setting). Standard Payload usage does not rely on these.
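If you do reach for advisory locks in custom code, the transaction-scoped variants remain safe under transaction mode, because the lock never outlives the backend connection that holds it:

```sql
-- Unsafe under pool_mode = transaction: the lock and unlock calls may be
-- routed to different backend connections.
SELECT pg_advisory_lock(42);

-- Safe: acquired and released within a single transaction, so it stays on
-- one backend for its whole lifetime.
BEGIN;
SELECT pg_advisory_xact_lock(42);
-- ... protected work ...
COMMIT;  -- lock released automatically
```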
Why did my self-hosted setup work fine for months before breaking?
Connection exhaustion is often traffic-correlated. A single process under low load might never exceed its pool size. The ceiling appears when you add a second process, a deploy that runs migrations, admin traffic running alongside API traffic, or any combination that pushes total connections above what Postgres will accept.
Should I use statement mode instead of transaction mode?
Statement mode is significantly more restrictive — each individual SQL statement gets a backend connection, which breaks multi-statement transactions. It is unsuitable for most application use cases including Payload. Transaction mode is the correct choice for web application workloads.
Can I increase max_connections as a short-term fix while setting up PgBouncer?
Yes. Raising max_connections buys time by expanding the ceiling. Each additional connection also consumes memory on the PostgreSQL side (roughly 5–10 MB per connection depending on settings), so this is not a permanent solution. Use it as a stopgap while putting the proper connection brokering layer in place.
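One detail worth knowing for the stopgap: max_connections only takes effect after a full restart, not a reload:

```sql
-- Raising the ceiling; a reload is NOT enough, PostgreSQL must be restarted.
ALTER SYSTEM SET max_connections = 100;
-- then restart the server, e.g.: systemctl restart postgresql
```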
What if I need LISTEN/NOTIFY or advisory locks alongside Payload?
You would need a separate direct connection to PostgreSQL for that functionality, bypassing PgBouncer, while routing all standard Payload queries through PgBouncer in transaction mode. This is a valid architecture — two connection strings, two different purposes.
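A minimal sketch of that split, using node-postgres directly; the DATABASE_DIRECT_URL variable name is an assumption:

```ts
import { Pool, Client } from 'pg'

// Everyday queries: through PgBouncer on 6432, transaction mode.
export const pooled = new Pool({
  connectionString: process.env.DATABASE_URL, // ...@localhost:6432/app_db
  max: 5,
})

// Session-level features: a direct, long-lived connection to Postgres on
// 5432 that bypasses PgBouncer entirely.
export async function listenForEvents(
  channel: string, // must be a trusted identifier, not user input
  onEvent: (payload?: string) => void,
) {
  const direct = new Client({
    connectionString: process.env.DATABASE_DIRECT_URL, // ...@localhost:5432/app_db
  })
  await direct.connect()
  direct.on('notification', (msg) => onEvent(msg.payload))
  await direct.query(`LISTEN ${channel}`)
  return direct // keep this client open for as long as notifications are needed
}
```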
Conclusion
The remaining connection slots are reserved for roles with the SUPERUSER attribute error in Payload CMS is a connection management problem, and the root cause on self-hosted setups is usually the absence of a proper connection broker. App-side pooling through node-postgres keeps per-process connection counts sane. It cannot control what happens across multiple processes, containers, or concurrent traffic patterns.
PgBouncer in front of PostgreSQL solves that at the system level. The important detail is the mode: session mode pins backend connections for the full lifetime of a client session, which exhausts the pool quickly under web traffic. Transaction mode releases backend connections back to the pool after each transaction, letting a small number of PostgreSQL connections serve many more concurrent clients.
That mode switch was the actual fix. Everything else — pool size tuning, max_connections adjustments, connection timeouts — supported it, but transaction mode was where the behavior changed.
If you are moving from a managed PostgreSQL provider to self-hosted Postgres and this error appears, the first thing to add to your stack is PgBouncer. The second thing is to make sure it is running in the right mode.
Let me know in the comments if you run into a variation of this setup or have questions about the configuration. Subscribe for more practical Payload CMS and PostgreSQL guides.
Thanks, Matija