How to Set Up Garage Object Storage on a VPS for Payload CMS
If you need self-hosted S3-compatible object storage for a Payload CMS project and want a lightweight MinIO alternative, Garage is worth your time. This guide walks through a full Garage setup on an Ubuntu VPS using Docker Compose — single node, persisted data, S3 API on port 3900, and a working bucket wired into Payload's @payloadcms/storage-s3 plugin. By the end you will have a running object store, generated credentials, and a smoke-tested connection from your Payload app.
I set this up for a client project where MinIO felt like significant overhead for a single-server deployment. Garage is a distributed object storage system designed to run on commodity hardware, and its single-node mode handles S3 compatibility on a VPS without requiring a full cluster to be stood up alongside it.
I have been building headless architectures with Payload CMS and Next.js for several years — client projects, internal tools, and open source work documented here on buildwithmatija.com. This guide captures the exact deployment I ran and tested.
What You Need Before You Start
Your VPS needs Docker, Docker Compose v2, git, and openssl. Verify all four before continuing:
docker --version
docker compose version
git --version
openssl version
This guide assumes Ubuntu. If Docker is not yet installed, get that sorted first — everything below depends on it.
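If you are starting from a bare Ubuntu image, one way to cover all four prerequisites is Docker's convenience script plus apt. This is a quick sketch, not the only route, and it assumes you are working as root like the rest of this guide:
# Install Docker Engine and the compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# git and openssl from the Ubuntu repositories
apt-get update && apt-get install -y git openssl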
Why Garage Over MinIO for a VPS Setup
Developers searching for a MinIO alternative often end up at Garage for two reasons: resource footprint and simplicity at small scale.
| Factor | Garage | MinIO |
|---|---|---|
| Designed for | Distributed, commodity hardware | High-throughput, enterprise S3 |
| Single-node setup | First-class supported | Possible but not the primary target |
| Open source license | AGPL-3.0 | AGPL-3.0 (MinIO Object Store) |
| Commercial offering | None (self-host freely) | AIStor — separate commercial binary with enterprise features |
| Config surface area | Minimal TOML file | More extensive, more knobs |
| S3 API compatibility | Broad compatibility, forcePathStyle required | Very broad |
| Resource usage | Light | Heavier at idle |
Both Garage and MinIO's core object store are AGPL-3.0 licensed. MinIO also ships AIStor as a commercial enterprise product with additional features; if your compliance or support requirements push you toward a paid offering, that distinction matters. For a self-hosted single-VPS Payload deployment, Garage's open source version is sufficient and straightforward to run.
Create the Project Folder
Create a dedicated folder for the Garage deployment. Keeping it separate from your application repo makes backups and key rotation easier to reason about.
cd /root
mkdir -p payload-garage-storage
cd payload-garage-storage
git init
mkdir -p ops/garage/config
mkdir -p ops/garage/data/meta
mkdir -p ops/garage/data/data
mkdir -p bin
The expected directory structure after all files are in place:
payload-garage-storage/
├── .gitignore
├── compose.yaml
├── bin/
│   ├── garage
│   └── garage-init
└── ops/
    └── garage/
        ├── config/
        │   ├── garage.toml
        │   └── garage.toml.example
        └── data/
            ├── data/
            └── meta/
The ops/garage/data/ tree is where Garage persists object data and metadata. Treat it as the only thing you need to back up once the system is running.
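Since the generated config and the object data should never land in git, a .gitignore along these lines keeps them out (my assumption of the minimal contents, based on the structure above):
# File: .gitignore
ops/garage/config/garage.toml
ops/garage/data/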
The Docker Compose Configuration
# File: compose.yaml
services:
  garage:
    image: dxflrs/garage:v2.2.0
    container_name: garage
    restart: unless-stopped
    entrypoint: ["/garage"]
    command: ["server"]
    ports:
      - "3900:3900"
      - "127.0.0.1:3903:3903"
    volumes:
      - ./ops/garage/config/garage.toml:/etc/garage.toml:ro
      - ./ops/garage/data/meta:/var/lib/garage/meta
      - ./ops/garage/data/data:/var/lib/garage/data
    healthcheck:
      test: ["CMD", "/garage", "-c", "/etc/garage.toml", "status"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 20s
Port 3900 is the S3 API — that is the one your Payload app talks to. Port 3903 is the admin API, and notice it is bound to 127.0.0.1 only. The admin API stays off the public internet. The image is pinned to v2.2.0 so your setup does not drift unexpectedly during a container rebuild.
Generate the Garage Config
This project uses a helper script (bin/garage-init) and a template file so that secrets — specifically rpc_secret, admin_token, and metrics_token — are generated on the VPS itself and never committed to git.
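The scripts themselves are not reproduced here. As a rough sketch of what the bin/garage wrapper does (an assumption on my part, based on the container name garage from compose.yaml), it simply forwards CLI commands into the running container so the garage CLI version always matches the running server:
#!/usr/bin/env bash
# File: bin/garage (illustrative sketch, not the exact script from this repo)
# Forward every argument to the garage CLI inside the running container.
set -euo pipefail
exec docker exec -i garage /garage -c /etc/garage.toml "$@"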
chmod +x bin/garage-init bin/garage
./bin/garage-init
That writes the live config to ops/garage/config/garage.toml. Verify the file was created and inspect the first section:
sed -n '1,220p' ops/garage/config/garage.toml
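For reference, a single-node config generated this way typically resembles the Garage quick-start example. The values below are illustrative rather than the exact output of the script, and your generated secrets will differ:
# File: ops/garage/config/garage.toml (illustrative values)
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
replication_factor = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "<64-char hex, e.g. from openssl rand -hex 32>"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "[::]:3902"
root_domain = "localhost"
index = "index.html"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "<generated token>"
metrics_token = "<generated token>"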
One note: in my testing with dxflrs/garage:v2.2.0, the config required s3_web.root_domain = "localhost" to be present even though this setup uses path-style S3 access and not website hosting mode. This is a version-specific workaround — if the Garage container refuses to start and the error references the [s3_web] section, adding that line resolves it. Check the Garage configuration reference for your version if the behavior differs.
Start Garage and Verify
docker compose up -d
docker compose ps
docker compose logs -f garage
You want to see the S3 API binding on 3900, the admin API binding on 3903, and the container health check turning green. Give it about 20 seconds on first start before the health check passes.
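Two quick ways to confirm that state beyond the logs, using standard Docker and Ubuntu tooling rather than anything Garage-specific:
# Health status reported by the compose healthcheck
docker inspect --format '{{.State.Health.Status}}' garage

# Listening sockets: expect 3900 on all interfaces, 3903 only on 127.0.0.1
ss -tlnp | grep -E ':3900|:3903'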
Get the Node ID and Assign a Layout
Garage requires you to assign your node to a layout before it will accept any writes. Run:
./bin/garage status
The output will show a healthy node with NO ROLE ASSIGNED. Copy the full node ID from that output — it looks something like 42f8a93ce1482fc9.
Then apply the layout:
./bin/garage layout assign -z dc1 -c 20G 42f8a93ce1482fc9
./bin/garage layout apply --version 1
./bin/garage status
The -z dc1 flag assigns a zone name. For a single-node setup the name does not matter — pick anything consistent. The -c 20G flag is a capacity hint, not a hard partition. After applying, ./bin/garage status should show the zone, capacity, and version all populated.
Create the Bucket and Key
Create the uploads bucket for Payload:
./bin/garage bucket create payload-media
./bin/garage bucket info payload-media
Then create a key scoped to your Payload application and grant it read, write, and owner permissions on the bucket:
./bin/garage key create payload-app
./bin/garage bucket allow --read --write --owner payload-media --key payload-app
./bin/garage key info payload-app
The ./bin/garage key info payload-app command prints the Key ID and Secret key you will need for the Payload environment variables. Copy them now. Also verify the bucket shows RWO (read, write, owner) when you run:
./bin/garage bucket info payload-media
Smoke Test the S3 API
With the AWS CLI available on the server:
export AWS_ACCESS_KEY_ID='<your-key-id>'
export AWS_SECRET_ACCESS_KEY='<your-secret-key>'
export AWS_DEFAULT_REGION='garage'
export AWS_ENDPOINT_URL='http://127.0.0.1:3900'
aws s3 ls
aws s3 ls s3://payload-media
echo 'garage-ok' > /tmp/garage-test.txt
aws s3 cp /tmp/garage-test.txt s3://payload-media/garage-test.txt
aws s3 cp s3://payload-media/garage-test.txt /tmp/garage-test-downloaded.txt
cat /tmp/garage-test-downloaded.txt
You want to see garage-ok come back from the downloaded file. If the AWS CLI is not installed, run the same commands via the containerized version:
docker run --rm --network host \
-e AWS_ACCESS_KEY_ID='<your-key-id>' \
-e AWS_SECRET_ACCESS_KEY='<your-secret-key>' \
-e AWS_DEFAULT_REGION='garage' \
-e AWS_ENDPOINT_URL='http://127.0.0.1:3900' \
amazon/aws-cli s3 ls s3://payload-media
Test a Real Upload and Generate a Presigned URL
Once the basic test passes, upload a real image and generate a presigned URL to confirm public access works the way Payload will use it:
curl -L \
'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a0/WPA-Rumor-Poster.jpg/960px-WPA-Rumor-Poster.jpg' \
-o /tmp/WPA-Rumor-Poster.jpg
aws s3 cp /tmp/WPA-Rumor-Poster.jpg s3://payload-media/wpa-rumor-poster.jpg
aws s3api head-object \
--bucket payload-media \
--key wpa-rumor-poster.jpg
# Replace with your server's actual public IP
aws --endpoint-url http://<YOUR_SERVER_IP>:3900 \
s3 presign s3://payload-media/wpa-rumor-poster.jpg \
--expires-in 604800
The presigned URL is time-limited and public. Anyone with the full URL can fetch the file until it expires. This is useful for manual testing, and it is also the mechanism behind Payload's optional signedDownloads feature — but it is not how Payload serves files by default.
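If you want to check the presigned URL from the command line before wiring up Payload, a plain GET against it is enough. Run it from the VPS or anywhere that can reach port 3900, and keep the URL quoted because it contains & characters:
# Paste the presigned URL printed by the previous command
curl -fs -o /dev/null -w '%{http_code}\n' '<presigned-url>'
# 200 means the object is reachable until the URL expires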
Connect Payload CMS
Set these environment variables in your Payload app:
# File: .env
S3_BUCKET=payload-media
S3_REGION=garage
S3_ENDPOINT=http://127.0.0.1:3900
S3_ACCESS_KEY_ID=<your-key-id>
S3_SECRET_ACCESS_KEY=<your-secret-key>
Then configure the storage plugin:
// File: payload.config.ts
import { buildConfig } from "payload";
import { s3Storage } from "@payloadcms/storage-s3";

export default buildConfig({
  // ...your existing collections, db, and secret config
  plugins: [
    s3Storage({
      collections: {
        media: true,
      },
      bucket: process.env.S3_BUCKET!,
      config: {
        endpoint: process.env.S3_ENDPOINT!,
        forcePathStyle: true,
        region: process.env.S3_REGION!,
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY_ID!,
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
        },
      },
    }),
  ],
});
The forcePathStyle: true flag is required. Garage does not support virtual-hosted-style S3 addressing in this configuration — path-style means the bucket name is part of the URL path (http://host:3900/payload-media/object) rather than the subdomain. Leave out forcePathStyle and uploads will fail silently or with a confusing endpoint error.
By default, @payloadcms/storage-s3 does not expose your S3 URLs directly to the browser. Instead, Payload proxies file downloads through its own access control layer — files are still reached via the standard /collection-slug/file/filename path, and Payload's read access control runs on every request. This is the safe default for most deployments. If you want Payload to serve files directly from Garage using time-limited presigned URLs, enable the optional signedDownloads feature on a per-collection basis. The Payload storage adapter docs cover both modes.
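If you do want presigned delivery, the per-collection toggle looks roughly like this; check the storage adapter docs for the exact option shape in your plugin version:
// In payload.config.ts, inside the s3Storage() call,
// replace `media: true` with an options object:
collections: {
  media: {
    signedDownloads: true, // serve media via time-limited presigned Garage URLs
  },
},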
Where the Data Lives
ops/garage/data/meta/ → Garage metadata (bucket configs, key registry)
ops/garage/data/data/ → Actual object data
ops/garage/config/    → garage.toml (generated, not committed)
Back up the entire ops/garage/data/ tree. The config can be regenerated from the template, but the object data cannot.
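A minimal nightly backup can be a dated tarball of that tree shipped somewhere off the VPS. This is a sketch only: the paths assume the /root/payload-garage-storage layout from earlier, and for a fully consistent snapshot you would stop the container first or use filesystem snapshots:
# Archive the Garage data tree with today's date
mkdir -p /root/backups
tar -czf /root/backups/garage-data-$(date +%F).tar.gz \
  -C /root/payload-garage-storage ops/garage/data

# Then copy the archive off the VPS, e.g. with rsync or rclone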
Stop and Start Garage
# Stop
docker compose down
# Start
docker compose up -d
# Check
docker compose ps
./bin/garage status
FAQ
Does Garage support multi-node clusters if I want to scale later?
Yes — Garage was designed for distributed deployments across multiple nodes, even in different physical locations. The single-node setup here assigns your node to zone dc1 with a capacity hint. Adding more nodes means joining them to the cluster, updating the layout, and reapplying with an incremented version number. Your existing data stays intact during expansion.
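The expansion itself is a handful of CLI calls, roughly like the following (placeholder IDs and addresses; see the Garage real-world deployment docs for specifics, including making the RPC port 3901 reachable between nodes):
# On the existing node: connect the new node, then extend the layout
./bin/garage node connect <new-node-id>@<new-node-ip>:3901
./bin/garage layout assign -z dc2 -c 20G <new-node-id>
./bin/garage layout apply --version 2
./bin/garage status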
Do I need to expose port 3900 publicly for Payload to reach it?
Only if your Payload app runs on a different server than Garage. In that case, expose 3900 and put a reverse proxy with TLS in front of it. If Payload and Garage share the same VPS, keep the endpoint as http://127.0.0.1:3900 and skip the public exposure entirely.
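If you do need the cross-server setup, a Caddy site block is about the smallest TLS-terminating proxy you can put in front of Garage. This assumes Caddy is installed on the Garage host and the placeholder domain s3.example.com points at it:
# File: Caddyfile (illustrative)
s3.example.com {
    reverse_proxy 127.0.0.1:3900
}
Payload's S3_ENDPOINT then becomes https://s3.example.com, still with forcePathStyle: true.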
Why does Garage require forcePathStyle: true when MinIO does not always need it?
Garage in this configuration does not support virtual-hosted-style bucket addressing (where the bucket name becomes a subdomain). Path-style puts the bucket name in the URL path instead, which Garage handles correctly. MinIO also supports path-style — setting forcePathStyle: true works there too.
What happens if I recreate the key with ./bin/garage key create?
A new key generates a new Key ID and Secret key. Your existing bucket permissions are tied to the old key, so you will need to re-run ./bin/garage bucket allow for the new key and update your Payload environment variables.
Is this production-ready?
Honestly — with caveats. Garage's own documentation explicitly states that single-node deployments should not be used in production because they provide no redundancy. The configuration reference goes further: replication factor 1 is flagged as suitable only for test deployments. That is the upstream project's position and you should know it before making a decision.
That said, plenty of small Payload CMS deployments run on single-server setups with managed backups, and they work fine in practice — provided you accept what you are trading away. The things you need in place before calling this acceptable for production: a scheduled backup of ops/garage/data/ to a separate location, a tested restore procedure you have actually run, monitoring on the container health check, and a clear understanding that if the VPS disk fails you lose everything since the last backup. If those conditions are met and your tolerance for data loss matches the risk, this setup can hold a real workload. If you need zero-downtime guarantees or true redundancy, plan for a multi-node Garage cluster — Garage's documentation recommends at least three nodes for a production-grade deployment with replication.
Conclusion
You now have a self-hosted Garage object storage node running on your VPS, a dedicated bucket and key for Payload media, and a working integration via @payloadcms/storage-s3. Garage handles S3 API compatibility cleanly, the single-node layout keeps the operational overhead low, and the Docker Compose setup means you can bring it back up with one command after a reboot.
The main things to carry forward: back up ops/garage/data/, always use forcePathStyle: true in Payload, keep the admin port off the public internet, and if you expose port 3900 beyond localhost, put a TLS-terminating reverse proxy (nginx or Caddy) in front of it first. If you have questions or hit a configuration edge case, drop them in the comments — and subscribe for more practical Payload CMS and deployment guides.
Official references used in this guide:
- Garage quick start
- Garage configuration reference
- Garage S3 client configuration
- Garage real-world deployment
- Payload CMS storage adapters
Thanks, Matija