Introducing pgdock: A Modern CLI for PostgreSQL Docker Management
Transform PostgreSQL Docker management from a manual chore to a single command

I've been working with PostgreSQL in Docker containers for years, and I kept running into the same frustrating workflow. Every time I needed a database for development or testing, I'd end up writing yet another Docker Compose file, manually generating credentials, figuring out port conflicts, and repeating the same setup steps over and over.
The breaking point came last month when I was setting up databases for a client project. I needed multiple PostgreSQL instances running different versions, each with isolated data and unique credentials. What should have been a five-minute task turned into an hour of YAML editing, port juggling, and credential management. That's when I decided to build something better.
Today, I'm excited to announce pgdock: a CLI tool that transforms PostgreSQL Docker management from a manual chore into a single-command experience.
The Problem with Traditional PostgreSQL Docker Setup
Most developers handle PostgreSQL in Docker the same way: create a docker-compose.yml file, define environment variables, manually pick ports, generate passwords, and hope everything works together. This approach has several pain points that compound over time.
Port management becomes a nightmare when running multiple instances. You end up with a mental map of which ports are taken, or worse, containers that fail to start because of conflicts. Credential management is equally problematic - you're either reusing the same weak passwords across projects or spending time generating secure ones manually.
The bigger issue is that this manual approach doesn't scale. Each new project means recreating the same infrastructure setup, and each developer on a team needs to replicate the same environment configuration. There's no consistency, no automation, and plenty of room for configuration drift.
A Better Approach to Database Management
pgdock eliminates these problems by treating PostgreSQL instances as managed resources rather than manual configurations. Instead of writing Docker Compose files, you describe what you want and let the tool handle the implementation details.
The core philosophy is simple: database instances should be as easy to create and manage as files. You shouldn't need to think about ports, credentials, or container orchestration when you just need a database for your application.
Here's how pgdock transforms the traditional workflow:
# Traditional approach: write docker-compose.yml, set variables, docker compose up
# pgdock approach:
pgdock create --name myapp-db
# Result: Running PostgreSQL instance with:
# - Auto-generated secure credentials
# - Automatically selected free port
# - Health monitoring enabled
# - ~/.pgpass automatically updated
# - Full backup capability built-in
The tool handles every aspect of setup automatically while still giving you full control when you need it. Port conflicts disappear because pgdock finds an available port for each instance. Credential security improves because every instance gets unique, randomly generated credentials. Environment consistency follows naturally because everyone on a team uses the same tool with the same defaults.
Key Capabilities That Set pgdock Apart
pgdock provides a comprehensive set of commands that cover the entire lifecycle of PostgreSQL instances. The create command goes far beyond just starting a container - it generates secure credentials, configures health checks, sets up persistent storage, and integrates with your local PostgreSQL tooling.
The credential management system automatically updates your ~/.pgpass file, enabling seamless connections with standard PostgreSQL tools. You can retrieve credentials in both human-readable and JSON formats, making integration with scripts and applications straightforward.
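To make the ~/.pgpass integration concrete, here is a minimal sketch of what such an update involves. This is not pgdock's actual code; it just shows the standard five-field .pgpass format and the 0600 file permission that PostgreSQL clients require, with placeholder connection values.
# Illustrative sketch only, not pgdock's implementation. The host, port, database,
# user, and password values are placeholders; real entries must also escape any
# ':' or '\' characters inside a field.
import os
from pathlib import Path

def append_pgpass_entry(host, port, database, user, password):
    pgpass = Path.home() / ".pgpass"
    with open(pgpass, "a") as f:
        f.write(f"{host}:{port}:{database}:{user}:{password}\n")
    # psql and the other client tools ignore the file unless only the owner can read it.
    os.chmod(pgpass, 0o600)

append_pgpass_entry("localhost", 5400, "myapp_db", "myapp_user", "example-password")
With an entry like that in place, psql -h localhost -p 5400 -U myapp_user myapp_db connects without prompting for a password.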
Instance management becomes intuitive with commands that mirror file operations. You can list all instances to see their status at a glance, get detailed information about specific instances, and start or stop them as needed. The destroy command provides safety controls, requiring confirmation before removing containers and offering separate options for data persistence.
The integrated backup system eliminates the need for separate backup scripts or tools. You can create backups in multiple formats, apply retention policies automatically, and restore data when needed. All backup operations work directly with running instances without requiring manual container access.
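For context, the sketch below shows roughly the manual sequence a backup command replaces: running pg_dump inside the container and copying the archive back to the host. The container, database, and role names are placeholders, and this is the generic docker-plus-pg_dump workflow rather than pgdock's internal mechanism.
# Roughly the manual steps that get automated: run pg_dump inside the container,
# then copy the dump to the host. All names below are placeholders.
import subprocess

container = "myapp-db"    # placeholder container name
database = "myapp_db"     # placeholder database name
user = "myapp_user"       # placeholder role name

# pg_dump's custom format (-Fc) produces a compressed archive restorable with pg_restore.
subprocess.run(
    ["docker", "exec", container, "pg_dump", "-U", user, "-Fc",
     "-f", "/tmp/myapp_db.dump", database],
    check=True,
)
subprocess.run(
    ["docker", "cp", f"{container}:/tmp/myapp_db.dump", "./myapp_db.dump"],
    check=True,
)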
Real-World Usage Examples
pgdock shines in development workflows where you need multiple database environments. Setting up separate instances for different feature branches becomes trivial:
pgdock create --name feature-auth --version 15
pgdock create --name feature-payments --version 16
pgdock create --name main-branch --version 16
# Each instance runs independently with isolated data
pgdock list # Shows all instances with their status and ports
For testing scenarios, you can quickly spin up clean database instances, run your tests, and tear them down without affecting other work:
pgdock create --name test-run-$(date +%s)
# Run tests
pgdock destroy --name test-run-* --force # Clean up when done
The backup system integrates naturally into deployment workflows. You can create consistent backups across environments and apply retention policies to manage storage automatically:
pgdock backup production-db ./daily-backups --retention-days 30
Cross-Platform Support and Installation
pgdock works consistently across macOS, Linux, and Windows Subsystem for Linux. The installation process is straightforward through PyPI, and the tool includes built-in troubleshooting for common environment issues.
pip install pgdock
pgdock --help # Verify installation
The tool automatically detects your Docker setup and provides specific guidance when configuration issues arise. On Linux systems, it checks for Docker group membership and provides exact commands to fix permission problems. For WSL users, it includes networking guidance for accessing instances from outside the Linux subsystem.
Each command includes comprehensive help text and error messages designed to guide you toward solutions rather than just reporting problems. The tool validates configurations before attempting operations, catching issues like name conflicts, port problems, or missing dependencies early in the process.
Technical Implementation and Design Decisions
Under the hood, pgdock generates per-instance Docker Compose configurations that follow PostgreSQL best practices. Each instance gets its own isolated directory under ~/.pgdock/instances/, containing both the container configuration and metadata about credentials and settings.
The credential generation system uses cryptographically secure random generation for passwords and follows PostgreSQL username conventions. Port allocation starts from a high range (5400+) to avoid conflicts with common services, and the tool validates port availability before creating instances.
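Conceptually, those two pieces look something like the sketch below. This is a simplified illustration rather than the exact implementation; the character set, password length, and scan limit are arbitrary choices for the example.
# Simplified sketch of secure credential generation and free-port discovery
# starting at 5400. Illustrative only, not pgdock's actual code.
import secrets
import socket
import string

def generate_password(length=24):
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def find_free_port(start=5400, limit=100):
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("127.0.0.1", port))
                return port      # bind succeeded, so the port is currently free
            except OSError:
                continue         # port in use, try the next one
    raise RuntimeError("No free port found in range")

print(generate_password(), find_free_port())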
Health monitoring uses PostgreSQL's built-in pg_isready command with configurable timeouts and retry logic. This ensures instances are truly ready to accept connections before reporting success, eliminating race conditions in automated workflows.
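A readiness wait built on pg_isready looks roughly like this. The retry count and sleep interval below are placeholder values for illustration, not the tool's actual defaults.
# Sketch of a readiness wait using PostgreSQL's pg_isready with simple retry logic.
# The retry count and interval are placeholders, not pgdock's defaults.
import subprocess
import time

def wait_until_ready(host, port, retries=30, interval=1.0):
    for _ in range(retries):
        result = subprocess.run(
            ["pg_isready", "-h", host, "-p", str(port), "-t", "3"],
            capture_output=True,
        )
        if result.returncode == 0:   # exit code 0 means the server accepts connections
            return True
        time.sleep(interval)
    return False

if wait_until_ready("localhost", 5400):
    print("Instance is ready to accept connections")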
The backup system leverages PostgreSQL's native pg_dump tool with support for both SQL and custom binary formats. Retention policies operate safely by only removing files that match the expected naming pattern, preventing accidental deletion of unrelated backups.
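The retention idea reduces to a pattern-scoped prune: only files whose names match the expected backup pattern are candidates for deletion. The filename pattern and the 30-day cutoff below are illustrative assumptions, not pgdock's exact behavior.
# Sketch of pattern-scoped retention: delete only backups that match an expected,
# timestamped filename pattern and are older than the cutoff.
import re
import time
from pathlib import Path

BACKUP_PATTERN = re.compile(r"^mydb-\d{8}T\d{6}\.dump$")   # e.g. mydb-20240101T120000.dump

def prune_backups(directory, retention_days=30):
    directory = Path(directory)
    if not directory.is_dir():
        return
    cutoff = time.time() - retention_days * 86400
    for path in directory.iterdir():
        if BACKUP_PATTERN.match(path.name) and path.stat().st_mtime < cutoff:
            path.unlink()            # only files matching the pattern are ever removed

prune_backups("./daily-backups")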
Open Source and Community
pgdock is completely open source and available on GitHub at https://github.com/matija2209/pgdock. The project uses modern Python packaging standards, automated testing, and GitHub Actions for continuous integration and PyPI publishing.
I built this tool to solve my own problems, but I believe it addresses challenges that many developers face when working with PostgreSQL in containerized environments. The design prioritizes practical utility over feature completeness, focusing on the workflows that matter most in daily development.
The codebase is structured for contributions, with clear separation between CLI interface, core functionality, and Docker integration. Issues and feature requests are welcome, and I'm particularly interested in feedback about additional PostgreSQL versions, backup formats, or integration possibilities.
Getting Started and Next Steps
If you work with PostgreSQL in Docker containers, pgdock can simplify your workflow significantly. The learning curve is minimal: if you already use Docker and psql, you know enough to be productive with pgdock.
Start by installing the tool and creating your first instance:
pip install pgdock
pgdock create --name my-first-db
pgdock creds my-first-db # Get connection details
From there, explore the other commands based on your needs. The list command shows you all instances at a glance, status provides detailed information about specific instances, and the backup command enables you to create consistent snapshots of your data.
The tool documentation is available in the GitHub repository, and each command includes built-in help accessible through --help flags. I've also written a comprehensive guide to publishing Python packages to PyPI, documenting the process I used to make pgdock available through pip install.
Let me know in the comments if you try pgdock and how it fits into your development workflow. I'm actively working on improvements and would love to hear about use cases I haven't considered.
Thanks, Matija