Migrating Docker Containers Between VPS Servers Without Data Loss

Move a Docker Compose stack, persistent volumes, and configuration to a new host without losing any data.

Matija Žiberna

I was hosting an EasyAppointments booking system for a client when I got the call that we needed to migrate to a new VPS provider. The application had months of appointment data in a MySQL database, and the thought of losing even a single booking was unacceptable. I'd migrated databases before using SQL dumps, but this time I wanted to move the entire Docker Compose stack—application container, database container, all persistent volumes, and configuration files—in one shot without reconstructing anything.

After digging through Docker documentation and testing different approaches on staging servers, I landed on a direct volume migration strategy using tar archives. It turned out to be surprisingly straightforward and completely reliable. This guide walks through the exact process I used to migrate a production Docker Compose application between VPS servers with zero data loss.

Why Volume Migration Works

When you're running a Docker Compose stack in production, your critical data lives in named volumes. In my case, the EasyAppointments application files lived in one volume and the MySQL database lived in another. Most migration guides tell you to export your database separately, transfer it, reimport it, then rebuild your containers on the new host. That approach works, but it introduces multiple failure points and requires you to reconfigure everything.

Direct volume migration treats your entire stack as a single unit. You stop the containers cleanly to ensure data consistency, create tar archives of each volume's contents, transfer those archives to the new server, and extract them into fresh volumes. The beauty of this approach is that Docker volumes are just directories on disk, so a tar archive captures everything: file permissions, ownership, directory structure, and most importantly, all your data exactly as it existed on the source server.
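
You can see this for yourself with docker volume inspect, which prints the host directory where a volume's data lives. The volume name below matches the stack in this guide; substitute your own:

# Show where a named volume lives on the host filesystem
docker volume inspect easyappointments --format '{{ .Mountpoint }}'
# Typically prints something like /var/lib/docker/volumes/easyappointments/_data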

This method also gives you a complete backup as a side effect. If something goes wrong during the migration, you have auditable tar files that you can inspect, and your source server remains untouched until you're confident the migration succeeded.

If you're moving automation workloads like n8n, you can harden the destination server right after the migration with the checklist in How to Self-Host n8n on Your VPS (Simple, Secure, and Production-Ready). If either host is still on the legacy Ubuntu packages, follow How to Upgrade Docker to the Latest Version on Ubuntu before you start.

Setting Up Server-to-Server SSH Access

Before moving any data, you need to establish passwordless SSH access from your source server to your target server. This allows your migration scripts to run without stopping for password prompts, which is essential when you're transferring multiple large files in sequence.

On the source server, generate a new SSH key pair specifically for this migration. I prefer Ed25519 keys because they're fast and secure. Run this command and accept the default location (if you already have a key at ~/.ssh/id_ed25519, pass -f a different filename so you don't overwrite it):

# On source server
ssh-keygen -t ed25519 -C "migration-key" -N "" -f ~/.ssh/id_ed25519

This creates two files: ~/.ssh/id_ed25519 (your private key) and ~/.ssh/id_ed25519.pub (your public key). Display the public key so you can copy it:

cat ~/.ssh/id_ed25519.pub

You'll see a line starting with ssh-ed25519 followed by a long string and your comment. Copy the entire line. Now switch to your target server and add this public key to the authorized keys file:

# On target server
mkdir -p ~/.ssh
chmod 700 ~/.ssh
nano ~/.ssh/authorized_keys   # Paste the public key on a new line
chmod 600 ~/.ssh/authorized_keys

The chmod commands set the correct permissions that SSH requires. Without them, SSH will ignore your key for security reasons. To verify the setup works, return to your source server and test the connection:

ssh user@TARGET_IP "echo 'SSH connection works!'"

If you see the echo message without a password prompt, your passwordless authentication is working correctly. This connection will be used throughout the migration to transfer files and execute remote commands on the target server.
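
As an aside, if password authentication is still enabled on the target, ssh-copy-id can install the key in one step instead of editing authorized_keys by hand:

# On source server, assuming password login to the target still works
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@TARGET_IP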

Stopping the Source Stack Safely

Data consistency is critical when backing up database volumes. If you create a backup while MySQL is actively writing to disk, you risk capturing the database in an inconsistent state—imagine backing up while a transaction is halfway complete. The safest approach is to stop all containers cleanly before creating your backups.

Navigate to your project directory on the source server and bring down the entire stack:

cd /path/to/your/project
docker compose down --remove-orphans

The docker compose down command stops all running containers and removes them, but crucially, it leaves your named volumes intact. The --remove-orphans flag cleans up any containers from previous configurations that might still be lingering. When this command completes, your application is offline but all your data sits safely in Docker volumes, ready to be archived.
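
One thing worth confirming before you archive anything is the exact volume names. This guide assumes the volumes are literally named easyappointments and mysql; if your compose.yml doesn't declare explicit names, Compose prefixes them with the project directory name (for example myproject_mysql), and you'll need to use those prefixed names in the commands that follow:

# Confirm the exact names of the volumes you are about to back up
docker volume ls
# Example output (names depend on your compose project):
# DRIVER    VOLUME NAME
# local     easyappointments
# local     mysql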

This is the point of no return for your source server. Your application is now down, and it will stay down until you either restart it here or successfully migrate it to the target. In a production environment, this is when you'd put up a maintenance page or switch DNS to a placeholder.

Creating Volume Backups

Docker volumes are stored in a special directory managed by Docker, typically somewhere like /var/lib/docker/volumes/. You can't simply tar that directory because of permission issues and because Docker expects to manage that space itself. Instead, we use a clever workaround: we run a temporary Alpine Linux container that mounts both the volume we want to back up and a directory where we want to save the backup, then we create the tar archive from inside that container.

Create a backup directory in your home folder, then run the backup process for each volume:

mkdir -p ~/docker-backup
docker run --rm -v easyappointments:/source -v ~/docker-backup:/backup alpine tar czf /backup/easyappointments.tar.gz -C /source .
docker run --rm -v mysql:/source -v ~/docker-backup:/backup alpine tar czf /backup/mysql.tar.gz -C /source .

Let's break down what's happening in these commands. The docker run --rm starts a temporary container that will be automatically removed when it exits. The -v easyappointments:/source mounts your easyappointments volume to /source inside the container. The -v ~/docker-backup:/backup mounts your backup directory to /backup inside the container. The container uses the minimal alpine image, which includes the tar utility. Finally, tar czf /backup/easyappointments.tar.gz -C /source . creates a compressed tar archive of everything in the /source directory.

This approach works regardless of the volume's internal permissions or ownership because the Alpine container runs with sufficient privileges to read the volume contents. When the tar command completes, the container exits and is removed, leaving you with compressed archives in your backup directory.

Verify the backups were created successfully:

ls -lh ~/docker-backup/

You should see two tar.gz files with reasonable sizes. If the mysql archive is only a few kilobytes, something went wrong—a MySQL database should be at least several megabytes. If the archives look correct, you're ready to transfer them.
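
If you want more assurance than file size, you can list an archive's contents without extracting it. For the MySQL volume you'd expect to see familiar entries such as ibdata1 and a directory per database:

# Peek inside the archive without extracting it
tar tzf ~/docker-backup/mysql.tar.gz | head -20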

Transferring Files to the Target Server

Now that you have complete archives of your volumes and your project files sitting safely on the source server, it's time to move them to the target. I use rsync for this because it shows progress, handles interrupted transfers gracefully, and preserves file permissions.

First, create the necessary directories on the target server:

ssh user@TARGET_IP "mkdir -p ~/docker-backup ~/project"

Then transfer the volume backups:

rsync -avz --progress ~/docker-backup/ user@TARGET_IP:~/docker-backup/

The -a flag preserves permissions and timestamps, -v gives verbose output, -z compresses data during transfer to save bandwidth, and --progress shows you a progress bar for each file. This is helpful when transferring large MySQL databases over slower connections. Once the backups are transferred, copy the entire project directory:

rsync -avz --progress /path/to/your/project/ user@TARGET_IP:~/project/

Notice the trailing slashes—they tell rsync to copy the contents of the directory rather than the directory itself. When this completes, your target server has identical copies of all your project files and volume backups.
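
If you want to be certain nothing was corrupted in transit, you can compare checksums on both ends from the source server; the hashes should match exactly:

# On the source server: hash the local archives
sha256sum ~/docker-backup/*.tar.gz
# Hash the copies on the target and compare
ssh user@TARGET_IP "sha256sum ~/docker-backup/*.tar.gz"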

Updating Configuration for the New Environment

Before restoring volumes and starting containers on the target server, you need to update any configuration that references the old server's IP address or hostname. In a typical Docker Compose setup, these values live in environment variables within your compose.yml file or in a separate .env file.

SSH into the target server and navigate to your project directory:

ssh user@TARGET_IP
cd ~/project

Open your compose.yml file and look for any environment variables that contain IP addresses or URLs. In my case, EasyAppointments uses a BASE_URL variable to generate links:

# File: compose.yml
services:
  easyappointments:
    environment:
      - BASE_URL=http://TARGET_IP

Change the BASE_URL from the old IP to the new one. If you're using DNS, this is where you'd update it to point to your domain name. This step is easy to overlook, but if you skip it, your application will generate links pointing back to the old server, confusing users and breaking functionality.

Save the file. Your compose.yml is now configured for the target environment, and you're ready to restore the actual data.
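
Before moving on, it's worth sweeping the rest of the project for other references to the old server. OLD_IP below is just a placeholder for your source server's address:

# Look for leftover references to the old server anywhere in the project
grep -rn "OLD_IP" ~/project
# If your stack keeps settings in a separate .env file, check that too
grep -n "BASE_URL" ~/project/.env 2>/dev/null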

Restoring Volumes on the Target

With your configuration updated and backup archives transferred, you can now recreate your Docker volumes on the target server and populate them with data from the source. This is the inverse of the backup process: create empty volumes, then use a temporary Alpine container to extract the tar archives into those volumes.

Create the volumes first. Docker will initialize them as empty named volumes:

docker volume create easyappointments
docker volume create mysql

Now extract each backup archive into its corresponding volume:

docker run --rm -v easyappointments:/target -v ~/docker-backup:/backup alpine sh -c 'cd /target && tar xzf /backup/easyappointments.tar.gz'
docker run --rm -v mysql:/target -v ~/docker-backup:/backup alpine sh -c 'cd /target && tar xzf /backup/mysql.tar.gz'

These commands mount the newly created volumes to /target and the backup directory to /backup, then extract the compressed archives directly into the volume. The -C /source flag from the backup step becomes a cd /target here because we're running a shell command with sh -c. When the extraction completes, your volumes contain exact copies of the data from your source server, down to the file permissions and ownership.
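
Before starting anything, you can spot-check the restore with another throwaway container. The listing should mirror what was on the source, including ownership:

# List the top level of the restored MySQL volume
docker run --rm -v mysql:/target alpine ls -la /target | head -20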

At this point, your target server has your application code, your Docker volumes with all data restored, and your configuration pointing to the correct environment. Everything is in place to start the stack.

Starting the Stack on the Target

From the project directory on the target server, bring up your Docker Compose stack:

cd ~/project
docker compose up -d

The -d flag runs the containers in detached mode, meaning they run in the background. Docker Compose reads your compose.yml file, pulls any images that aren't already cached, and starts each service in the correct order based on dependencies. Because your volumes already contain data, MySQL will detect an existing database and use it rather than initializing a new one, and EasyAppointments will see its existing configuration and file uploads.

Check that containers are running:

docker compose ps

You should see both containers in an "Up" state. If any container shows "Restarting" or "Exited," check the logs:

docker compose logs mysql
docker compose logs easyappointments

Common issues at this stage include MySQL version mismatches between source and target, incorrect volume mounts in your compose.yml, or missing environment variables. The logs will tell you exactly what went wrong. Assuming everything started cleanly, your application is now running on the target server with all its original data intact.

Verifying the Migration

With containers running, open a browser and navigate to your new server's IP address. You should see your application's login page or homepage. Log in with your existing credentials—remember, this is the same database you were using before, so all users, passwords, and data should be identical.

Create a test record if your application supports it. In EasyAppointments, I created a new appointment to verify the database was writable and the application could interact with MySQL correctly. Then I checked existing records to confirm data integrity. Everything should work exactly as it did on the old server.

If you want to verify the database at a lower level, you can connect directly to MySQL inside its container:

docker exec -it $(docker ps -qf name=mysql) mysql -uroot -p -e "SHOW DATABASES;"

Enter your MySQL root password when prompted. You should see your application database listed alongside the default MySQL system databases. If you see your database and can query tables, the migration was successful.
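
For a slightly deeper check, compare row counts for an important table between the old and new servers. The database and table names below are only illustrative; use whatever your application actually stores:

# Row count check; easyappointments.ea_appointments is a placeholder for your own schema
docker exec -it $(docker ps -qf name=mysql) mysql -uroot -p -e "SELECT COUNT(*) FROM easyappointments.ea_appointments;"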

Once you're confident everything works, you can clean up the backup files on both servers to reclaim disk space. On the target server:

rm -rf ~/docker-backup

Keep the backups on the source server for a few days until you're absolutely certain you won't need to roll back.
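
Rolling back, if it ever comes to that, is as simple as restarting the stack on the source server, since its volumes were never modified:

# On the source server, if you need to abandon the migration
cd /path/to/your/project
docker compose up -d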

Automating the Entire Process

After running through these steps manually, I scripted the entire workflow so I could migrate other projects faster. The script runs on the source server and orchestrates every step: stopping containers, creating backups, transferring files, restoring volumes on the target, and starting the stack.

#!/bin/bash
# File: migrate-docker-stack.sh
# ============================================
# EasyAppointments Full Migration Script
# Run this script directly on the SOURCE server
# ============================================

set -e  # Exit on any error

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

# Configuration
TARGET_SSH="user@TARGET_IP"
TARGET_IP="YOUR_TARGET_IP"
PROJECT_DIR="/path/to/your/project"
COMPOSE_FILE="compose.yml"
BACKUP_DIR="docker-backup"
TEMP_DIR="/tmp/migration-$$"

echo -e "${GREEN}================================${NC}"
echo -e "${GREEN}EasyAppointments Migration Script${NC}"
echo -e "${GREEN}================================${NC}"
echo ""
echo -e "Running from SOURCE → Target: ${YELLOW}$TARGET_SSH${NC}"
echo ""

read -p "This will stop containers and back up data on this (source) server. Continue? (yes/no): " confirm
if [[ "$confirm" != "yes" ]]; then
  echo "Migration cancelled."
  exit 0
fi

# Step 1: Stop containers
echo -e "\n${GREEN}>>> Step 1: Stopping containers on source${NC}\n"
cd "$PROJECT_DIR"
docker compose down --remove-orphans || { echo -e "${RED}Failed to stop containers${NC}"; exit 1; }

# Step 2: Create backups
echo -e "\n${GREEN}>>> Step 2: Creating Docker volume backups${NC}\n"
mkdir -p ~/$BACKUP_DIR
docker run --rm -v easyappointments:/source -v ~/$BACKUP_DIR:/backup alpine tar czf /backup/easyappointments.tar.gz -C /source .
docker run --rm -v mysql:/source -v ~/$BACKUP_DIR:/backup alpine tar czf /backup/mysql.tar.gz -C /source .
echo -e "${GREEN}Backups created successfully${NC}"

# Step 3: Prepare temp directory
echo -e "\n${GREEN}>>> Step 3: Preparing local temp directory${NC}\n"
mkdir -p "$TEMP_DIR"
cp "$PROJECT_DIR/$COMPOSE_FILE" "$TEMP_DIR/"
cp ~/$BACKUP_DIR/*.tar.gz "$TEMP_DIR/"

# Step 4: Update compose.yml
echo -e "\n${GREEN}>>> Step 4: Updating compose.yml for target${NC}\n"
sed -i.bak "s|BASE_URL=http://.*|BASE_URL=http://$TARGET_IP|g" "$TEMP_DIR/$COMPOSE_FILE"

# Step 5: Transfer to target
echo -e "\n${GREEN}>>> Step 5: Uploading project and backups to target${NC}\n"
ssh "$TARGET_SSH" "mkdir -p ~/$BACKUP_DIR ~/project"
echo "Uploading backups..."
scp "$TEMP_DIR"/*.tar.gz "$TARGET_SSH:~/$BACKUP_DIR/"
echo "Uploading full project directory..."
rsync -av --progress "$PROJECT_DIR/" "$TARGET_SSH:~/project/" --exclude "$BACKUP_DIR"
echo "Uploading updated $COMPOSE_FILE (overwrites the copy rsync just sent)..."
scp "$TEMP_DIR/$COMPOSE_FILE" "$TARGET_SSH:~/project/"

# Step 6: Restore on target
echo -e "\n${GREEN}>>> Step 6: Restoring Docker volumes on target${NC}\n"
ssh "$TARGET_SSH" << EOF
set -e
docker volume create easyappointments
docker volume create mysql
docker run --rm -v easyappointments:/target -v ~/$BACKUP_DIR:/backup alpine sh -c 'cd /target && tar xzf /backup/easyappointments.tar.gz'
docker run --rm -v mysql:/target -v ~/$BACKUP_DIR:/backup alpine sh -c 'cd /target && tar xzf /backup/mysql.tar.gz'
cd ~/project
docker compose up -d
EOF

# Step 7: Verify
echo -e "\n${GREEN}>>> Step 7: Checking container status on target${NC}\n"
ssh "$TARGET_SSH" "docker ps --filter 'name=easyappointments' --filter 'name=mysql' --format 'table {{.Names}}\t{{.Status}}'"

# Step 8: Cleanup
echo -e "\n${GREEN}>>> Step 8: Cleanup${NC}\n"
rm -rf "$TEMP_DIR"
read -p "Clean up local backups (~/$BACKUP_DIR)? (yes/no): " cleanup
if [[ "$cleanup" == "yes" ]]; then
  rm -rf ~/$BACKUP_DIR
  echo "Local backups cleaned up."
fi

echo -e "\n${GREEN}================================${NC}"
echo -e "${GREEN}Migration Complete!${NC}"
echo -e "${GREEN}================================${NC}\n"
echo -e "Your application should now be running at: ${YELLOW}http://$TARGET_IP${NC}\n"
echo "Next steps:"
echo "  1. Test in browser"
echo "  2. Verify DB data and app login"
echo "  3. Test creating new appointments"
echo ""

This script uses a few bash features worth understanding. The set -e at the top tells bash to exit immediately if any command fails, which prevents the script from continuing after an error. The $$ in the temp directory path is the script's process ID, which ensures the temp directory name is unique if you run multiple migrations simultaneously. The heredoc syntax (<< EOF) lets you execute multiple commands on the remote server in one SSH session.

Save this as migrate-docker-stack.sh, make it executable with chmod +x migrate-docker-stack.sh, and run it with ./migrate-docker-stack.sh. The script will prompt you for confirmation before stopping containers, then walk through each step automatically. If anything fails, the set -e ensures it stops immediately so you can investigate.

Handling Common Migration Issues

During testing and production migrations, I ran into a few issues that are worth knowing about upfront. The first time I ran the migration, a docker compose down that I triggered over SSH hung indefinitely. The problem is that SSH sessions don't allocate a TTY by default, and some Docker commands expect one. The fix is to add the -t flag to SSH commands that run Docker:

ssh -t user@TARGET_IP "cd ~/project && docker compose down --remove-orphans"

Another issue I encountered was Docker commands not being found when running over SSH. Non-interactive SSH sessions use a minimal PATH that doesn't include /usr/local/bin, where Docker is often installed. You can work around this by explicitly setting the PATH in your SSH command:

ssh user@TARGET_IP "export PATH=/usr/local/bin:/usr/bin:/bin && docker compose down"

After restoring the MySQL volume, the database container occasionally fails to start due to permission issues. This happens if the MySQL user ID inside the container doesn't match the ownership of the restored files. Docker volumes usually maintain their original permissions, but if you see permission errors in the MySQL logs, you can fix them by adjusting ownership inside the container:

docker exec -it mysql chown -R mysql:mysql /var/lib/mysql
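
You can confirm whether ownership is really the problem before forcing a recursive chown by listing numeric IDs inside the container (substitute your actual container name for mysql):

# Show numeric owner and group of the restored data files
docker exec mysql ls -ln /var/lib/mysql | head
# Compare against the UID/GID the mysql user has inside the container
docker exec mysql id mysql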

Finally, I once migrated a stack and couldn't figure out why the application kept generating URLs pointing to the old server. I had updated the BASE_URL in compose.yml, but I forgot to restart the containers after editing the file. Always run docker compose down && docker compose up -d after changing environment variables to ensure containers pick up the new values.

If you'd rather keep the new host completely closed off, route traffic through Cloudflare Tunnel as described in Expose Any Docker Container with Cloudflare Tunnel (No Nginx, No Open Ports) instead of opening ports directly.

Wrapping Up

Migrating a Docker Compose stack between VPS servers comes down to treating your volumes as the source of truth. By stopping containers cleanly, archiving volumes with tar, transferring those archives over SSH, and restoring them on the target server, you can move an entire application stack without losing data or spending hours reconfiguring services. The approach works whether you're migrating a simple single-container app or a complex multi-service stack with databases, caches, and persistent file storage.

After using this process on several production migrations, I've come to appreciate how auditable it is. If something goes wrong, you have tar archives you can inspect and a source server that remains untouched until you're confident the target is working. That safety net makes the entire process less stressful.

If you're planning a VPS migration and want to move your entire Docker setup intact, this approach will save you hours of debugging and give you confidence that your data made it across correctly. Let me know in the comments if you have questions, and subscribe for more practical development guides.

Thanks, Matija


Matija Žiberna
Full-stack developer, co-founder

I'm Matija Žiberna, a self-taught full-stack developer and co-founder passionate about building products, writing clean code, and figuring out how to turn ideas into businesses. I write about web development with Next.js, lessons from entrepreneurship, and the journey of learning by doing. My goal is to provide value through code—whether it's through tools, content, or real-world software.