How to Deploy a Production-Ready React Vite App with Docker, NGINX, and SSL on VPS

Complete Docker + NGINX setup with automatic SSL and API proxying

Matija Žiberna

I was building a client application using React and Vite when I hit a wall trying to deploy it to production on a VPS. The development setup worked perfectly, but getting everything running in production with proper SSL, Docker containers, and NGINX proved much more challenging than expected. After spending days figuring out the Docker networking, SSL certificate mounting, and NGINX configuration for Vite's specific asset handling, I finally got a rock-solid production setup running.

This guide walks you through the exact implementation I developed: a complete Docker Compose setup that serves your React Vite app through NGINX with automatic SSL certificates and proper API proxying. By the end, you'll have a production-ready deployment that handles SSL termination, static asset serving, and backend API routing through a single Docker container.

Project Structure and Prerequisites

This implementation assumes you have a React Vite application created in a frontend/ folder within your project root. The Docker Compose configuration sits in the root directory, allowing you to easily add additional services like backend APIs, databases, or other microservices to your stack.

project-root/
├── frontend/              # Your React Vite application
│   ├── src/
│   ├── package.json
│   ├── pnpm-lock.yaml
│   ├── vite.config.ts
│   ├── Dockerfile
│   └── nginx.conf
├── compose.yml           # Docker Compose orchestration
└── backend/              # Optional: Your backend services

This folder structure provides clean separation of concerns while enabling easy service expansion as your application grows.

The Challenge with React Vite Production Deployments

React applications built with Vite have specific requirements in production that differ from traditional React builds. Vite generates assets with unique hashing, uses ES modules by default, and requires careful NGINX configuration to serve everything correctly. When you add Docker containers, SSL certificates, and API proxying to the mix, the complexity grows significantly.

Most tutorials cover either Docker basics or NGINX configuration in isolation, but rarely show how to orchestrate everything together for a real production environment. This implementation solves that gap by providing a complete, tested setup that handles all the moving pieces.

Docker Multi-Stage Build Strategy

The foundation of this deployment is a multi-stage Docker build that separates the build environment from the production runtime. This approach keeps the final image small while ensuring all dependencies are properly handled.

# File: frontend/Dockerfile
FROM node:22-slim AS builder
WORKDIR /app
RUN npm install -g pnpm
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL
RUN pnpm run build

FROM nginx:1.25-alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
RUN chmod -R 755 /usr/share/nginx/html

This Dockerfile uses two distinct stages. The builder stage creates a full Node.js environment using the latest LTS version (Node 22), installs pnpm globally for faster dependency management, and runs the Vite build process. The production stage takes only the compiled static files and serves them through NGINX. The node:22-slim and nginx:1.25-alpine images provide security benefits through minimal attack surfaces while maintaining all necessary functionality.

The build argument VITE_API_URL allows you to configure API endpoints at build time, which is crucial for Vite's static compilation process. The final chmod command ensures proper file permissions for the web server to serve all assets correctly.
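Before wiring the image into Compose, you can build and smoke-test it standalone. The tag and URL below are placeholders; note that the NGINX stage expects the Let's Encrypt certificate paths to exist, so a purely local run needs the ssl lines commented out first:

```shell
# Build the frontend image on its own, passing the API URL that Vite bakes in at build time
docker build \
  --build-arg VITE_API_URL=https://your-domain.com/api \
  -t frontend:latest \
  ./frontend

# Run it locally on port 8080 to smoke-test the NGINX stage
docker run --rm -p 8080:80 frontend:latest
```

If the build fails here, it will fail the same way inside Compose, so this is a faster feedback loop for Dockerfile changes.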

Vite Configuration for Production

Vite requires specific configuration adjustments to work properly in a Dockerized production environment. The key challenge is ensuring asset paths resolve correctly when served through NGINX.

// File: frontend/vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import path from "path"

export default defineConfig({
  base: './',
  server:{
    proxy:{
      "/api":{
        target: "http://localhost:8000",
        changeOrigin: true,
        headers:{
          "X-SECRET-KEY": "378bb202bf0cee67ef9b5af7703ad36305aeccdb3850f1a017c6424da16f56f3",
          "X-IS-DEV": "true"
        },
        rewrite: (path) => path.replace(/^\/api/, ""),
      }
    }
  },
  plugins: [react()],
  build: {
    rollupOptions: {},
    commonjsOptions: {
      include: [/node_modules/],
    },
  },
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src"),
    },
    dedupe: ['react', 'react-dom', 'prop-types'],
  },
  optimizeDeps: {
    include: ['react', 'react-dom', 'prop-types'],
  },
})

The base: './' setting is crucial for production deployments because it ensures all asset references use relative paths. This prevents issues when your app is served from different contexts or behind reverse proxies. The development proxy configuration shows how API calls are handled locally, including the optional X-SECRET-KEY header for backend authentication; treat the value shown here as a placeholder and keep real keys out of version control. In production, NGINX takes over this proxy responsibility.

The alias configuration with "@": path.resolve(__dirname, "./src") allows clean import paths throughout your application, while the dedupe settings optimize bundle size by preventing multiple versions of core React libraries.
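Because Vite inlines env values at compile time, you can confirm the configured URL actually made it into the bundle after a build. The grep target below assumes the placeholder domain used throughout this guide:

```shell
# After `pnpm run build`, the value of VITE_API_URL should appear verbatim in the emitted JS
grep -r "your-domain.com" frontend/dist/assets/ | head -n 3
```

If nothing matches, the build argument never reached the build stage, which usually points at a missing --build-arg or a typo in the ARG/ENV lines of the Dockerfile.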

NGINX Production Configuration

The NGINX configuration handles multiple critical functions: SSL termination, static asset serving, API proxying, and single-page application routing. This configuration file replaces the default NGINX setup and provides enterprise-level capabilities.

# File: frontend/nginx.conf
server {
    listen 80;
    listen [::]:80;
    server_name your-domain.com www.your-domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name your-domain.com www.your-domain.com;

    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers off;

    root /usr/share/nginx/html;
    index index.html;

    include /etc/nginx/mime.types;
    
    types {
        application/javascript mjs js;
    }

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml;

The first server block handles HTTP to HTTPS redirection, ensuring all traffic uses encrypted connections. The SSL certificate paths reference Let's Encrypt certificates; the complete process for setting up automatic SSL certificate generation and renewal is covered in detail in my separate guide: How to Set Up Automatic SSL Certificate Renewal with Certbot in Docker Containers.

The SSL configuration uses modern TLS protocols and cipher suites for maximum security. The session cache settings optimize SSL handshake performance for returning visitors.

The MIME type configuration is particularly important for Vite applications because it explicitly handles modern JavaScript modules and ensures proper content-type headers. The gzip settings compress static assets to reduce bandwidth usage and improve loading times.
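To get a feel for what gzip buys you on text assets, you can compress a sample file locally. The exact ratio depends on content, but repetitive JavaScript and CSS shrink dramatically:

```shell
# Generate a repetitive JS-like sample and compare raw vs gzipped size at level 6
yes 'export const value = 1;' | head -n 200 > /tmp/sample.js
raw=$(wc -c < /tmp/sample.js)
gz=$(gzip -6 -c /tmp/sample.js | wc -c)
echo "raw: ${raw} bytes, gzipped: ${gz} bytes"
```

Real bundles compress less than this artificially repetitive sample, but 60-80% savings on JS and CSS are common, which is why gzip_types targets text formats rather than already-compressed images.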

API Proxy and Asset Handling

The most complex part of the NGINX configuration involves routing API requests to your backend while serving static assets efficiently. This setup allows your frontend to make API calls to /api/* paths that get transparently proxied to your backend service.

# File: frontend/nginx.conf (continued)
    # API proxy
    location /api/ {
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://backend:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-SECRET-KEY "378bb202bf0cee67ef9b5af7703ad36305aeccdb3850f1a017c6424da16f56f3";
        proxy_cache_bypass $http_upgrade;
        proxy_buffering off;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        proxy_connect_timeout 60s;
    }

    # Assets - ONE simple block for all static files
    location /assets/ {
        root /usr/share/nginx/html;
        autoindex off;
        
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;
        
        try_files $uri =404;
    }

    # Handle all other routes
    location / {
        try_files $uri $uri/ /index.html;
        add_header Cache-Control "no-cache";
    }

    server_tokens off;
}

The API proxy configuration strips the /api prefix from incoming requests and forwards them to the backend service running on port 8000. The X-SECRET-KEY header is an optional authentication mechanism that lets your backend verify requests originate from this frontend, which is useful when only your frontend should reach the API. It provides a simple layer of security for internal service communication; production applications handling user data will want something stronger, such as JWT tokens or OAuth flows.
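The prefix stripping performed by rewrite ^/api/(.*)$ /$1 break; can be sanity-checked outside NGINX with the equivalent regex. The paths below are illustrative:

```shell
# Simulate NGINX's /api prefix rewrite with the same regex semantics
strip_api() {
  printf '%s\n' "$1" | sed -E 's|^/api/(.*)$|/\1|'
}

strip_api "/api/users/42"   # -> /users/42
strip_api "/api/health"     # -> /health
strip_api "/healthz"        # unchanged: no /api/ prefix, so the pattern does not match
```

This is the request shape your backend must expect: routes registered without the /api prefix, since NGINX removes it before proxying.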

The asset handling uses a no-cache strategy, which is important during development and deployment phases when you want immediate updates. For high-traffic production sites, you might want to implement proper cache headers based on your deployment frequency.

The final location block implements single-page application routing by falling back to index.html for any route that doesn't match a static file. This allows React Router to handle client-side routing properly.

Docker Compose Orchestration

The Docker Compose configuration ties everything together, managing the container lifecycle, network connectivity, and volume mounting for SSL certificates.

# File: compose.yml
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
    environment:
      - SECRET_KEY=378bb202bf0cee67ef9b5af7703ad36305aeccdb3850f1a017c6424da16f56f3
    networks:
      - app-network
    volumes:
      - .:/workspace:cached
      - frontend-node-modules:/workspace/frontend/node_modules
      - /etc/letsencrypt:/etc/letsencrypt:ro
    depends_on:
      - backend

volumes:
  frontend-node-modules:

networks:
  app-network:
    driver: bridge

This configuration builds your frontend container from the local Dockerfile and exposes both HTTP and HTTPS ports. The critical volume mount /etc/letsencrypt:/etc/letsencrypt:ro provides read-only access to SSL certificates generated by Let's Encrypt on the host system.

The workspace and frontend-node-modules mounts are development conveniences: they only matter if you run tooling inside the container, because the production image is built entirely in the multi-stage Dockerfile, which relies on Docker's layer caching rather than these volumes. In a pure production deployment you can drop both. The app-network creates isolated network communication between your frontend and backend services.

The environment variables and the depends_on clause handle configuration passing and startup ordering; note that depends_on assumes a backend service is defined in the same file, and it only controls start order, not readiness. In production environments, you'll want to move sensitive values like SECRET_KEY to environment files or a secrets management system.
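As a sketch of that hardening step: Docker Compose automatically interpolates variables from a .env file sitting next to compose.yml, so the key never has to live in the committed file. The key name and value here are placeholders:

```yaml
# File: .env (git-ignored)
# SECRET_KEY=replace-with-a-generated-value

# File: compose.yml (relevant excerpt)
services:
  frontend:
    environment:
      - SECRET_KEY=${SECRET_KEY}
```

Compose substitutes ${SECRET_KEY} at startup and warns if the variable is unset, which makes missing configuration visible immediately.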

Security Considerations and Limitations

This setup implements several security best practices but has limitations that you should understand for production use. The SSL configuration uses modern TLS protocols and secure cipher suites. The server_tokens off directive hides NGINX version information from potential attackers. Network isolation through Docker networks prevents unauthorized access between services.

However, the authentication approach using hardcoded API keys in headers is basic and suitable primarily for internal service communication. For production applications handling user authentication, you'll want to implement JWT tokens, OAuth flows, or other enterprise-grade authentication mechanisms.

The no-cache headers on assets prioritize development convenience over performance. High-traffic applications should implement proper cache headers with versioning strategies to reduce server load and improve user experience.
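If you do switch to long-lived caching, Vite's content-hashed filenames make it safe to mark assets as immutable, since every new build produces new filenames. A minimal variant of the assets block, replacing the no-cache version shown earlier, might look like this:

```nginx
# Hashed Vite assets can be cached aggressively: a new build emits new filenames,
# so stale caches are never served after a deploy
location /assets/ {
    root /usr/share/nginx/html;
    expires 1y;
    add_header Cache-Control "public, immutable";
    try_files $uri =404;
}
```

Keep index.html itself on no-cache either way; it is the un-hashed entry point that references the current asset filenames.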

Deployment Process

To deploy this setup on your Ubuntu VPS, follow these steps in order. First, ensure your domain's DNS records point to your VPS IP address and that you have Docker and Docker Compose installed.

If you're starting fresh, create your React Vite application in the frontend folder:

# In your project root
pnpm create vite frontend --template react-ts
cd frontend
pnpm install

Generate your initial SSL certificates using certbot before starting the containers:

sudo certbot certonly --standalone -d your-domain.com -d www.your-domain.com

For automatic certificate renewal, reference the detailed setup process in my comprehensive guide: How to Set Up Automatic SSL Certificate Renewal with Certbot in Docker Containers.

Clone your application repository to the VPS and build the containers from the project root:

git clone your-repository
cd your-project
docker compose up --build -d

Verify the deployment by checking container logs and accessing your domain through HTTPS. The setup should serve your React application with proper SSL certificates and functional API routing.
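A couple of quick checks from your local machine will confirm the redirect and certificate are wired up. Substitute your actual domain; the /api/health path is a hypothetical endpoint standing in for whatever your backend exposes:

```shell
# HTTP should answer with a 301 pointing at https://
curl -sI http://your-domain.com | head -n 5

# HTTPS should return the app with a valid certificate (no -k flag needed)
curl -sI https://your-domain.com | head -n 5

# API routing: this should reach the backend, not fall through to index.html
curl -si https://your-domain.com/api/health | head -n 5
```

If the third check returns HTML instead of a backend response, the /api/ location block is not matching, which usually means a typo in the location path or an NGINX config that never got copied into the image.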

Troubleshooting Common Issues

SSL certificate path issues are the most common problem with this setup. Ensure the certificate paths in your NGINX configuration exactly match the paths created by certbot. Container permission issues can also prevent NGINX from reading certificates; verify that the certificate files have appropriate read permissions.

Asset loading problems typically stem from incorrect NGINX MIME type handling or path misconfigurations. The types block in the NGINX configuration specifically addresses Vite's JavaScript module requirements. If API calls fail, verify that your backend service is running and accessible through the Docker network on the expected port.

Docker network connectivity issues can prevent the frontend from reaching backend services. Use docker network ls and docker exec commands to debug network connectivity between containers.
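The commands below are typical starting points for that debugging session; the service and network names match the compose.yml above:

```shell
# List networks (Compose prefixes the name with the project directory, e.g. myproject_app-network)
docker network ls

# Show which containers are attached to the network
docker network inspect app-network --format '{{range .Containers}}{{.Name}} {{end}}'

# From inside the frontend container, check DNS resolution and reachability of the backend
# (nginx:alpine ships BusyBox wget, so no extra tooling is needed)
docker exec -it $(docker ps -qf name=frontend) \
  sh -c 'wget -qO- http://backend:8000/ || echo "backend unreachable"'
```

If the service name backend does not resolve inside the container, both services are not on the same network, which points back at the networks section of compose.yml.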

Production Deployment Success

This implementation provides a solid foundation for deploying React Vite applications in production environments. You now have a complete Docker-based setup that handles SSL termination, static asset serving, API proxying, and container orchestration through a single configuration.

The multi-stage Docker build optimizes image size while maintaining all necessary functionality. The NGINX configuration properly handles Vite's asset requirements while providing enterprise-level SSL and proxy capabilities. The Docker Compose orchestration ties everything together with proper networking and volume management.

While this setup uses basic authentication suitable for internal services, you can extend it with more sophisticated authentication mechanisms as your application requirements grow. Let me know in the comments if you have questions, and subscribe for more practical development guides.

Thanks, Matija
