
Docker for Laravel Developers: The Only Guide You Actually Need

Stuart Mason · 7 min read


I avoided Docker for years. Laravel Herd (and Valet before it) made local development so painless that containerisation felt like needless complexity. Then I had to deploy to a VPS and everything changed.

Here's the thing: Docker isn't really about local development. It's about making deployment predictable. "It works on my machine" becomes "it works on every machine" and suddenly the thing that seemed like overkill makes perfect sense.

Why Bother?

If you're using Herd for local development and deploying to a managed platform like Forge or Ploi, you might not need Docker at all. Those tools manage your server environment for you.

But if you want to self-host (which I'll argue you should in Article 09), Docker gives you:

  • Reproducible environments. Your production server runs the exact same PHP version, extensions, and configuration as your local setup.
  • Easy scaling. Need another instance? Spin up another container.
  • Isolation. Your app doesn't care what else is running on the server.
  • Portable deployment. Move from one VPS to another by pushing a container, not by spending two hours configuring a server.

The Dockerfile

Here's a Dockerfile similar to what I use in production. It's a multi-stage build that handles both PHP and Node assets:

# Stage 1: Build frontend assets
FROM node:22-alpine AS node-builder

WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Install PHP dependencies
FROM composer:2 AS composer-builder

WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader --prefer-dist

COPY . .
RUN composer dump-autoload --optimize

# Stage 3: Production image
FROM php:8.4-fpm-alpine

# Install system dependencies
RUN apk add --no-cache \
    nginx \
    supervisor \
    postgresql-dev \
    libzip-dev \
    icu-dev \
    linux-headers

# Install PHP extensions
RUN docker-php-ext-install \
    pdo_pgsql \
    zip \
    intl \
    opcache \
    pcntl \
    bcmath

# Install Redis extension (PECL needs build tools, which the Alpine image lacks)
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && apk del .build-deps

# Configure PHP for production
COPY docker/php/php.ini /usr/local/etc/php/php.ini
COPY docker/php/www.conf /usr/local/etc/php-fpm.d/www.conf

# Configure Nginx
COPY docker/nginx/default.conf /etc/nginx/http.d/default.conf

# Configure Supervisor
COPY docker/supervisor/supervisord.conf /etc/supervisord.conf

# Set working directory
WORKDIR /var/www/html

# Copy application code first, then the build artifacts
# (so a stray local vendor/ or public/build/ can't clobber them)
COPY . .
COPY --from=composer-builder /app/vendor ./vendor
COPY --from=node-builder /app/public/build ./public/build

# Set permissions
RUN chown -R www-data:www-data storage bootstrap/cache

# Optimise Laravel
RUN php artisan config:cache \
    && php artisan route:cache \
    && php artisan view:cache

EXPOSE 80

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

Let me explain the stages:

Stage 1 installs Node dependencies and builds your frontend assets (Vite). This container is thrown away — only the compiled assets survive.

Stage 2 installs Composer dependencies. Again, this container is thrown away — only the vendor directory survives.

Stage 3 is the actual production image. It starts from php:8.4-fpm-alpine (Alpine Linux for small image size), installs the PHP extensions you need, copies in the compiled assets and vendor directory from the previous stages, and optimises Laravel.

The multi-stage approach means your production image doesn't contain Node, npm, Composer, or any build tools. It's just PHP-FPM, Nginx, and your application code. Smaller image = faster deployments.
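The Dockerfile's CMD hands the container over to Supervisor, which keeps Nginx and PHP-FPM running as foreground processes. The actual docker/supervisor/supervisord.conf isn't shown above; a minimal sketch, assuming the Alpine paths used in the Dockerfile, might look like this:

```ini
; docker/supervisor/supervisord.conf - illustrative sketch, not the author's exact file
[supervisord]
nodaemon=true                  ; stay in the foreground so the container keeps running
logfile=/dev/null              ; log to container stdout/stderr, not files
logfile_maxbytes=0

[program:php-fpm]
command=php-fpm -F             ; -F keeps PHP-FPM in the foreground
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```

The important line is nodaemon=true: if supervisord daemonises, the container's main process exits and Docker stops the container.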

Docker Compose for Local Development

You don't need to use Docker for local development if you're on Herd. I don't. Herd handles PHP, Nginx, and DNS for me locally, and Docker handles the production build.

But if you want a consistent Docker-based local environment (or you're on Linux where Herd doesn't exist):

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/var/www/html
      - /var/www/html/vendor
      - /var/www/html/node_modules
    ports:
      - "8000:80"
    depends_on:
      - postgres
      - redis
    environment:
      - APP_ENV=local
      - DB_CONNECTION=pgsql
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_DATABASE=app
      - DB_USERNAME=app
      - DB_PASSWORD=secret
      - REDIS_HOST=redis

  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Note the volume mounts for the app service: .:/var/www/html means your local files are reflected in the container immediately. The anonymous volumes for vendor and node_modules prevent your local copies from overriding the container's copies (which may contain different platform-specific binaries).
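The compose file references a Dockerfile.dev that isn't shown here. A minimal sketch, assuming you want the same extensions as production but without the build stages (the bind mount supplies the code, and composer/npm run inside the container):

```dockerfile
# Dockerfile.dev - illustrative sketch
FROM php:8.4-fpm-alpine

RUN apk add --no-cache nginx supervisor postgresql-dev libzip-dev icu-dev linux-headers

RUN docker-php-ext-install pdo_pgsql zip intl opcache pcntl bcmath

# PECL needs build tools on Alpine
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && apk del .build-deps

# Reuse the same Nginx/Supervisor config as production (paths assumed)
COPY docker/nginx/default.conf /etc/nginx/http.d/default.conf
COPY docker/supervisor/supervisord.conf /etc/supervisord.conf

WORKDIR /var/www/html

# No COPY of application code: the bind mount in docker-compose.yml provides it
EXPOSE 80

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
```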

PHP Extensions: The Ones You Actually Need

Every Laravel app needs certain PHP extensions. Here's my standard list:

| Extension | Why |
| --- | --- |
| pdo_pgsql (or pdo_mysql) | Database |
| zip | Composer, file handling |
| intl | Internationalisation, currency formatting |
| opcache | Performance (massive difference in production) |
| pcntl | Horizon, queue workers |
| bcmath | Financial calculations |
| redis | Cache, sessions, queues |

You probably don't need: gd (unless you process images), imagick (same), soap (unless you're integrating with enterprise APIs from 2005), xdebug (never in production, only in a dev Dockerfile).
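If you do turn out to need one of these, the pattern is the same as the others: install the system libraries first, then build the extension. For example, gd with JPEG support on Alpine:

```dockerfile
# Only if you process images - adds libpng/libjpeg and builds gd
RUN apk add --no-cache libpng-dev libjpeg-turbo-dev \
    && docker-php-ext-configure gd --with-jpeg \
    && docker-php-ext-install gd
```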

Environment Variables

Don't bake environment variables into your Docker image. Ever. Your image should be environment-agnostic.

# WRONG - don't do this
ENV APP_KEY=base64:your-key-here
ENV DB_PASSWORD=production-password

# RIGHT - set these at runtime
# (via docker run -e, docker-compose.yml, or your deployment tool)

In Coolify, you set environment variables through the UI. They're injected at runtime, not build time. This means the same Docker image can run in staging and production with different configurations.
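The same idea works outside Coolify: point one environment-agnostic image at a different env file per environment. A sketch using Compose's env_file option (the image name and file name are illustrative):

```yaml
services:
  app:
    image: registry.example.com/myapp:latest   # identical image in staging and production
    env_file:
      - .env.production                        # injected at container start, never baked into the image
```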

Deploying with Coolify

Here's the actual deployment workflow I use:

  1. Push to main branch
  2. Coolify detects the push (via webhook)
  3. Coolify builds the Docker image using the Dockerfile in the repository
  4. Coolify runs the new container
  5. Coolify switches traffic to the new container
  6. Old container is removed

Zero downtime. No SSH. No manually running commands on a server.

The Coolify configuration lives in the Coolify UI, not in the repository. You set:

  • The Git repository URL
  • The branch to deploy from
  • The Dockerfile path
  • Environment variables
  • Domain name and SSL configuration
  • Resource limits (CPU, memory)

That's it. Push to main, wait 2-3 minutes, and your changes are live.

Common Mistakes

Building assets inside the production image without multi-stage builds. Your production image doesn't need Node.js. Use a multi-stage build to compile assets separately.

Not caching Composer/npm installs. Copy composer.json and composer.lock before copying the rest of the application. Docker caches layers, so if your dependencies haven't changed, it won't reinstall them.

# Good - dependencies cached unless lock file changes
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --prefer-dist
COPY . .

# Bad - reinstalls everything on any file change
COPY . .
RUN composer install --no-dev --prefer-dist

Running as root. PHP-FPM should run as www-data, not root. Your Dockerfile should set appropriate permissions.
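The docker/php/www.conf copied in the Dockerfile is where the PHP-FPM pool user is set. A minimal sketch of the relevant lines (the pm values are illustrative and should be tuned to your server):

```ini
; docker/php/www.conf - illustrative sketch
[www]
user = www-data                ; worker processes run as www-data, not root
group = www-data
listen = 127.0.0.1:9000        ; Nginx proxies FastCGI requests here
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```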

Forgetting to run Laravel optimisations. config:cache, route:cache, and view:cache make a meaningful performance difference in production. Run them in the Dockerfile build step, not at runtime.

Not setting OPcache properly. OPcache is the single biggest performance improvement for PHP in production. Configure it:

; docker/php/php.ini
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=64
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0
opcache.revalidate_freq=0

The key setting is validate_timestamps=0. In production your files never change (each deployment is a fresh container), so PHP skips checking whether files have been modified. That saves stat calls on every request.
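The flip side: in a dev image, keep timestamp validation on, or your code changes won't be picked up until the container restarts. A hedged override for a dev-only ini file:

```ini
; docker/php/php.dev.ini - local development only (filename illustrative)
opcache.enable=1
opcache.validate_timestamps=1   ; check files for changes
opcache.revalidate_freq=0       ; on every request
```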

The Minimal Approach

You don't need Docker Swarm. You don't need Kubernetes. You don't need container orchestration. For most Laravel apps, you need:

  • A Dockerfile that builds a production image
  • A VPS with Docker and Coolify installed
  • A webhook that triggers a build on push

That's it. It's not glamorous. It's not a conference talk. It just works, and it costs £5-20/month for the VPS instead of £50-200/month for a PaaS.


I write about Laravel, AI tooling, and building software. More at stuartmason.co.uk.
