Docker Best Practices for Web Developers: Faster Builds, Smaller Images, Safer Containers (With Working Examples)

Docker can make your dev and deploy workflow boring—in a good way. But a “working” Dockerfile is often slow to build, huge to ship, and risky to run. This guide shows practical, copy-pasteable best practices you can apply to most web apps (Node, Python, PHP, etc.) with examples you can run today.

We’ll cover:

  • Small, repeatable images with pinned bases
  • Fast rebuilds using layer caching
  • Multi-stage builds (build tools stay out of production images)
  • Non-root containers + least privilege
  • Health checks and sane defaults
  • Local development with docker compose

1) Start With a Good Base Image (and Pin It)

Use a minimal base when possible. For many web workloads, alpine or slim images reduce size. Also pin versions to avoid surprise breakage on rebuilds.

  • Good: node:20-alpine, python:3.12-slim
  • Avoid “floating latest” for production: node:latest

Example:

```dockerfile
# Good: pinned major version and small base
FROM node:20-alpine
```

If you want maximum reproducibility, pin a digest too (advanced but great for production). You’d use:

```dockerfile
FROM node:20-alpine@sha256:...
```

2) Use a .dockerignore (Big Win, Often Forgotten)

Your build context is everything Docker can “see” when building. If you accidentally send node_modules, logs, or git history, builds slow down and the layer cache gets invalidated needlessly.

```
# .dockerignore
node_modules
dist
build
.git
.gitignore
npm-debug.log
yarn-error.log
.env
.DS_Store
```

Rule of thumb: if it’s generated or secret, ignore it.

3) Optimize Layer Caching (Copy Dependency Files First)

Docker caches layers. If you copy your whole project before installing dependencies, any code change invalidates the dependency install layer—making rebuilds painfully slow.

For Node apps, copy only package.json and lock files first, install, then copy the rest:

```dockerfile
# Example: optimized caching pattern
FROM node:20-alpine
WORKDIR /app

# 1) Copy only dependency manifests
COPY package.json package-lock.json ./

# 2) Install deps (cached unless lockfile changes)
RUN npm ci

# 3) Now copy application code
COPY . .

CMD ["npm", "start"]
```

The same idea applies to Python (requirements.txt), PHP (composer.json), etc.
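For Python, the pattern looks like this (a sketch: `app.py` as the entry point is an assumption; adjust to your project):

```dockerfile
# Same caching idea for Python
FROM python:3.12-slim
WORKDIR /app

# 1) Copy only the dependency manifest
COPY requirements.txt .

# 2) Install deps (cached unless requirements.txt changes)
RUN pip install --no-cache-dir -r requirements.txt

# 3) Now copy application code
COPY . .

CMD ["python", "app.py"]
```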

4) Multi-Stage Builds: Keep Build Tools Out of Production

Multi-stage builds let you “build” in one image (with compilers, dev deps, tooling) and “run” in another (tiny, minimal). This reduces size and attack surface.

Below is a working example for a Node web app that builds a production bundle and serves it with Nginx.

```dockerfile
# Dockerfile
# --- Stage 1: build ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Example build output goes to /app/dist (common for Vite/React/etc.)
RUN npm run build

# --- Stage 2: run ---
FROM nginx:1.27-alpine AS run
# Copy built static assets into Nginx html directory
COPY --from=build /app/dist /usr/share/nginx/html
# Optional: custom Nginx config (uncomment if you have one)
# COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Why this matters:

  • Your final image doesn’t include Node, npm cache, or source files.
  • Build dependencies never ship to production.
  • Final image is usually dramatically smaller.
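The same idea works when the final image still needs Node (an API server rather than static files). A sketch, assuming a `server.js` entry point:

```dockerfile
# --- Stage 1: install production deps only ---
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# --- Stage 2: run ---
FROM node:20-alpine AS run
WORKDIR /app
# Only production node_modules make it into the final image
COPY --from=deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Dev dependencies, the npm cache, and any build tooling stay behind in the first stage.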

5) Run as a Non-Root User

By default, many images run processes as root. If an attacker breaks out of your app, root inside the container increases risk. Run as a non-root user when possible.

Here’s a practical pattern for a Python FastAPI-style app (works for any Python ASGI app):

```dockerfile
# Dockerfile (Python example)
FROM python:3.12-slim

# Create a non-root user
RUN useradd -m appuser

WORKDIR /app

# Install dependencies first for caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the app code
COPY . .

# Change ownership (so appuser can read files)
RUN chown -R appuser:appuser /app

USER appuser
EXPOSE 8000
CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Notes:

  • --no-cache-dir keeps the image smaller.
  • Some bases (like node:alpine) already include a non-root user (node), which you can use with USER node.
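On Node images, no useradd is needed. A sketch using the built-in `node` user (`server.js` is an assumed entry point):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Copy with correct ownership so the runtime user can read the files
COPY --chown=node:node . .
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```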

6) Set Environment Defaults and Don’t Bake Secrets In

Use environment variables for configuration. Don’t copy .env into images, and don’t hardcode secrets in your Dockerfile.

In your Dockerfile, you can set safe defaults:

```dockerfile
ENV NODE_ENV=production
ENV PORT=3000
```

Then pass real values at runtime via Compose or your deployment platform:

```shell
# Example runtime override
docker run -e PORT=8080 myapp:latest
```
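At the application level, the same rule means: read defaults for harmless settings, but fail hard when a required secret is missing. A minimal sketch (variable names like `DATABASE_URL` are assumptions, matching the Compose example later in this article):

```python
import os

def load_config():
    """Safe defaults for non-secret settings; hard failure for required secrets."""
    url = os.environ.get("DATABASE_URL")  # no default: must be injected at runtime
    if url is None:
        raise RuntimeError("DATABASE_URL must be set at runtime")
    return {
        "port": int(os.environ.get("PORT", "3000")),  # safe default
        "database_url": url,
    }

# Simulate the injection that `docker run -e` or Compose would do
os.environ.pop("PORT", None)
os.environ["DATABASE_URL"] = "postgresql://example"
config = load_config()
print(config["port"])  # → 3000
```

Failing fast on a missing secret beats starting up with a broken (or baked-in) default.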

7) Add a Health Check (So Orchestrators Can Help You)

A health check lets Docker (and tools like Kubernetes) know whether your container is actually “healthy” and serving requests.

Example for an HTTP service:

```dockerfile
# Add to Dockerfile (works if your image has wget or curl)
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:8000/health || exit 1
```

If your base image doesn’t include wget or curl, you can:

  • Install a tiny tool (tradeoff: slightly bigger image), or
  • Use an app-level approach (some orchestrators do this outside the image).
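The endpoint itself is trivial to add. A minimal sketch using only the Python standard library (the `/health` path matches the HEALTHCHECK example above; a real app would use its framework's routing instead):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ok")  # cheap check; add DB pings only if needed
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, format, *args):
        pass  # keep request logging quiet

# Bind to an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/health").read()
print(body.decode())  # → ok
server.shutdown()
```

Keep the check cheap: a health endpoint that does heavy work can itself take the service down under load.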

8) Local Dev With docker compose (App + Database)

Compose helps you run realistic stacks locally. Here’s a simple setup: a web app + Postgres. It also demonstrates persistent volumes and environment variables.

```yaml
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/appdb
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=appdb
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Commands you’ll actually use:

```shell
# Build and start
docker compose up --build

# Stop
docker compose down

# Stop and delete volumes (careful: deletes DB data)
docker compose down -v
```
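One caveat: plain depends_on only waits for the db container to start, not for Postgres to accept connections. A health-gated variant (a sketch; the service names match the example above):

```yaml
# Sketch: gate web on a healthy Postgres
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this, `docker compose up` holds back web until the db health check passes.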

9) Practical Checklist: “Is My Docker Setup Good?”

  • .dockerignore exists and excludes large/generated files
  • Dependency install step is cached (copy lockfile before source)
  • Multi-stage build used when compilation/build tools are needed
  • Final image is minimal (no build deps, no source if not needed)
  • Runs as non-root (USER set)
  • Secrets are not baked into the image
  • Health check exists (or is handled by the platform)
  • Compose config supports local stack with volumes for DB state

10) Quick “Before vs After” Example (Common Fix)

Here’s a classic “slow builds” anti-pattern:

```dockerfile
# Bad: any code change forces npm install again
COPY . .
RUN npm install
```

Replace with:

```dockerfile
# Good: dependency layer cached
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
```

This one change often cuts rebuild time from minutes to seconds.

Wrap-Up

Docker best practices aren’t about perfection—they’re about repeatability, speed, and safer defaults. If you implement only three things from this article, make it: a solid .dockerignore, caching-friendly dependency layers, and multi-stage builds. Then add non-root users and health checks as you mature your deployment pipeline.


