Docker Best Practices for Web Apps: Smaller Images, Faster Builds, Safer Containers

If you’ve used Docker a bit, you’ve probably hit at least one of these: giant images, slow rebuilds, “works on my machine,” or containers running as root. This guide focuses on practical, repeatable Docker patterns for web apps that improve build speed, runtime performance, and security—without turning your setup into a science project.

1) Start with a Good Base Image (and Pin It)

Your base image sets the tone for size and security. Prefer official images and use “slim” variants when you can. Also pin versions so you don’t get surprise breakages on rebuild.

  • Prefer node:20-slim over node:latest
  • Prefer python:3.12-slim over python:latest
  • Pin major/minor versions at minimum (e.g., 20 or 20.11)

Why it matters: smaller base images reduce download time and the number of packages you’re responsible for patching.
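
If you want fully reproducible pulls, you can go one step further and pin the exact image digest. A sketch (the digest is a placeholder, not a real value; look one up with docker buildx imagetools inspect):

# Pinned by tag: good enough for most teams
FROM node:20.11-slim

# Pinned by digest: strictest option (placeholder shown, not a real digest)
# FROM node:20-slim@sha256:<digest>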

2) Use a .dockerignore (Build Speed Win in 30 Seconds)

Docker sends a “build context” (your project files) to the daemon. If that context includes node_modules, logs, and git history, your builds slow down and the cache gets busted by files that don’t even affect your app.

# .dockerignore
node_modules
dist
coverage
*.log
.git
.gitignore
.env
.DS_Store

This is one of the easiest improvements you can make—especially for JS projects.
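
A quick way to sanity-check the win is to see how big the excluded directories actually are. The paths below are typical examples; adjust them to your layout:

du -sh node_modules .git dist 2>/dev/null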

3) Multi-Stage Builds: Build Big, Ship Small

Multi-stage builds let you use heavy tooling (compilers, dev dependencies) in a build stage, then copy only the final artifacts into a clean runtime stage.

Here’s a practical example for a Node + TypeScript API that produces a small runtime image.

# syntax=docker/dockerfile:1
# Dockerfile

FROM node:20-slim AS deps
WORKDIR /app
# Copy only dependency manifests first (cache-friendly)
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-slim AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Drop dev dependencies so the runtime stage ships only what it needs
RUN npm prune --omit=dev

FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production

# Create a non-root user
RUN useradd -m appuser

# Copy only what you need at runtime
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package.json ./

USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]

Key pattern: copy dependency files first → install → copy source → build. This maximizes Docker layer caching so you don’t reinstall dependencies on every change.
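
A useful side effect of naming stages: you can build and inspect any intermediate stage on its own while debugging. The image tag below is just an example:

# Stop at the "build" stage and tag it so you can poke around inside
docker build --target build -t my-api:build-stage .
docker run --rm -it my-api:build-stage sh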

4) Layer Caching: Put “Stable” Steps First

Docker rebuilds layers from the first change it detects. If you copy your whole repo before installing dependencies, every code change invalidates the dependency layer.

  • Copy package.json/package-lock.json first
  • Install dependencies
  • Then copy the rest of your source

For Python, the same idea applies: copy requirements.txt first, install, then copy the app.

# Dockerfile (Python example)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

5) Run as Non-Root (Basic Container Security)

By default, many images run as root. If your app is compromised, an attacker starts with elevated permissions inside the container. Running as a non-root user reduces blast radius.

In Debian-based images (like -slim), you can add a user:

RUN useradd -m appuser
USER appuser
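
One gotcha: files copied with COPY are owned by root by default, which is fine for read-only code but fails if the app needs to write. If you hit that, copy with the --chown flag (using the appuser created above):

COPY --chown=appuser:appuser . .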

Non-root processes can’t bind to ports below 1024 by default, so rather than granting extra capabilities, keep your app on a high port inside the container and map it on the host:

# Host 8080 → Container 3000
docker run -p 8080:3000 your-image

6) Healthchecks: Make “Is It Alive?” Automatic

Healthchecks make container status visible to the tooling around it: Docker marks the container unhealthy, Swarm replaces unhealthy tasks, and Compose can gate depends_on on health. (Kubernetes ignores Docker’s HEALTHCHECK in favor of its own probes.) Add a simple HTTP health endpoint in your app (e.g., /health), then wire it up:

# Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD node -e "fetch('http://localhost:3000/health').then(r=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"

If your runtime doesn’t have fetch available (Node ships it globally since 18), use curl (but that may require installing it). Another option is a tiny in-app script that checks critical dependencies.
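
For reference, here’s what that /health endpoint might look like. This is a minimal standalone sketch using Node’s built-in http module; in a real app it would be a route in your existing server, and the dependency checks are yours to fill in:

// health endpoint sketch (standalone, no framework assumed)
const http = require("http");

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // In a real app, check critical dependencies here (DB ping, cache, etc.)
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);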

7) Use Docker Compose for Local Dev (But Separate Dev vs Prod)

Compose is great for local environments: app + database + cache in one command. Keep it explicit and readable.

# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://postgres:postgres@db:5432/app
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=app
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 3s
      retries: 5

Tip: In dev, you might mount your source code as a volume for hot reload. In prod, you usually don’t—prod should run immutable images.
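
A common way to keep that split explicit is a Compose override file, which docker compose merges automatically in dev and which you simply don’t ship to prod. A sketch, assuming your package.json defines a dev script with a file watcher:

# docker-compose.override.yml (merged automatically by `docker compose up`)
services:
  api:
    volumes:
      - ./src:/app/src      # mount source over the image's copy for hot reload
    command: npm run dev    # assumes a "dev" script (e.g., nodemon or tsx watch)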

8) Keep Secrets Out of Images

Never bake secrets into Docker images (or commit them). Avoid copying .env into the image and don’t set sensitive values in the Dockerfile. Pass secrets at runtime via environment variables or your platform’s secret manager.

  • Don’t do: COPY .env .
  • Do: docker run -e API_KEY=... your-image (or Compose environment:)
  • Better: platform secrets (GitHub Actions secrets, Kubernetes secrets, etc.)
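
For secrets needed during the build itself (say, a token for a private package registry), BuildKit secret mounts expose the value to a single RUN step without writing it into any image layer. A sketch, assuming your token lives in an .npmrc file:

# In the Dockerfile (needs BuildKit and the dockerfile:1 syntax directive)
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci

# At build time, pass the file in
docker build --secret id=npmrc,src=$HOME/.npmrc -t your-image .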

9) BuildKit Cache Mounts (Optional, Big Payoff)

If you’re building frequently (CI), BuildKit can speed up dependency installs using cache mounts. This is optional but very effective.

Example for npm:

# syntax=docker/dockerfile:1.5
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci

To enable BuildKit locally (it’s the default builder since Docker 23, so you may not need this):

DOCKER_BUILDKIT=1 docker build -t your-image .
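
The same idea works for pip. Note that the cache mount wants pip’s cache to persist, so drop the --no-cache-dir flag from the earlier Python example:

# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt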

10) A Quick “Good Dockerfile” Checklist

  • ✅ Use a pinned, official base image (e.g., node:20-slim, python:3.12-slim)
  • ✅ Add .dockerignore to keep build context small
  • ✅ Structure layers for caching (copy manifests → install → copy source)
  • ✅ Use multi-stage builds for compiled/bundled apps
  • ✅ Run as non-root where possible
  • ✅ Add a healthcheck for long-running services
  • ✅ Keep secrets out of images and git
  • ✅ Use Compose for local stacks (app + db + cache)

Putting It Together: A Practical Build + Run Flow

Once you have a clean Dockerfile, your workflow becomes predictable:

# Build
docker build -t my-api:latest .

# Run locally
docker run --rm -p 3000:3000 -e NODE_ENV=production my-api:latest

# Or run the full stack with Compose
docker compose up --build

With these patterns, you’ll get faster rebuilds, smaller images, fewer “it broke after pulling latest” moments, and containers that behave more like production. The best part is that none of this is advanced magic: just a set of habits that compound over time.

