Docker Best Practices for Web Dev: Faster Builds, Smaller Images, Safer Containers
If you’re using Docker for a web app, you’re probably aiming for three things: repeatable builds, fast local dev, and production images that are small and secure. This article walks through a practical setup you can copy into real projects, with working examples for a typical Node.js API (the same ideas apply to Python, PHP, etc.).
We’ll cover:
- How to structure a `Dockerfile` for speed (layer caching) and size (multi-stage builds)
- Running as a non-root user
- Using `.dockerignore` correctly
- Local development with `docker compose`
- Simple runtime hardening (health checks, read-only filesystem, dropping caps)
1) Start with a good mental model: layers and cache
Docker builds images in layers. If a layer hasn’t changed, Docker can reuse it. Your goal is to structure the Dockerfile so the “expensive” steps (like installing dependencies) are cached as often as possible.
Rule of thumb: copy dependency files first (like package.json), install deps, then copy the rest of the source code.
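To see why the ordering matters, compare a cache-unfriendly layout with a cache-friendly one (a hypothetical Node app layout is assumed):

```dockerfile
# Cache-unfriendly: any source change invalidates the npm install layer
# FROM node:20-alpine
# WORKDIR /app
# COPY . .
# RUN npm ci

# Cache-friendly: the install layer is reused until the lockfile changes
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
```

With the second ordering, editing application code only rebuilds the final `COPY` layer; `npm ci` stays cached.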
2) Add a .dockerignore (seriously)
Without .dockerignore, Docker sends your entire project directory as build context—including node_modules, logs, and test artifacts. That makes builds slower and can leak secrets.
```
# .dockerignore
node_modules
npm-debug.log
yarn-error.log
.git
.gitignore
.env
*.pem
*.key
dist
build
coverage
.DS_Store
```
Tip: if your app builds a dist/ folder, ignore it unless you explicitly need it in the image.
3) A production Dockerfile: multi-stage + non-root
Multi-stage builds let you use a “builder” stage with dev tooling, then copy only what you need into a smaller runtime stage. Here’s a practical Node.js example (Express, Fastify, Nest, etc. all work similarly).
```dockerfile
# syntax=docker/dockerfile:1.7
# Dockerfile (the syntax directive above must be the first line)

FROM node:20-alpine AS deps
WORKDIR /app
# Copy only dependency manifests first (for caching)
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# If you have TypeScript or a build step:
# RUN npm run build
# Otherwise, you can skip this stage.

# --- Runtime image ---
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

# Create an unprivileged user
RUN addgroup -S app && adduser -S app -G app

# Copy only what's needed at runtime
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# If you build to dist/, copy dist only:
# COPY --from=build /app/dist ./dist
# COPY package.json ./

# Drop privileges
USER app

EXPOSE 3000

# Basic health check (adjust path/port)
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://127.0.0.1:3000/health || exit 1

CMD ["node", "server.js"]
```
Why this works well:
- `npm ci` is cached unless your lockfile changes.
- The runtime image doesn't need compilers or build tools.
- The container runs as a non-root user (`app`), which reduces blast radius.
Common pitfall: running `COPY . .` before installing dependencies breaks caching, because any code change invalidates the dependency layer.
4) Build it and run it (production-style)
From your project root:
```sh
# Build
docker build -t myapp:prod .

# Run
docker run --rm -p 3000:3000 myapp:prod
```
If your app doesn’t already have a health endpoint, add a tiny one (example Express):
```js
// server.js (Express example)
const express = require("express");
const app = express();

app.get("/health", (req, res) => res.status(200).json({ ok: true }));

app.listen(3000, () => console.log("Listening on :3000"));
```
5) Local development with docker compose (hot reload + DB)
Production images should be lean. Local development is different: you want fast feedback and live reload. The easiest approach is a compose file that mounts your source code into the container.
Example: Node app + Postgres.
```yaml
# compose.yaml
services:
  web:
    image: node:20-alpine
    working_dir: /app
    command: sh -c "npm ci && npm run dev"
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://postgres:postgres@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=app
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```
Run it:
```sh
docker compose up
```
Notes for juniors:
- `volumes: ./:/app` means your local files appear inside the container. Tools like `nodemon` will reload on changes.
- We use a named volume (`pgdata`) so your database persists across restarts.
- For Python/PHP stacks, the pattern is identical: mount code, run the dev server, link a DB container.
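One Node-specific refinement worth knowing (not required by the setup above): the bind mount `./:/app` also exposes your host `node_modules`, which may contain native binaries compiled for the wrong OS. An extra anonymous volume shadows that path so the container keeps its own copy:

```yaml
# compose.yaml (web service excerpt, sketch)
services:
  web:
    image: node:20-alpine
    working_dir: /app
    command: sh -c "npm ci && npm run dev"
    volumes:
      - ./:/app
      # Anonymous volume: the container's node_modules wins over the host's
      - /app/node_modules
```

If you skip this, deleting your host `node_modules` before starting compose achieves the same effect, just less conveniently.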
6) Optimize build times with BuildKit cache mounts (optional but great)
BuildKit can cache package manager downloads. This can dramatically speed up CI builds. Here’s a Node example using a cache mount for npm:
```dockerfile
# Dockerfile (deps stage)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Cache npm's download directory across builds
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```
BuildKit has been the default builder since Docker Engine 23.0. On older versions, enable it explicitly:

```sh
export DOCKER_BUILDKIT=1
```
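The same cache-mount trick works for other package managers. A sketch for a Python service (a `requirements.txt` layout is assumed; pip's default cache lives in `/root/.cache/pip`):

```dockerfile
# Dockerfile (Python deps stage, sketch)
FROM python:3.12-slim AS deps
WORKDIR /app
COPY requirements.txt ./
# Cache pip downloads across builds
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```

The cache mount exists only during `RUN`, so downloaded wheels never bloat the final image.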
7) Runtime hardening: easy wins
You can make containers safer without becoming a security expert:
- Run as non-root (we did that with `USER app`).
- Use a read-only filesystem (if your app doesn't need to write to disk).
- Drop Linux capabilities to reduce privilege.
- Set memory/CPU limits to avoid noisy-neighbor issues.
Example hardened runtime flags:
```sh
docker run --rm -p 3000:3000 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --memory=512m --cpus=1 \
  myapp:prod
```
If your app needs to write uploads, logs, or cache files, mount a specific writable directory instead of allowing writes everywhere.
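The same hardening translates to compose. Here's a sketch with a read-only root filesystem and a single writable named volume for uploads (the image name and `/app/uploads` path are assumptions; adjust to your app):

```yaml
# compose.yaml (hardened service excerpt, sketch)
services:
  web:
    image: myapp:prod
    read_only: true
    tmpfs:
      - /tmp
    cap_drop:
      - ALL
    volumes:
      # Only this directory is writable
      - uploads:/app/uploads

volumes:
  uploads:
```

Scoping writes to one named volume keeps the rest of the filesystem immutable and makes it obvious where persistent state lives.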
8) Keep images small (and predictable)
Practical checklist:
- Prefer `alpine` or `slim` base images where possible.
- Use multi-stage builds to avoid shipping compilers and dev dependencies.
- Pin major versions (example: `node:20-alpine`) so builds don't "mysteriously" change.
- Don't copy secrets into images. Pass them at runtime via env vars or a secrets manager.
You can inspect image size and layers:
```sh
docker image ls
docker history myapp:prod
```
9) A tiny Makefile for team-friendly commands
Small teams love consistent commands. A Makefile can wrap common Docker tasks:
```makefile
# Makefile (recipe lines must be indented with tabs)
APP_IMAGE=myapp:prod

.PHONY: build run dev down clean

build:
	docker build -t $(APP_IMAGE) .

run:
	docker run --rm -p 3000:3000 $(APP_IMAGE)

dev:
	docker compose up

down:
	docker compose down

clean:
	docker system prune -f
```
Now everyone can run `make dev` or `make build` without remembering flags.
Wrap-up: the “good defaults” you can copy today
- Use `.dockerignore` to keep builds fast and reduce risk.
- Structure your `Dockerfile` for caching: copy lockfiles first, install deps, then copy source.
- Use multi-stage builds to ship only runtime artifacts.
- Run as non-root, add a healthcheck, and consider a read-only filesystem + dropped caps.
- Use `docker compose` for local dev with mounted code and a real database.
These examples carry over directly to other stacks (Python + Gunicorn, Laravel + PHP-FPM + Nginx, or a React/Vite frontend): the caching, multi-stage, and hardening patterns stay the same, only the base images and commands change.