Docker Best Practices for Web Developers: Smaller Images, Faster Builds, Safer Containers (Hands-On)

Docker can make local development and deployments far more predictable—until images become huge, builds take forever, and production containers run as root with secrets baked in. This hands-on guide shows a practical set of Docker best practices you can apply today, with working examples you can copy into your projects.

1) Start with a .dockerignore (the easiest speed win)

Docker sends your build “context” (the folder contents) to the Docker daemon. If you forget to exclude junk (like node_modules), your builds will be slower and your images may accidentally include files you never intended.

Create a .dockerignore at your project root:

```
# .dockerignore
node_modules
dist
build
.cache
.git
.gitignore
Dockerfile
docker-compose.yml
.env
*.log
coverage
.vscode
.idea
```

Keep it tight: include only what the image truly needs.
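To build intuition for what gets excluded, here is a rough Python sketch of ignore-pattern matching. Real Docker uses Go's `filepath.Match` plus extra rules (`!` negation, `**`), so treat this as an approximation for sanity-checking your patterns, not an exact reimplementation:

```python
# Approximate .dockerignore filtering with fnmatch-style patterns.
from fnmatch import fnmatch

IGNORE_PATTERNS = ["node_modules", "dist", ".git", "*.log", ".env"]

def is_ignored(path: str) -> bool:
    # A path is excluded if any of its components matches a pattern.
    return any(
        fnmatch(part, pattern)
        for part in path.split("/")
        for pattern in IGNORE_PATTERNS
    )

files = ["src/index.js", "node_modules/lodash/lodash.js", "npm-debug.log"]
kept = [f for f in files if not is_ignored(f)]
print(kept)  # ['src/index.js']
```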

2) Use multi-stage builds (ship less, build faster)

Multi-stage builds let you compile/build in one stage (with heavy tooling) and run in another stage (minimal runtime). This drastically reduces final image size and attack surface.

Example: a Node.js app built with a bundler (Vite/Next/React static build). The pattern works even if your framework differs.

```dockerfile
# Dockerfile
# --- Stage 1: build ---
FROM node:20-alpine AS build
WORKDIR /app

# Copy only package files first for better layer caching
COPY package*.json ./
RUN npm ci

# Then copy the rest of the code
COPY . .
RUN npm run build

# --- Stage 2: run ---
FROM nginx:1.27-alpine

# Copy built assets into nginx
COPY --from=build /app/dist /usr/share/nginx/html

# Optional: custom nginx config for SPA routing
# COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Why this helps: your runtime image doesn’t include npm, build tools, or source code—only the compiled output.

3) Cache dependencies correctly (stop re-installing every build)

Docker layer caching is simple: if a layer’s inputs don’t change, Docker reuses it. Put “slow and stable” steps (like installing dependencies) early and copy the rest later.

  • Copy package.json/package-lock.json first
  • Run npm ci
  • Copy application source
  • Run build

You saw this in the previous Dockerfile. For Python apps, it looks like this:

```dockerfile
# Dockerfile (Python example)
FROM python:3.12-slim
WORKDIR /app

# System deps (if needed)
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy source after deps are installed
COPY . .

CMD ["python", "app.py"]
```

4) Prefer slim/alpine images… but know when to avoid Alpine

Smaller base images reduce download time and vulnerabilities. Common choices:

  • node:20-alpine (very small, good default for many Node builds)
  • python:3.12-slim (great balance for Python apps)
  • debian:bookworm-slim for custom runtimes

Alpine uses musl instead of glibc. Most apps are fine, but some native dependencies can be painful. If you see weird build/runtime issues with compiled packages, switching to -slim is often the fastest fix.

5) Don’t run as root (make “least privilege” your default)

Running as root inside the container increases the blast radius if something goes wrong. Create a non-root user and switch to it.

```dockerfile
# Dockerfile (Node runtime example)
FROM node:20-slim
WORKDIR /app

# Create a non-root user
RUN useradd -m appuser

COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Fix ownership (only if your app writes to disk)
RUN chown -R appuser:appuser /app

USER appuser
EXPOSE 3000
CMD ["node", "server.js"]
```

Tip: if you can make the container filesystem read-only (see section 9), you’ll catch “oops, we write to disk” issues early.

6) Use environment variables for configuration (and keep secrets out of images)

Never bake secrets into your Dockerfile (API keys, DB passwords). Treat images as public artifacts. Use environment variables at runtime.

Example docker-compose.yml snippet:

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
    env_file:
      - .env
```

What not to do:

```dockerfile
# BAD: secrets baked into the image layer history
ENV DATABASE_URL="postgres://user:pass@db:5432/app"
```

Instead, inject secrets via CI/CD secret stores, container orchestration secrets, or environment variables provided at deploy time.
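On the application side, read config from the environment at startup and fail fast when a required secret is missing. A minimal sketch in Python (the variable names mirror the compose snippet above; `load_config` is an illustrative helper, not a standard API):

```python
import os

def load_config(env=os.environ):
    """Read settings from the environment; secrets are injected at
    deploy time, never baked into the image."""
    db_url = env.get("DATABASE_URL")
    if db_url is None:
        # Failing fast beats a half-configured app limping along.
        raise RuntimeError("DATABASE_URL is not set; refusing to start")
    return {
        "database_url": db_url,
        # Non-secret settings can have safe defaults.
        "env": env.get("NODE_ENV", "development"),
    }
```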

7) Add a healthcheck (so Docker knows when “up” is actually up)

A container can be “running” while the app inside is broken. Healthchecks enable better restarts and more reliable orchestration.

```dockerfile
# Dockerfile
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```

Your app should expose a lightweight endpoint like /health that returns 200 quickly.
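The endpoint itself can be trivial. A standard-library Python sketch (in a real app you would add this route to your existing framework instead):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Keep it fast: no DB calls, no heavy work.
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep healthcheck noise out of the logs

# To run standalone:
# HTTPServer(("0.0.0.0", 3000), HealthHandler).serve_forever()
```

If the check does touch dependencies (DB, cache), keep it cheap and cached, or a slow dependency will make Docker mark a healthy app as unhealthy.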

8) Pin versions (reproducible builds beat “works today”)

Unpinned dependencies are a common reason builds break unexpectedly.

  • Pin your base image: node:20.11-slim (or use a digest in stricter environments)
  • Use lockfiles: package-lock.json, poetry.lock, composer.lock
  • Prefer npm ci over npm install in CI

This makes your builds predictable across machines and time.
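In a Dockerfile, the two pinning levels look like this (the digest below is a placeholder, not a real one; resolve your own with `docker images --digests`):

```dockerfile
# Tag pinning: predictable minor/patch version
FROM node:20.11-slim

# Digest pinning (strictest): immune even to a tag being re-pushed
# FROM node:20.11-slim@sha256:<digest>
```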

9) Harden runtime: read-only filesystem + drop privileges

Even without Kubernetes, you can add basic hardening with Docker flags. Here’s a practical docker run example:

```bash
docker run --rm -p 3000:3000 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  myapp:latest
```

What this does:

  • --read-only prevents unexpected writes
  • --tmpfs /tmp provides a writable temp folder
  • --cap-drop=ALL removes Linux capabilities you probably don’t need
  • no-new-privileges blocks privilege escalation tricks

If your app needs to write uploads/logs, mount a dedicated volume for only that path.

10) Keep logs going to stdout/stderr (don’t write log files in containers)

In containerized environments, log aggregation typically reads stdout/stderr. Logging to files leads to lost logs or disk bloat.

  • Node: log with console.log / structured JSON logs
  • Python: configure logging handler to stdout
  • Nginx: default is already stdout/stderr in many container images

If you must keep files, mount a volume and rotate logs intentionally.
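For the Python case, routing structured logs to stdout is a few lines. A minimal sketch (the `JsonFormatter` class and its field names are illustrative choices, not a standard API):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, easy for aggregators to parse."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Send everything to stdout, not a file inside the container.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("app").info("server started")
```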

11) A complete “good default” Compose setup (app + database)

This example shows a practical local setup with a web app and Postgres, including a named volume and safe defaults.

```yaml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://app:app@db:5432/app
      - NODE_ENV=development
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=app
      - POSTGRES_DB=app
    volumes:
      - dbdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d app"]
      interval: 5s
      timeout: 3s
      retries: 10

volumes:
  dbdata:
```

Notice:

  • Database data is persisted via dbdata
  • Healthchecks avoid “app starts before DB is ready” flakiness
  • App config is via env vars (easy to switch per environment)

12) Quick checklist you can apply to existing projects

  • Add .dockerignore to reduce build context
  • Use multi-stage builds for production
  • Copy lockfiles first to maximize caching
  • Run as a non-root user
  • Don’t bake secrets into images
  • Pin versions and rely on lockfiles
  • Add a healthcheck
  • Prefer stdout/stderr logging
  • Harden runtime with read-only FS and dropped caps when possible

If you implement just the first three items, you’ll feel the improvement immediately. Add the security hardening steps as you mature your deployment pipeline.
