Docker Best Practices You’ll Actually Use: Faster Builds, Smaller Images, Safer Containers (Hands-On)
Docker can make development and deployment smoother—but only if your images build quickly, run reliably, and don’t ship accidental security risks. This guide is a hands-on checklist for junior/mid developers: how to write better Dockerfiles, speed up builds, slim images, and run containers more safely. We’ll use a simple Node.js API as the example, but the practices apply to most stacks.
1) Start With a Good Base Image (and Pin It)
Prefer official images and slim variants when possible (e.g., -slim for Debian-based images, or alpine when your dependencies allow it). Pin to a specific major/minor version to avoid surprise breakages.
- Good: `node:20-slim`
- Better (more stable): `node:20.11-slim`
“Pinning” helps reproducibility: the same source builds the same image next week.
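If you want the strongest guarantee, you can pin by digest instead of by tag, which locks the exact image bytes. A sketch — the digest below is a placeholder, not a real value; look up the actual one for your image with `docker images --digests node`:

```dockerfile
# Pinning by digest locks the exact image contents, not just the tag name.
# The digest here is a placeholder; substitute the real one from your registry.
FROM node:20.11-slim@sha256:<digest-from-your-registry>
```

The trade-off: digest pins never pick up patched rebuilds of the tag, so pair them with a process (or bot) that updates the digest deliberately.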
2) Use a .dockerignore to Keep the Build Context Small
Your build context is everything Docker sends to the daemon during docker build. If you accidentally send node_modules, logs, test artifacts, or .git, builds become slow and cache invalidation gets worse.
```
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
dist
coverage
.env
.DS_Store
```
This single file often speeds up builds immediately.
3) Layer Caching: Copy Dependency Manifests First
Docker caches layers. You want dependency installation to be cached unless dependency files change. The pattern is:
- Copy `package.json` / `package-lock.json` (or equivalent) first
- Install dependencies
- Then copy the rest of the source
Here’s a solid baseline Dockerfile for a Node.js API:
```dockerfile
# Dockerfile
FROM node:20.11-slim
WORKDIR /app

# 1) Copy only dependency manifests first (better caching)
COPY package.json package-lock.json ./

# 2) Install production dependencies
RUN npm ci --omit=dev

# 3) Copy the rest of the application
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```
If you change a single .js file, Docker won’t reinstall dependencies—builds stay fast.
4) Prefer Multi-Stage Builds (Build Tools Don’t Belong in Production)
Many apps need build tooling (TypeScript, bundlers, compilers). Multi-stage builds let you compile in one stage and ship only the final output.
Example: TypeScript Node API that builds to dist/:
```dockerfile
# Dockerfile (multi-stage)

# --- Build stage ---
FROM node:20.11-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # outputs dist/

# --- Runtime stage ---
FROM node:20.11-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production

# Copy only what's needed to run
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist

# Optional: if you have config/templates/static assets:
# COPY --from=build /app/public ./public

EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Result: smaller image, fewer packages, and less attack surface.
5) Run as a Non-Root User
By default, many images run as root. If an attacker exploits your app, root inside a container is still bad news. Create a user and run the process under it.
```dockerfile
# Dockerfile snippet (add to runtime stage)
RUN useradd -m -u 10001 appuser
USER appuser
```
If your app needs to be reachable on port 80 (a privileged port that non-root users can’t bind), prefer a reverse proxy (like Nginx) in front of it, or map container port 3000 to host port 80 in your platform config.
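The port-mapping option can be sketched in Compose like this (service and image names are illustrative):

```yaml
services:
  api:
    build: .
    ports:
      - "80:3000"   # host port 80 -> container port 3000; the app stays non-root
```

The app keeps listening on an unprivileged port inside the container; only the host-side mapping changes.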
6) Add a Health Check (So Orchestrators Can Help You)
Kubernetes, Docker Compose, and other systems can restart unhealthy containers—but only if you provide a signal. A simple HTTP health endpoint is enough.
Example health endpoint in Node:
```javascript
// server.js
const http = require("http");

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
    return;
  }
  res.writeHead(200);
  res.end("Hello Docker");
});

server.listen(3000, () => console.log("Listening on :3000"));
```
Now add a Docker healthcheck:
```dockerfile
# Dockerfile snippet
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD node -e "fetch('http://localhost:3000/health').then(r=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
```
Tip: keep health checks fast and dependency-free—don’t call external services inside a health check.
7) Use Environment Variables (But Don’t Bake Secrets Into Images)
Configuration should be injected at runtime. Never copy .env into the image for production. A common pattern:
- Development: use `.env` locally and docker compose `env_file`
- Production: use your platform’s secrets manager / environment config
Example docker-compose.yml for local development:
```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      PORT: "3000"
    # For local dev only:
    # env_file:
    #   - .env
```
Don’t commit secret files. Don’t put secrets in ARG or ENV in the Dockerfile for production images.
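Inside the app, reading the injected configuration can be a small helper like the sketch below — `loadConfig` and its defaults are illustrative, not a required pattern:

```javascript
// config.js — read runtime configuration from environment variables with
// safe defaults. Variable names (PORT, NODE_ENV) match the compose file above.
function loadConfig(env = process.env) {
  return {
    // Number(undefined) is NaN, so a missing PORT falls back to 3000
    port: Number(env.PORT) || 3000,
    // default to development; production sets NODE_ENV at runtime, not in the image
    nodeEnv: env.NODE_ENV || "development",
  };
}

module.exports = { loadConfig };
```

Because nothing is hard-coded, the same image runs unchanged in every environment.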
8) Make Builds Even Faster with BuildKit Cache Mounts (Optional but Powerful)
If your environment supports BuildKit (most modern Docker installs do), you can speed up dependency installation by caching package manager directories across builds.
Example for npm:
```dockerfile
# syntax=docker/dockerfile:1.6
FROM node:20.11-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
```
This reduces repeated downloads between builds, especially in CI.
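The same pattern works for other package managers — you just point the cache mount at that manager’s cache directory. A hedged sketch for Python’s pip (assuming the default cache location for the root user):

```dockerfile
# Cache pip downloads across builds; pip's default cache dir for root
# is /root/.cache/pip
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt ./
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```

The key detail is that the cache mount exists only during `RUN`, so cached packages never bloat the final image.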
9) Keep Containers “One Process, One Responsibility”
A common beginner mistake is stuffing everything into one container: app + database + cron + queue worker. Keep responsibilities separate. In docker compose, run distinct services.
```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```
This makes scaling, restarts, and debugging much easier.
10) A Practical “Production-ish” Dockerfile Template
Here’s a compact template combining the most useful practices (multi-stage, non-root, cache-friendly ordering):
```dockerfile
# Dockerfile

# --- Build stage ---
FROM node:20.11-slim AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Runtime stage ---
FROM node:20.11-slim AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
RUN useradd -m -u 10001 appuser
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD node -e "fetch('http://localhost:3000/health').then(r=>process.exit(r.ok?0:1)).catch(()=>process.exit(1))"
CMD ["node", "dist/server.js"]
```
Quick Verification Commands
- Build: `docker build -t my-api:local .`
- Run: `docker run --rm -p 3000:3000 my-api:local`
- Health: `curl -i http://localhost:3000/health`
- Inspect size: `docker image ls my-api:local`
Common Pitfalls (and How to Avoid Them)
- “Why is my rebuild so slow?” You copied all source files before installing deps. Copy dependency manifests first and use `.dockerignore`.
- “Why is my image huge?” You shipped build tools. Use multi-stage builds and only copy runtime artifacts.
- “It works locally but fails in prod.” You relied on a local file (like `.env`) that isn’t provided in production. Use environment variables and platform secrets.
- “My container is running as root.” Add a non-root user and switch with `USER`.
Wrap-Up
Great Docker usage is mostly about discipline: small build contexts, cache-friendly layering, multi-stage builds, and safe runtime defaults. If you adopt just three changes—.dockerignore, copying dependency files first, and multi-stage builds—you’ll usually see immediate wins in build speed and image size, and you’ll ship something easier to run in production.