Docker Best Practices in Practice: Smaller Images, Faster Builds, Safer Containers (Hands-On)
If you’ve ever waited minutes for a Docker build, pushed a massive image to a registry, or debugged “works on my machine” issues, this article is for you. We’ll build a practical checklist you can apply today: multi-stage builds, smarter caching with BuildKit, secure defaults (non-root), and clean runtime images.
We’ll use a simple Node.js API as the example, but the patterns apply to Python, PHP, Go—anything.
1) Start with a baseline: a “naive” Dockerfile (and why it hurts)
A common first attempt looks like this:
```dockerfile
# ❌ Naive Dockerfile (works, but slow + big + less secure)
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
- Big image: includes build tools, dev deps, and all your source.
- Slow rebuilds: any file change invalidates the cache for `npm install`.
- Security: runs as root by default.
Let’s fix those with best practices that are easy to copy/paste.
2) Use multi-stage builds to ship only what you need
Multi-stage builds let you compile/build in one stage and copy only the final artifacts into a slim runtime stage.
```dockerfile
# syntax=docker/dockerfile:1.7
# ✅ Multi-stage Dockerfile (production-ready pattern)
# Note: the syntax directive must be the very first line, before any comment.

FROM node:20-alpine AS deps
WORKDIR /app
# Copy only dependency manifests first for better caching
COPY package.json package-lock.json ./
# Install dependencies (cache-friendly)
RUN npm ci

FROM node:20-alpine AS build
WORKDIR /app
# Reuse installed node_modules from the deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build your app (e.g., TypeScript, bundling, etc.)
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
# Copy only what the runtime needs
COPY --from=build /app/dist ./dist
COPY package.json package-lock.json ./
# Install only production deps
RUN npm ci --omit=dev && npm cache clean --force
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
- Smaller images: the runtime stage doesn't include your entire source tree or build tooling.
- Cleaner runtime: only `dist` + prod dependencies.
- More reliable builds: `npm ci` installs exactly what the lockfile says.
Tip: for Python, the same idea applies—compile wheels in a builder stage, copy wheels + app code into a minimal runtime base.
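As a sketch of that Python variant (file names like `requirements.txt` and the `app/` package are illustrative assumptions, not part of the example project above):

```dockerfile
# Hypothetical Python equivalent of the multi-stage pattern
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt ./
# Pre-build wheels so the runtime stage needs no compilers or headers
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

FROM python:3.12-slim AS runtime
WORKDIR /app
# Install from the pre-built wheels only, then drop them
COPY --from=build /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY app/ ./app/
CMD ["python", "-m", "app"]
```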
3) Make builds fast with BuildKit caching (huge win)
Docker BuildKit can cache package manager downloads between builds. With Node, you can cache npm’s directory so repeated installs are much faster.
```dockerfile
# syntax=docker/dockerfile:1.7
# ✅ Add a BuildKit cache mount to the npm install step
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```
BuildKit is the default builder in Docker Engine 23.0+ and current Docker Desktop; on older versions, enable it explicitly before building:
```bash
# One-time for a shell session (only needed on older Docker versions):
export DOCKER_BUILDKIT=1

# Build:
docker build -t my-api:latest .
```
This doesn’t change the final image—just speeds up builds by caching downloads.
4) Don’t leak junk into your image: add a .dockerignore
A missing .dockerignore is a silent performance killer. Docker sends your build context to the daemon; if that includes node_modules, logs, and test artifacts, builds slow down and caches invalidate more often.
```
# ✅ .dockerignore
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.gitignore
dist
coverage
*.log
.env
.env.*
```
- Why ignore `dist`? If you build inside Docker (recommended), you don't want local build output affecting the context and cache.
- Why ignore `.env`? Keep secrets out of images and out of the build context.
5) Run as a non-root user (secure by default)
Many official images run as root unless you change it. Running as non-root is a simple hardening step: if your app is compromised, the attacker doesn't start with root inside the container, which limits how bad a container escape or misconfiguration can get.
```dockerfile
FROM node:20-alpine AS runtime
WORKDIR /app
# Create a non-root user
RUN addgroup -S app && adduser -S app -G app
ENV NODE_ENV=production
COPY --chown=app:app --from=build /app/dist ./dist
COPY --chown=app:app package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev && npm cache clean --force
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Two practical notes:
- `--chown` avoids permission problems at runtime.
- If you write files, write them under `/tmp` or a volume with correct permissions.
6) Handle configuration correctly: environment variables, not baked-in secrets
Junior mistake: baking environment-specific config into the image. Better: ship the same image everywhere and configure via env vars at runtime.
Example: a minimal app reading PORT and DATABASE_URL:
```js
// server.js (example)
const http = require("http");

const port = process.env.PORT || 3000;
const dbUrl = process.env.DATABASE_URL || "not-set";

http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true, port, dbUrlSet: dbUrl !== "not-set" }));
}).listen(port, () => console.log(`Listening on ${port}`));
```
Run it with environment variables:
```bash
docker run --rm -p 3000:3000 \
  -e PORT=3000 \
  -e DATABASE_URL="postgres://user:pass@db:5432/app" \
  my-api:latest
```
Rule: don’t COPY your .env into the image. Use runtime configuration via your orchestrator (Compose, Kubernetes, ECS, etc.).
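One step further (a sketch, not part of the example app above; `requireEnv` is a hypothetical helper): validate required variables at startup so a misconfigured container fails fast with a clear error instead of half-working:

```javascript
// Fail fast at startup when required configuration is missing.
function requireEnv(name, fallback) {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const port = requireEnv("PORT", "3000"); // has a safe default
// const dbUrl = requireEnv("DATABASE_URL"); // no default: must be set, or startup throws
console.log(`Configured port: ${port}`);
```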
7) Use Docker Compose for local dev without polluting production images
Local development often needs hot reload and volume mounts. You can do that with Compose, while keeping your production image clean.
```yaml
# docker-compose.yml (dev-friendly)
services:
  api:
    build:
      context: .
      target: deps
    working_dir: /app
    command: npm run dev
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - DATABASE_URL=postgres://user:pass@db:5432/app
    volumes:
      - ./:/app
      - api_node_modules:/app/node_modules
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=app
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
  api_node_modules:
```
Key trick: mount the project folder, but keep node_modules in a named volume so you don’t overwrite container dependencies with your host’s.
8) Add a health check (small effort, big ops benefit)
Health checks help orchestrators know when your app is actually ready. If your app has a /health endpoint, add this:
```dockerfile
HEALTHCHECK --interval=10s --timeout=2s --retries=3 \
  CMD wget -qO- http://127.0.0.1:3000/health || exit 1
```
For Alpine images, wget is often available; otherwise you can install curl (but don’t add tools you don’t need).
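If you'd rather not depend on `wget` or `curl` at all, a common alternative (a sketch; it assumes the `/health` endpoint and port 3000 from the example above) is to reuse the Node binary already in the image:

```dockerfile
# Alternative: use Node itself, so no extra HTTP client is needed in the image
HEALTHCHECK --interval=10s --timeout=2s --retries=3 \
  CMD node -e "require('http').get('http://127.0.0.1:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
```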
9) A practical “final checklist” you can reuse
- Multi-stage builds: keep runtime images minimal.
- Cache dependencies: copy manifests first; use BuildKit cache mounts where possible.
- `.dockerignore`: exclude `.git`, `node_modules`, logs, env files, local builds.
- Non-root user: add a user and use `USER` in the runtime stage.
- Runtime config: environment variables; no secrets in the image.
- Dev vs prod: Compose volumes for dev; slim image for prod.
- Health checks: give your platform a way to detect readiness.
10) Full working example (copy/paste)
Here’s a complete production-oriented Dockerfile with the improvements above:
```dockerfile
# syntax=docker/dockerfile:1.7
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci

FROM node:20-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -S app && adduser -S app -G app
COPY --chown=app:app --from=build /app/dist ./dist
COPY --chown=app:app package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev && npm cache clean --force
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Build and run:
```bash
export DOCKER_BUILDKIT=1
docker build -t my-api:latest .
docker run --rm -p 3000:3000 -e PORT=3000 my-api:latest
```
Once you apply these patterns, you’ll feel the difference immediately: faster rebuilds, smaller pushes, fewer weird environment bugs, and a more secure default posture—without becoming a Docker expert.