CI/CD Pipelines in Practice: Ship a Dockerized Web App with GitHub Actions + Tests + Deploy (Hands-On)

CI/CD (Continuous Integration / Continuous Delivery) is basically: “every change gets tested automatically, and if it passes, it can be deployed safely.” For junior/mid devs, the fastest way to make this real is to wire up one pipeline that:

  • Runs on every pull request (tests + lint)
  • Builds a Docker image on pushes to main
  • Publishes that image to a registry
  • Deploys to a server using docker compose

This article walks you through a practical pipeline with working configs you can copy-paste and adapt. We’ll use a simple Node.js app because it’s common in web projects, but the pipeline patterns work for any stack.

Project layout (minimal but realistic)

Here’s the tiny repo structure we’ll target:

.
├── src/
│   └── server.js
├── test/
│   └── health.test.js
├── package.json
├── package-lock.json
├── Dockerfile
├── docker-compose.yml
└── .github/
    └── workflows/
        └── ci-cd.yml

Step 1: A small web app + health endpoint

src/server.js (Express, one route, one health check):

const express = require("express");

const app = express();
const port = process.env.PORT || 3000;

app.get("/", (req, res) => {
  res.status(200).send("Hello from CI/CD!");
});

app.get("/health", (req, res) => {
  res.status(200).json({ ok: true });
});

if (require.main === module) {
  app.listen(port, () => {
    console.log(`Listening on ${port}`);
  });
}

module.exports = app;

package.json with scripts for CI:

{
  "name": "cicd-demo",
  "version": "1.0.0",
  "main": "src/server.js",
  "type": "commonjs",
  "scripts": {
    "start": "node src/server.js",
    "test": "node --test",
    "lint": "node -e \"console.log('lint ok')\""
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}

Basic test using Node’s built-in test runner (no Jest needed). test/health.test.js:

const test = require("node:test");
const assert = require("node:assert/strict");
const http = require("node:http");

const app = require("../src/server");

function listen(app) {
  return new Promise((resolve) => {
    const server = app.listen(0, () => resolve(server));
  });
}

function request(server, path) {
  const { port } = server.address();
  return new Promise((resolve, reject) => {
    const req = http.get({ hostname: "127.0.0.1", port, path }, (res) => {
      let data = "";
      res.on("data", (chunk) => (data += chunk));
      res.on("end", () => resolve({ status: res.statusCode, body: data }));
    });
    req.on("error", reject);
  });
}

test("GET /health returns ok", async () => {
  const server = await listen(app);
  try {
    const res = await request(server, "/health");
    assert.equal(res.status, 200);
    const json = JSON.parse(res.body);
    assert.equal(json.ok, true);
  } finally {
    server.close();
  }
});

Step 2: Dockerize it (production-friendly basics)

Dockerfile (small-ish, cache-friendly):

# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
# Copy only package files first for better caching
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY src ./src
EXPOSE 3000
CMD ["node", "src/server.js"]

docker-compose.yml for your server deployment:

services:
  web:
    image: ghcr.io/your-org-or-user/cicd-demo:latest
    restart: unless-stopped
    ports:
      - "80:3000"
    environment:
      - PORT=3000

Locally, you can run:

docker build -t cicd-demo:local .
docker run --rm -p 3000:3000 cicd-demo:local

Step 3: CI for PRs (tests first, always)

The first part of your pipeline should run on pull requests. If tests fail, you don’t want deploy steps to even be possible.

Create .github/workflows/ci-cd.yml:

name: CI/CD

on:
  pull_request:
  push:
    branches: ["main"]

permissions:
  contents: read
  packages: write

jobs:
  ci:
    name: CI (test + lint)
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Test
        run: npm test

This gives you a reliable gate: PRs must pass before merge (enforce it with branch protection rules in GitHub).
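Branch protection can also be scripted instead of clicked through the UI. A hedged sketch using GitHub's branch-protection REST endpoint via the gh CLI — `your-org-or-user/cicd-demo` and the file name `protection.json` are placeholders, and the `contexts` entry must match the job's display name exactly:

```shell
# Build the protection payload: require the "CI (test + lint)" check
# (and up-to-date branches) before anything can merge to main.
cat > protection.json <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["CI (test + lint)"] },
  "enforce_admins": false,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF

# Then apply it (requires gh authenticated with admin rights on the repo):
#   gh api -X PUT repos/your-org-or-user/cicd-demo/branches/main/protection --input protection.json
```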

Step 4: Build + push Docker image to GitHub Container Registry

Now add a second job that only runs on push to main. It will:

  • Log in to GHCR
  • Build the Docker image
  • Tag it with latest and the commit SHA
  • Push it

Extend the same workflow file:

  build_and_push:
    name: Build + Push Image
    runs-on: ubuntu-latest
    needs: [ci]
    if: github.event_name == 'push'
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}

Important: The image name must match how you reference it in docker-compose.yml. Many teams use ghcr.io/<org>/<repo>:<tag>. For simplicity, keep it consistent everywhere.
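One registry gotcha: Docker image names must be all lowercase, but ${{ github.repository }} keeps the repository's original casing. If your org or repo name has capital letters, lowercase it first. A minimal sketch of the conversion (the GITHUB_REPOSITORY value shown is hypothetical — GitHub Actions sets the real one for you):

```shell
GITHUB_REPOSITORY="Your-Org/Cicd-Demo"   # hypothetical; provided by Actions at runtime

# Lowercase the owner/repo pair before using it as an image name.
IMAGE_NAME="$(echo "$GITHUB_REPOSITORY" | tr '[:upper:]' '[:lower:]')"
echo "ghcr.io/${IMAGE_NAME}"   # → ghcr.io/your-org/cicd-demo
```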

Step 5: Deploy to your server via SSH + docker compose

There are many deployment options (Kubernetes, ECS, Fly.io, Render, etc.). A very common “first real” setup is a single VM (Ubuntu) running Docker + Docker Compose. The pipeline will connect over SSH and run:

  • docker login to GHCR
  • docker compose pull to fetch the newest image
  • docker compose up -d to apply the update

On your server (once):

  • Install Docker + Compose
  • Create a folder, e.g. /opt/cicd-demo
  • Put your docker-compose.yml there
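The one-time server prep above can be sketched roughly as follows — this assumes an Ubuntu box with sudo access, uses Docker's convenience install script (which includes the compose plugin), and `your-server` is a placeholder for your host:

```shell
# One-time setup on the deploy box (run as a sudo-capable user).
curl -fsSL https://get.docker.com | sudo sh   # installs Docker Engine + compose plugin
sudo usermod -aG docker "$USER"               # let the deploy user run docker without sudo
sudo mkdir -p /opt/cicd-demo

# Copy docker-compose.yml into place, e.g. from your workstation:
#   scp docker-compose.yml ubuntu@your-server:/opt/cicd-demo/
```

Log out and back in after the usermod so the group change takes effect.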

Then add GitHub repository secrets:

  • DEPLOY_HOST (server IP or hostname)
  • DEPLOY_USER (e.g. ubuntu)
  • DEPLOY_SSH_KEY (private key that can SSH to the server)
  • GHCR_READ_TOKEN (a token the server uses to pull images from GHCR — typically a personal access token with the read:packages scope, since GITHUB_TOKEN is not usable outside the workflow run)

Add a deploy job:

  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: [build_and_push]
    if: github.event_name == 'push'
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            set -euo pipefail
            cd /opt/cicd-demo
            echo "${{ secrets.GHCR_READ_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
            docker compose pull
            docker compose up -d
            docker image prune -f

That’s it: merge to main → tests pass → image is built and pushed → server pulls and restarts the container.

Make it safer: common CI/CD guardrails

Here are practical improvements that prevent “oops” moments:

  • Deploy only from main: already done via workflow trigger.
  • Require PR checks: enforce the ci job in branch protection.
  • Pin action versions: use major versions at minimum (we did). For high security, pin by commit SHA.
  • Least-privilege tokens: use GITHUB_TOKEN where possible; scope PATs narrowly.
  • Immutable tags: in addition to latest, keep SHA tags so you can roll back fast.

Rolling back is just a matter of changing the image tag in docker-compose.yml to a previous SHA and running docker compose up -d.
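The tag swap itself is a one-line sed. A minimal sketch demonstrated on a throwaway copy of the compose file — the SHA value is hypothetical; substitute one of the SHA tags your pipeline actually pushed:

```shell
# Demonstrate the rollback edit on a scratch copy of docker-compose.yml.
workdir="$(mktemp -d)"
cat > "$workdir/docker-compose.yml" <<'EOF'
services:
  web:
    image: ghcr.io/your-org-or-user/cicd-demo:latest
EOF

GOOD_SHA="0123abcd"   # hypothetical commit SHA of a known-good image
sed -i "s|cicd-demo:.*|cicd-demo:${GOOD_SHA}|" "$workdir/docker-compose.yml"
grep image "$workdir/docker-compose.yml"

# On the server you would run the same sed against
# /opt/cicd-demo/docker-compose.yml, then: docker compose up -d
```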

Make it debuggable: add a smoke test after deploy

A very simple but powerful move: after deployment, hit /health and fail the job if it’s not OK. Add this to the end of the deploy script (replace with your domain/IP):

curl -fsS http://localhost/health | grep -q '"ok":true'
echo "Smoke test passed"

If you have HTTPS and a real domain, use that instead of localhost.
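Right after a restart the container may need a second or two before it answers, so a blunt retry loop makes the smoke test less flaky. A sketch — wait_for is a made-up helper, not part of any tool:

```shell
# Retry a command up to N times, pausing briefly between attempts.
# Returns 0 on the first success, 1 if every attempt fails.
wait_for() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# In the deploy script you would use it like:
#   wait_for 10 curl -fsS http://localhost/health
```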

Common pitfalls (and how to avoid them)

  • “It works locally, fails in CI”: ensure your tests don’t depend on local environment variables or files you didn’t commit.
  • Docker builds are slow: copy package.json first, use npm ci, and avoid invalidating cache with unnecessary COPY . . early.
  • Deploy job can’t pull from registry: make sure the server can authenticate to GHCR and that the package visibility/permissions are correct.
  • Secrets leakage: never echo secrets; keep “set -x” off; store secrets in GitHub Secrets only.
  • Downtime on restart: for more advanced setups, run two containers behind a reverse proxy, or use a platform that supports rolling deploys.

What you have now (and what to try next)

You now have a complete beginner-friendly CI/CD loop:

  • PR → automatic tests
  • Merge to main → build Docker image → push to registry
  • Deploy → pull image → restart container → optional smoke test

Next steps that level you up without changing the core approach:

  • Add a real linter (ESLint) and fail CI on lint errors
  • Run integration tests using Docker Compose (e.g., app + database)
  • Add release tags (e.g., v1.2.3) and deploy only on releases
  • Send a deployment notification (Slack/Discord) on success/failure

Once you’ve built this once, you can reuse the pattern across most web projects—regardless of whether the code is Node, PHP, Python, or something else. The key is keeping your pipeline boring: small steps, clear gates, and repeatable deploy commands.

