CI/CD Pipelines in Practice: Build, Test, Scan, and Deploy a Web App with GitHub Actions + Docker
If you’ve ever merged a “small change” and then spent an hour rolling back production, you already understand why CI/CD matters. A practical pipeline does four things consistently:
- Build the same artifact you’ll run in prod (often a Docker image).
- Test it (unit + integration).
- Check quality/security (lint, vulnerabilities).
- Deploy it safely (without leaking secrets or breaking users).
This article shows a hands-on, junior-friendly CI/CD setup using GitHub Actions that builds a Docker image, runs tests, performs a simple vulnerability scan, publishes the image, and then deploys to a server via SSH. The examples are minimal but real—and you can paste them into a repo and iterate.
What we’re building
We’ll assume a simple web API (Node.js + Express) packaged as a Docker image. The pipeline will:
- Run on every push + pull request
- Install dependencies and run tests
- Build a Docker image (with cache)
- Scan the image for known vulnerabilities
- Push the image to GitHub Container Registry (GHCR)
- Deploy on `main` by pulling the image on a VPS and restarting a container
If you’re using Python, PHP, or another stack, the pipeline structure stays the same—you’ll just swap the test command.
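For instance, a Python project would swap only the setup and test steps in the CI workflow; the version, requirements file, and pytest command here are illustrative assumptions, not part of this repo:

```yaml
      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install deps
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest
```

Everything else (build, scan, push, deploy) stays identical, because the pipeline operates on the Docker image, not the language runtime.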
Repo layout
```
.
├── src/
│   └── server.js
├── test/
│   └── health.test.js
├── package.json
├── package-lock.json
├── Dockerfile
└── .github/
    └── workflows/
        └── ci-cd.yml
```
Step 1: Minimal app + tests
src/server.js:
```js
const express = require("express");

const app = express();
app.use(express.json());

app.get("/health", (req, res) => {
  res.json({ ok: true, ts: Date.now() });
});

const port = process.env.PORT || 3000;

// Export app for tests; only listen when run directly
if (require.main === module) {
  app.listen(port, () => console.log(`Listening on ${port}`));
}

module.exports = app;
```
package.json (using Jest + supertest):
```json
{
  "name": "ci-cd-demo",
  "version": "1.0.0",
  "main": "src/server.js",
  "scripts": {
    "start": "node src/server.js",
    "test": "jest --runInBand"
  },
  "dependencies": {
    "express": "^4.19.2"
  },
  "devDependencies": {
    "jest": "^29.7.0",
    "supertest": "^6.3.4"
  }
}
```
test/health.test.js:
```js
const request = require("supertest");
const app = require("../src/server");

describe("GET /health", () => {
  it("returns ok=true", async () => {
    const res = await request(app).get("/health");
    expect(res.statusCode).toBe(200);
    expect(res.body.ok).toBe(true);
    expect(typeof res.body.ts).toBe("number");
  });
});
```
Step 2: Dockerfile that matches production
This Dockerfile uses a small base image and installs dependencies cleanly. Keep it predictable: CI should build the same way your laptop does.
```dockerfile
# Dockerfile
FROM node:20-alpine

WORKDIR /app

# Install deps first (better cache)
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the source
COPY src ./src

ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "src/server.js"]
```
Note: We’re using `npm ci --omit=dev` to keep the production image small. Our CI tests will run outside the container (faster iteration), but you can also run tests inside a “test stage” if you prefer.
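If you do want tests inside the image build, a multi-stage sketch might look like this (the `test` stage is an assumption added here, not part of the Dockerfile above; the final stage is unchanged):

```dockerfile
# Test stage: install dev dependencies and run the suite
FROM node:20-alpine AS test
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src ./src
COPY test ./test
RUN npm test

# Production stage: no dev dependencies
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY src ./src
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "src/server.js"]
```

One caveat: with BuildKit, stages the final image doesn’t reference may be skipped, so build the test stage explicitly with `docker build --target test .` if you rely on it as a gate.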
Step 3: CI workflow (tests + build)
Create .github/workflows/ci-cd.yml. This workflow runs on pull requests and pushes. It tests, builds, and (only on main) publishes and deploys.
```yaml
name: CI/CD

on:
  push:
    branches: ["**"]
  pull_request:

permissions:
  contents: read
  packages: write

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install deps
        run: npm ci
      - name: Run tests
        run: npm test

  build-and-push:
    needs: [test]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}/ci-cd-demo:latest
            ghcr.io/${{ github.repository }}/ci-cd-demo:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
- `needs: [test]` ensures we only build if tests pass.
- `if: github.ref == 'refs/heads/main'` ensures we only publish images from `main`.
- `cache-from`/`cache-to` speed up builds across runs.
Step 4: Add a basic vulnerability scan
A production pipeline should catch known CVEs before deployment. One common tool is Trivy. We’ll scan the image after building it.
Add this step after “Build and push image” (or scan before pushing—both are fine; scanning after push is simplest here):
```yaml
      - name: Scan image for vulnerabilities (Trivy)
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ghcr.io/${{ github.repository }}/ci-cd-demo:${{ github.sha }}
          format: table
          exit-code: "1"
          ignore-unfixed: true
          vuln-type: "os,library"
          severity: "CRITICAL,HIGH"
```
Why this configuration? For junior/mid teams, failing on `CRITICAL,HIGH` is a reasonable starting point. You can loosen it initially (`exit-code: "0"`) and tighten later once you’ve fixed baseline issues.
Step 5: Deploy via SSH (simple and practical)
Now we’ll deploy to a VPS. This example assumes you already have:
- A Linux server with Docker installed
- Ports open as needed (e.g., 80/443 behind a reverse proxy, or 3000 directly)
- A deploy user that can run Docker (often via the `docker` group)
Add a deploy job that runs only on `main` after `build-and-push`. It will SSH into your server, pull the image, and restart a container.
```yaml
  deploy:
    needs: [build-and-push]
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            set -euo pipefail
            IMAGE="ghcr.io/${{ github.repository }}/ci-cd-demo:${{ github.sha }}"
            CONTAINER="ci-cd-demo"
            docker login ghcr.io -u "${{ github.actor }}" -p "${{ secrets.GITHUB_TOKEN }}"
            docker pull "$IMAGE"
            if docker ps -a --format '{{.Names}}' | grep -q "^${CONTAINER}$"; then
              docker rm -f "$CONTAINER"
            fi
            docker run -d \
              --name "$CONTAINER" \
              --restart unless-stopped \
              -p 3000:3000 \
              -e PORT=3000 \
              "$IMAGE"
            docker image prune -f
```
Secrets to set in GitHub (Settings → Secrets and variables → Actions):
- `DEPLOY_HOST`: your server IP or hostname
- `DEPLOY_USER`: e.g., `deploy`
- `DEPLOY_SSH_KEY`: a private key (create a dedicated deploy key)
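To create that dedicated deploy key, you can generate an ed25519 pair locally (the filenames and comment here are examples, not required names):

```shell
# Generate a dedicated deploy keypair; no passphrase, since CI uses it non-interactively
ssh-keygen -t ed25519 -N "" -C "github-actions-deploy" -f deploy_key

# deploy_key     -> paste into the DEPLOY_SSH_KEY secret
# deploy_key.pub -> append to ~/.ssh/authorized_keys for the deploy user on the server
cat deploy_key.pub
```

Keep this key separate from your personal SSH key so you can revoke it without locking yourself out.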
Tip: If you use a reverse proxy (Nginx, Caddy, Traefik), map the container to an internal port and let the proxy handle TLS and routing.
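As a sketch of that setup with Caddy (the domain and loopback binding are assumptions, not from this repo), the whole proxy config can be two lines:

```
app.example.com {
    reverse_proxy 127.0.0.1:3000
}
```

Then change the deploy script’s port mapping to `-p 127.0.0.1:3000:3000` so the app is reachable only through the proxy; for a real domain, Caddy provisions TLS certificates automatically.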
Common pipeline mistakes (and how to avoid them)
- **Deploying on every branch.** Keep deployments locked to `main` (or release tags). Use `if:` guards.
- **Testing the wrong thing.** If your build artifact is a Docker image, consider adding a smoke test that runs the container and hits `/health`.
- **Leaking secrets in logs.** Never echo secrets. Prefer GitHub Secrets and minimal output. Avoid printing environment dumps.
- **No rollback plan.** Tag images with `${{ github.sha }}` (we did). Rollback becomes: “deploy the previous SHA.”
- **Slow builds.** Use dependency caching (Node cache) and Docker Buildx cache (`type=gha`).
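One way to make that rollback concrete is a manually triggered workflow that takes a SHA as input and reuses the same deploy script. This is a sketch under the same assumptions as the deploy job (same secrets, same container name):

```yaml
name: Rollback

on:
  workflow_dispatch:
    inputs:
      sha:
        description: "Previously deployed image SHA"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Redeploy a known-good image over SSH
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            IMAGE="ghcr.io/${{ github.repository }}/ci-cd-demo:${{ inputs.sha }}"
            docker pull "$IMAGE"
            docker rm -f ci-cd-demo || true
            docker run -d --name ci-cd-demo --restart unless-stopped \
              -p 3000:3000 -e PORT=3000 "$IMAGE"
```

Because every deploy is tagged with its commit SHA, “roll back” is just “deploy an older tag”; no rebuild, no git revert under pressure.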
Optional: Add a container smoke test in CI
This catches “it builds but doesn’t run” issues. Add this to the build-and-push job before pushing (or in a separate job):
```yaml
      - name: Build image (local)
        uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          load: true
          tags: local/ci-cd-demo:test
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Smoke test container
        run: |
          set -e
          docker run -d --rm -p 3000:3000 --name smoke local/ci-cd-demo:test
          for i in 1 2 3 4 5; do
            if curl -fsS http://localhost:3000/health >/dev/null; then
              echo "Smoke test OK"
              docker stop smoke
              exit 0
            fi
            sleep 1
          done
          echo "Smoke test failed"
          docker logs smoke || true
          docker stop smoke || true
          exit 1
```
Wrap-up: A pipeline you can actually ship
You now have a CI/CD pipeline that’s practical for real projects:
- Tests run on every PR
- Main branch builds a Docker image with cache
- A vulnerability scan blocks risky releases
- Deployments pull immutable images by SHA
From here, the most valuable upgrades are: environment-specific deploys (staging → prod), database migration steps, and “deploy only after approval” using GitHub Environments. But even without those, this setup is a solid baseline that keeps you from shipping broken builds—and keeps deployments boring (the best kind).