CI/CD Pipelines You Can Actually Ship: A Practical GitHub Actions Blueprint (Tests, Lint, Docker, Deploy)

CI/CD (Continuous Integration / Continuous Delivery) is just an automated checklist that runs every time you push code. For junior/mid developers, the goal isn’t “enterprise-grade everything” — it’s getting to a pipeline that:

  • Blocks broken code from merging (CI).
  • Builds a reproducible artifact (usually a Docker image).
  • Optionally deploys to a staging/production environment (CD) with safe guardrails.

This article walks you through a hands-on pattern you can copy into most web apps (Node, Python, PHP, etc.). We’ll use GitHub Actions, but the concepts map to any CI system.

What We’re Building

A pipeline triggered on every pull request and main-branch push:

  • lint: fast style + static checks
  • test: unit/integration tests
  • build: build & push Docker image (only on main)
  • deploy: deploy to a server via SSH (only on main)

Key idea: keep PR checks fast; run slower steps only after merge.

Repository Setup (Minimal)

We’ll assume your repo has:

  • Tests runnable with a single command (e.g., npm test or pytest).
  • A linter/formatter (e.g., eslint, ruff, phpcs).
  • A Dockerfile (we’ll provide a sample).

Put GitHub workflow files under .github/workflows/.

Step 1: A Solid Dockerfile (Because Your CI Needs Reproducibility)

Even if you don’t deploy containers today, building in CI forces you to “make it reproducible.” Here’s a practical Node example (swap for your stack as needed):

```dockerfile
# Dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Notes:

  • npm ci is deterministic and faster in CI than npm install.
  • Separate dependency install from source copy to improve Docker layer caching.

Step 2: The CI Workflow (Lint + Test on Pull Requests)

Create .github/workflows/ci.yml:

```yaml
name: CI

on:
  pull_request:
  push:
    branches: [ "main" ]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint

  test:
    runs-on: ubuntu-latest
    needs: [ lint ]
    steps:
      - uses: actions/checkout@v4
      - name: Use Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install
        run: npm ci
      - name: Test
        run: npm test
```

Why this structure works:

  • needs: [ lint ] makes tests run only after lint passes.
  • Node caching speeds up repeated installs.
  • PRs get fast feedback and predictable gates.
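One small, optional addition that helps keep PR feedback fast: a top-level `concurrency` block cancels superseded runs when you push new commits to the same branch. This is a sketch; the group name is arbitrary, as long as it is unique per ref:

```yaml
# add at the top level of ci.yml
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
```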

If you’re on Python, the same idea looks like:

```yaml
# example steps (swap into the jobs above)
- uses: actions/setup-python@v5
  with:
    python-version: "3.12"
    cache: "pip"
- run: pip install -r requirements.txt -r requirements-dev.txt
- run: ruff check .
- run: pytest -q
```

Step 3: Build & Push a Docker Image (Only on main)

Now add a second workflow: .github/workflows/release.yml. This will build and push an image after merges to main.

First, decide how the workflow authenticates to the registry:

  • If you push to GHCR with the built-in GITHUB_TOKEN (as below), no extra secret is needed — the `permissions: packages: write` block grants it access. Alternatively, create a GHCR_TOKEN repository secret (a personal access token with package write permissions) and reference that instead.

```yaml
name: Release

on:
  push:
    branches: [ "main" ]

permissions:
  contents: read
  packages: write

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}
```

This produces two tags:

  • :latest for “current main”
  • :<sha> for immutable rollbacks
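Those `:<sha>` tags are what make rollbacks trivial. As one possible sketch, a manually triggered rollback workflow could pin the server to a previous SHA — this assumes the server's docker-compose.yml is parameterized as `image: ghcr.io/OWNER/REPO:${TAG:-latest}` and reuses the deploy secrets introduced in Step 4:

```yaml
# .github/workflows/rollback.yml -- a sketch, not a drop-in file
name: Rollback

on:
  workflow_dispatch:
    inputs:
      sha:
        description: "Commit SHA of the image to roll back to"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Pin image and restart over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            set -e
            cd "${{ secrets.APP_DIR }}"
            export TAG="${{ inputs.sha }}"
            docker compose pull
            docker compose up -d
```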

Step 4: A Straightforward Deploy (SSH + docker compose)

This is a pragmatic pattern for small teams: your server pulls the new image and restarts the service with docker compose. You’ll need these GitHub secrets:

  • DEPLOY_HOST (server IP/hostname)
  • DEPLOY_USER (e.g., ubuntu)
  • DEPLOY_SSH_KEY (private key for SSH)
  • APP_DIR (path on server, e.g., /home/ubuntu/app)

On the server, you’ll have a docker-compose.yml like:

```yaml
# docker-compose.yml (on the server)
services:
  web:
    image: ghcr.io/OWNER/REPO:latest
    ports:
      - "80:3000"
    environment:
      - NODE_ENV=production
    restart: unless-stopped
```

Add a deploy job to release.yml:

```yaml
  deploy:
    runs-on: ubuntu-latest
    needs: [ build_and_push ]
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            set -e
            cd "${{ secrets.APP_DIR }}"
            docker compose pull
            docker compose up -d
            docker image prune -f
```

That’s it: merge to main → build image → server pulls latest → restarts.

Make It Safer: Guardrails You Should Add Early

Once it works, harden it a bit without overengineering:

  • Deploy only from protected branches: protect main in GitHub settings so only PR merges can reach it.
  • Require CI checks: make lint and test required before merging.
  • Use environments: GitHub Environments can require manual approval for production deploys.

Example: require approval for production by using an environment:

```yaml
  deploy:
    environment: production
    runs-on: ubuntu-latest
    needs: [ build_and_push ]
    steps:
      # same steps as above...
```

Speed Tips That Matter in Real Projects

  • Fail fast: lint first, tests second, build last.
  • Split test types: run unit tests on PR, run slower integration/e2e after merge or nightly.
  • Cache dependencies: use built-in caching (setup-node cache, setup-python cache, etc.).
  • Pin versions: use exact Node/Python versions so CI matches dev.
  • Tag immutable releases: keep :sha tags so rollback is “deploy previous SHA.”
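To make "split test types" concrete, one common pattern is a separate scheduled workflow for the slow suite. This sketch assumes your project defines an `npm run test:e2e` script; swap in whatever slow-test command your stack uses:

```yaml
# .github/workflows/nightly.yml -- a sketch; assumes an "npm run test:e2e" script
name: Nightly E2E

on:
  schedule:
    - cron: "0 3 * * *"   # every night at 03:00 UTC
  workflow_dispatch: {}    # also allow manual runs

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - run: npm ci
      - run: npm run test:e2e
```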

Debugging CI Like a Pro (Without Guessing)

When pipelines fail, avoid “try random changes.” Do this instead:

  • Reproduce locally: run the same commands CI runs (npm ci, npm test) in a clean environment.
  • Print versions: add steps like node -v or python --version.
  • Log critical env vars: never print secrets, but do log non-sensitive config.
  • Use set -e in deploy scripts so failures stop immediately.

A tiny “debug step” example:

```yaml
- name: Debug info
  run: |
    node -v
    npm -v
    ls -la
```

Where to Go Next

If you have this baseline running, you’re already ahead of many projects. Next upgrades (pick one at a time):

  • Add a staging environment that auto-deploys on main, and keep production manual approval.
  • Run database migrations as a separate deploy step (carefully, with backups).
  • Publish build artifacts (bundles, coverage reports) for easy review.
  • Add a scheduled workflow for security updates and dependency audits.
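The scheduled-audit idea from the last bullet can be as small as this sketch (Node shown; substitute `pip-audit`, `composer audit`, etc. for other stacks):

```yaml
# .github/workflows/audit.yml -- a sketch; tune the schedule and audit level
name: Dependency Audit

on:
  schedule:
    - cron: "0 6 * * 1"   # Mondays at 06:00 UTC

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm audit --audit-level=high
```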

Copy the workflows above, tweak commands to match your stack, and you’ll have a CI/CD pipeline that’s practical, understandable, and shippable.
