CI/CD Pipelines in Practice: GitHub Actions + Preview Deployments + Safe Production Releases
A good CI/CD pipeline does two things: it catches problems early (CI) and it ships changes safely (CD). For junior/mid devs, the biggest unlock is building a pipeline you can trust: every push runs checks, every pull request gets a preview environment, and production releases are automated with guardrails.
In this hands-on guide, you’ll build a practical GitHub Actions pipeline for a typical web app (frontend + API). You’ll get:
- Fast CI: install, lint, test, build, cache dependencies
- PR preview deployments: a temporary environment per pull request
- Safe production releases: manual approval, health checks, and quick rollback
Examples use Node.js for the app, but the patterns apply to most stacks.
Project layout (example)
Assume a repo like this:
```
.
├── apps/
│   ├── web/   # frontend
│   └── api/   # backend
├── package.json
├── package-lock.json
└── .github/
    └── workflows/
```
We’ll use a single workflow that runs on pushes and PRs, plus a dedicated production workflow.
Step 1: Add a solid CI workflow (lint + tests + build)
Create .github/workflows/ci.yml:
```yaml
name: CI

on:
  pull_request:
  push:
    branches: [main]

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - name: Install dependencies
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Unit tests
        run: npm test -- --ci
      - name: Build
        run: npm run build
```
What’s practical here:
- `npm ci` gives repeatable installs (uses `package-lock.json`).
- `actions/setup-node` with `cache: npm` speeds up installs.
- `concurrency` cancels older runs on the same branch so you don’t waste minutes.
Tip: Keep the “build” step in CI even if you also build during deployment—this catches broken builds before merging.
Step 2: Add an integration test job with a database service
If your API uses Postgres/MySQL, run integration tests against a real service container. Here’s a Postgres example added to the same workflow:
```yaml
  integration:
    runs-on: ubuntu-latest
    needs: test
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd="pg_isready -U app"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - name: Wait for DB
        run: |
          for i in {1..30}; do
            if pg_isready -h localhost -p 5432 -U app; then
              echo "DB is ready"; exit 0
            fi
            sleep 1
          done
          echo "DB did not become ready in time" && exit 1
        env:
          PGPASSWORD: app
      - name: Run migrations
        run: npm run db:migrate
        env:
          DATABASE_URL: postgres://app:app@localhost:5432/app_test
      - name: Integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgres://app:app@localhost:5432/app_test
```
This pattern is gold for reliability: your integration tests run in a clean, repeatable environment. If your migrations are slow, consider caching build artifacts, but don’t cache database state—clean is good.
Step 3: PR Preview Deployments (one environment per pull request)
Preview deployments let reviewers click a link and test the feature branch. The exact host depends on your platform (Render, Fly.io, Vercel, Netlify, Kubernetes, etc.). The concept is the same: on every PR, deploy to an isolated “preview” target named after the PR number.
Below is a generic example that deploys using a shell script (you’ll swap the script content for your provider’s CLI).
Create .github/workflows/preview.yml:
```yaml
name: Preview Deploy

on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

permissions:
  contents: read
  pull-requests: write

concurrency:
  group: preview-${{ github.event.pull_request.number }}
  cancel-in-progress: true

jobs:
  deploy:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run build
      - name: Deploy preview
        run: ./scripts/deploy-preview.sh
        env:
          PR_NUMBER: ${{ github.event.pull_request.number }}
          GIT_SHA: ${{ github.sha }}
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
      - name: Comment preview URL
        uses: actions/github-script@v7
        with:
          script: |
            const pr = context.payload.pull_request.number;
            const url = `https://preview-${pr}.example.com`;
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: pr,
              body: `✅ Preview deployed: ${url}`
            });

  destroy:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      # Checkout is needed here too; without it the script doesn't exist on the runner.
      - uses: actions/checkout@v4
      - name: Destroy preview
        run: ./scripts/destroy-preview.sh
        env:
          PR_NUMBER: ${{ github.event.pull_request.number }}
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```
Now add the scripts. Create scripts/deploy-preview.sh:
```bash
#!/usr/bin/env bash
set -euo pipefail

: "${PR_NUMBER:?Missing PR_NUMBER}"
: "${DEPLOY_TOKEN:?Missing DEPLOY_TOKEN}"
: "${GIT_SHA:?Missing GIT_SHA}"

APP_NAME="myapp-preview-${PR_NUMBER}"

echo "Deploying ${APP_NAME} at commit ${GIT_SHA}"

# Replace the section below with your host/provider commands.
# Examples:
# - flyctl deploy --app "$APP_NAME" --build-arg GIT_SHA="$GIT_SHA"
# - render deploy --service "$APP_NAME"
# - helm upgrade --install "$APP_NAME" ./chart --set image.tag="$GIT_SHA"

echo "Pretend we deployed using DEPLOY_TOKEN=${DEPLOY_TOKEN:0:4}****"
```
And scripts/destroy-preview.sh:
```bash
#!/usr/bin/env bash
set -euo pipefail

: "${PR_NUMBER:?Missing PR_NUMBER}"
: "${DEPLOY_TOKEN:?Missing DEPLOY_TOKEN}"

APP_NAME="myapp-preview-${PR_NUMBER}"

echo "Destroying ${APP_NAME}"

# Replace with your provider destroy command.
# e.g. flyctl apps destroy "$APP_NAME" --yes
# e.g. kubectl delete ns "preview-${PR_NUMBER}"

echo "Pretend we destroyed ${APP_NAME}"
```
Make scripts executable:
```bash
chmod +x scripts/deploy-preview.sh scripts/destroy-preview.sh
```
Key idea: tie preview environments to PR number. Always clean them up on PR close to avoid cloud bill surprises.
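Because the app name encodes the PR number, you can also add a periodic safety net that reconciles deployed previews against open PRs and flags orphans. Here’s a hedged sketch: `list_preview_apps` is a hypothetical stand-in for your provider’s list command, and the `myapp-preview-` prefix matches the scripts above; the open-PR list could come from `gh pr list --state open --json number --jq '.[].number'`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Extract the PR number embedded in a preview app name.
pr_number_of() {
  echo "${1##myapp-preview-}"
}

# orphaned_previews <deployed-apps-file> <open-pr-numbers-file>
# Prints every deployed preview whose PR number is no longer in the open list.
orphaned_previews() {
  while read -r app; do
    n=$(pr_number_of "$app")
    grep -qx "$n" "$2" || echo "$app"
  done < "$1"
}
```

Each orphan it prints can then be fed to your destroy script (or provider CLI) on a nightly schedule.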
Step 4: Production releases with guardrails (approval + health check)
For production, you want a controlled workflow: releases only from main, optional manual approval, and verification after deploy.
Create .github/workflows/release.yml:
```yaml
name: Release

on:
  push:
    branches: [main]

permissions:
  contents: read

jobs:
  deploy-production:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://app.example.com
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run build
      - name: Deploy
        run: ./scripts/deploy-production.sh
        env:
          GIT_SHA: ${{ github.sha }}
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
      - name: Health check
        run: ./scripts/healthcheck.sh
        env:
          HEALTHCHECK_URL: https://app.example.com/health
```
To enable manual approvals, configure the production environment in GitHub repo settings and add required reviewers. The workflow will pause before deployment until someone approves.
Add scripts/deploy-production.sh:
```bash
#!/usr/bin/env bash
set -euo pipefail

: "${DEPLOY_TOKEN:?Missing DEPLOY_TOKEN}"
: "${GIT_SHA:?Missing GIT_SHA}"

echo "Deploying production commit ${GIT_SHA}"

# Replace with provider-specific deploy command.
# e.g. flyctl deploy --app myapp-prod --build-arg GIT_SHA="$GIT_SHA"
# e.g. helm upgrade --install myapp ./chart --set image.tag="$GIT_SHA"

echo "Pretend production deployed."
```
Add scripts/healthcheck.sh:
```bash
#!/usr/bin/env bash
set -euo pipefail

: "${HEALTHCHECK_URL:?Missing HEALTHCHECK_URL}"

echo "Checking ${HEALTHCHECK_URL}"

for i in {1..20}; do
  status=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTHCHECK_URL" || true)
  if [[ "$status" == "200" ]]; then
    echo "Healthy ✅"
    exit 0
  fi
  echo "Not ready yet (status=${status}) - retrying..."
  sleep 3
done

echo "Health check failed ❌"
exit 1
```
That last step is underrated: if production isn’t healthy after deploy, the workflow fails and your team immediately knows.
Step 5: Quick rollback strategy (simple but effective)
Rollback depends on your hosting approach, but a practical baseline is: keep the last known-good release identifier and redeploy it quickly.
A lightweight approach is tagging each production deployment and letting your platform deploy by tag. Here’s a simple GitHub Actions job you can trigger manually:
```yaml
name: Rollback

on:
  workflow_dispatch:
    inputs:
      git_sha:
        description: "Commit SHA to roll back to"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://app.example.com
    steps:
      - uses: actions/checkout@v4
      - name: Roll back
        run: ./scripts/deploy-production.sh
        env:
          GIT_SHA: ${{ inputs.git_sha }}
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
      - name: Health check
        run: ./scripts/healthcheck.sh
        env:
          HEALTHCHECK_URL: https://app.example.com/health
```
If you store the deployed SHA somewhere (a release note, a deployment log, or even a file in your infra), rolling back becomes a copy/paste operation instead of a stressful scramble.
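One lightweight way to do that bookkeeping is an append-only deploy log, so the "last known-good SHA" is always one `tail` away. This is a hedged sketch, not from the workflows above: the log file name and the helper names are illustrative, and in CI you would persist the file somewhere durable (an artifact, a bucket, or an infra repo) rather than the runner's disk.

```shell
#!/usr/bin/env bash
set -euo pipefail

# record_release <log-file> <sha>
# Append "timestamp sha" so the last line is always the newest release.
record_release() {
  printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$2" >> "$1"
}

# last_good_sha <log-file>: the most recently recorded release.
last_good_sha() {
  tail -n 1 "$1" | awk '{print $2}'
}

# previous_sha <log-file>: the release before it, i.e. the rollback target.
previous_sha() {
  tail -n 2 "$1" | head -n 1 | awk '{print $2}'
}
```

In the release workflow you would call `record_release` only after the health check passes; the rollback workflow then reads `previous_sha` instead of asking a human to dig through commit history.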
Common mistakes (and how to avoid them)
- Slow pipelines: use dependency caching, split jobs, and cancel outdated runs with `concurrency`.
- CI passes but prod breaks: ensure CI runs the same build command and key tests as prod.
- Preview drift: previews should use the same deployment path as prod, just different config/env names.
- No guardrails: add environment approvals and health checks—small effort, huge safety gain.
- Secrets leaking: never echo secrets; keep them in `secrets` and restrict environment access.
What to implement next
Once you have the pipeline above working, the next upgrades that pay off fast are:
- Add a `coverage` report artifact and fail the build if coverage drops too low.
- Upload build artifacts (`dist/`) to speed up deploy jobs instead of rebuilding.
- Add smoke tests against the deployed preview URL (login page loads, key endpoint responds).
- Notify Slack/Teams on deploy success/failure.
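The smoke-test item above can be sketched in the same style as the health check script. This is a hedged example: `BASE_URL` and the endpoint paths (`/` and `/api/health`) are placeholders for whatever your app actually serves.

```shell
#!/usr/bin/env bash
set -euo pipefail

# check <path> <expected-status>: curl the path and compare the HTTP status.
# On connection failure curl still emits "000" for %{http_code}.
check() {
  local status
  status=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" "${BASE_URL}${1}" || true)
  if [[ "$status" != "$2" ]]; then
    echo "FAIL ${1}: got status ${status}, expected ${2}"
    return 1
  fi
  echo "OK ${1} (${status})"
}

# Only run the checks when a target URL was provided (e.g. the preview URL).
if [[ -n "${BASE_URL:-}" ]]; then
  check "/" 200
  check "/api/health" 200
fi
```

Run it as a step right after the preview deploy with `BASE_URL` set to the commented preview URL; a failing check fails the PR before a reviewer ever clicks the link.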
With CI checks, preview deployments, production approvals, and health checks in place, you’ll have a pipeline that’s not just “automated,” but actually dependable—something teams can ship with daily.