API Testing in Practice: Contract Checks + Smoke Tests with Pytest and Schemathesis

API testing doesn’t have to mean “write 200 fragile tests.” For most web apps, you’ll get huge confidence by combining:

  • Contract testing: validate responses match your OpenAPI schema
  • Smoke tests: a small set of critical, readable tests (auth, create/read flows)
  • CI-friendly execution: deterministic, fast, and easy to debug

This hands-on guide shows a practical setup using pytest + requests for smoke tests, and schemathesis for automated schema-based checks. It’s ideal for junior/mid devs because you’ll learn how to test APIs like a real project: a few human-written tests + lots of automated coverage.

What we’ll build

We’ll test a sample REST API that supports:

  • POST /auth/login → returns a bearer token
  • POST /projects → creates a project
  • GET /projects/{id} → fetches a project
  • GET /health → basic health check

You can apply the same pattern to any API (FastAPI, Laravel, Express, etc.) as long as it has an OpenAPI spec (or you can generate one).

Project setup

Create a folder and install dependencies:

mkdir api-tests && cd api-tests
python -m venv .venv
# macOS/Linux:
source .venv/bin/activate
# Windows:
# .venv\Scripts\activate
pip install pytest requests python-dotenv schemathesis hypothesis

Recommended structure:

api-tests/
  .env
  pytest.ini
  openapi.yaml
  tests/
    conftest.py
    test_smoke.py
    test_contract.py

Put your base URL and credentials in .env:

BASE_URL=http://localhost:8000
TEST_USER_EMAIL=test@example.com
TEST_USER_PASSWORD=secret123

Add a tiny OpenAPI spec (example)

If your API already has OpenAPI (Swagger) available (like /openapi.json), you can point Schemathesis to that URL. If not, start with a minimal openapi.yaml (trimmed example):

openapi: 3.0.3
info:
  title: Example API
  version: "1.0"
servers:
  - url: http://localhost:8000
paths:
  /health:
    get:
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                required: [status]
  /auth/login:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                email: { type: string }
                password: { type: string }
              required: [email, password]
      responses:
        "200":
          description: Token
          content:
            application/json:
              schema:
                type: object
                properties:
                  access_token: { type: string }
                  token_type: { type: string }
                required: [access_token, token_type]
  /projects:
    post:
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name: { type: string }
              required: [name]
      responses:
        "201":
          description: Created
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
                required: [id, name]
  /projects/{id}:
    get:
      security:
        - bearerAuth: []
      parameters:
        - in: path
          name: id
          required: true
          schema: { type: integer }
      responses:
        "200":
          description: Project
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
                required: [id, name]
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT

Even if your spec is incomplete at first, contract testing will quickly highlight gaps.

Pytest config: sane defaults

Create pytest.ini:

[pytest]
addopts = -q
testpaths = tests
markers =
    smoke: critical end-to-end checks
    contract: schema-driven checks

Shared helpers: base URL, auth token, request wrapper

Create tests/conftest.py:

import os
import uuid

import pytest
import requests
from dotenv import load_dotenv

load_dotenv()

BASE_URL = os.getenv("BASE_URL", "http://localhost:8000").rstrip("/")
EMAIL = os.getenv("TEST_USER_EMAIL")
PASSWORD = os.getenv("TEST_USER_PASSWORD")


def api_url(path: str) -> str:
    if not path.startswith("/"):
        path = "/" + path
    return f"{BASE_URL}{path}"


@pytest.fixture(scope="session")
def session() -> requests.Session:
    s = requests.Session()
    # Helpful default headers; add more if needed (like Accept-Language)
    s.headers.update({"Content-Type": "application/json"})
    return s


@pytest.fixture(scope="session")
def token(session: requests.Session) -> str:
    if not EMAIL or not PASSWORD:
        pytest.skip("Missing TEST_USER_EMAIL / TEST_USER_PASSWORD in environment")
    resp = session.post(
        api_url("/auth/login"),
        json={"email": EMAIL, "password": PASSWORD},
        timeout=10,
    )
    assert resp.status_code == 200, resp.text
    data = resp.json()
    assert "access_token" in data, data
    return data["access_token"]


@pytest.fixture()
def auth_headers(token: str) -> dict:
    return {"Authorization": f"Bearer {token}"}


@pytest.fixture()
def unique_project_name() -> str:
    return f"test-project-{uuid.uuid4().hex[:8]}"

Key idea: keep test code clean. Every test should read like a small script, not a pile of setup.

Smoke tests: fast, readable, high-value

Create tests/test_smoke.py:

import pytest
import requests

from .conftest import api_url


@pytest.mark.smoke
def test_health(session: requests.Session):
    resp = session.get(api_url("/health"), timeout=5)
    assert resp.status_code == 200, resp.text
    body = resp.json()
    assert body.get("status") in ("ok", "OK", "healthy"), body


@pytest.mark.smoke
def test_create_and_fetch_project(
    session: requests.Session,
    auth_headers: dict,
    unique_project_name: str,
):
    # Create
    create_resp = session.post(
        api_url("/projects"),
        headers=auth_headers,
        json={"name": unique_project_name},
        timeout=10,
    )
    assert create_resp.status_code in (200, 201), create_resp.text
    created = create_resp.json()
    assert "id" in created and "name" in created, created
    assert created["name"] == unique_project_name
    project_id = created["id"]

    # Fetch
    get_resp = session.get(
        api_url(f"/projects/{project_id}"),
        headers=auth_headers,
        timeout=10,
    )
    assert get_resp.status_code == 200, get_resp.text
    fetched = get_resp.json()
    assert fetched["id"] == project_id
    assert fetched["name"] == unique_project_name

Two smoke tests can catch a surprising amount of breakage: routing, auth, DB migrations, serialization, and permissions.

Contract testing: automatically validate your API against OpenAPI

Schemathesis generates test cases from your OpenAPI schema and checks that responses conform to it. Think of it like “fuzzing, but focused on your API spec.”

Create tests/test_contract.py:

import os

import pytest
import requests
import schemathesis
from dotenv import load_dotenv

load_dotenv()

BASE_URL = os.getenv("BASE_URL", "http://localhost:8000").rstrip("/")
EMAIL = os.getenv("TEST_USER_EMAIL")
PASSWORD = os.getenv("TEST_USER_PASSWORD")

# Option A: Load from a local OpenAPI file
SCHEMA = schemathesis.from_path("openapi.yaml", base_url=BASE_URL)


def get_token() -> str | None:
    if not EMAIL or not PASSWORD:
        return None
    resp = requests.post(
        f"{BASE_URL}/auth/login",
        json={"email": EMAIL, "password": PASSWORD},
        timeout=10,
    )
    if resp.status_code != 200:
        return None
    return resp.json().get("access_token")


TOKEN = get_token()


def auth_header() -> dict:
    if not TOKEN:
        return {}
    return {"Authorization": f"Bearer {TOKEN}"}


@pytest.mark.contract
@SCHEMA.parametrize()
def test_openapi_contract(case):
    # Add auth header when available; public endpoints will ignore it
    response = case.call(headers=auth_header(), timeout=10)
    # Validate status code + response schema vs OpenAPI
    case.validate_response(response)

Run it:

pytest -m contract

If your API returns a field that’s missing from the schema (or has a different type), you’ll get a clear failure pointing to the mismatch. This is how teams keep clients and servers aligned without manually writing endless assertions.
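To make "mismatch" concrete, here is a tiny pure-Python sketch of the kind of field-by-field check a contract failure represents. Schemathesis does this for you with full JSON Schema validation; `find_mismatches`, `SCHEMA_PROPS`, and `REQUIRED` are illustrative names derived from the `/projects` response schema, not part of any library:

```python
# Illustrative only: a hand-rolled version of the check that contract
# testing automates. Field names mirror the /projects response schema.
SCHEMA_PROPS = {"id": int, "name": str}
REQUIRED = {"id", "name"}


def find_mismatches(body: dict) -> list[str]:
    """Return human-readable schema violations found in a response body."""
    problems = []
    for field in sorted(REQUIRED - body.keys()):
        problems.append(f"missing required field: {field}")
    for field, expected in SCHEMA_PROPS.items():
        if field in body and not isinstance(body[field], expected):
            problems.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(body[field]).__name__}"
            )
    return problems
```

A server that starts returning `"id": "42"` (a string) would fail this check immediately, which is exactly the drift contract tests catch.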

Make contract tests stable: limit scope and tune data generation

Schema-driven tests can hit endpoints in “weird” ways (that’s the point), but you still want actionable failures. Here are practical tips:

  • Start small: mark only a subset of endpoints in the OpenAPI spec until you trust the setup.
  • Constrain payloads: add minLength, maxLength, format, and enums in your schema to generate realistic inputs.
  • Handle auth properly: use security schemes and ensure your contract tests attach headers (like we did).
  • Skip destructive endpoints (e.g., DELETE /users) in contract tests, or run them in an isolated test environment.

If you need to skip some endpoints quickly, Schemathesis supports filtering. A simple approach is to create a smaller schema file for contract tests at first.
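One hedged way to sketch that smaller scope, without maintaining a second schema file, is an explicit allowlist consulted inside the parametrized test. `ALLOWED_OPERATIONS` and `is_contract_tested` are hypothetical names for this example:

```python
# Hypothetical allowlist: only these (method, path) pairs are contract-tested
# for now. Grow the set as you gain trust in the setup.
ALLOWED_OPERATIONS = {
    ("GET", "/health"),
    ("GET", "/projects/{id}"),
}


def is_contract_tested(method: str, path: str) -> bool:
    """Return True when an operation is in the contract-testing allowlist."""
    return (method.upper(), path) in ALLOWED_OPERATIONS


# Inside tests/test_contract.py you could then guard the parametrized test:
#
# @pytest.mark.contract
# @SCHEMA.parametrize()
# def test_openapi_contract(case):
#     if not is_contract_tested(case.method, case.path):
#         pytest.skip("endpoint not yet under contract testing")
#     response = case.call(headers=auth_header(), timeout=10)
#     case.validate_response(response)
```

Skipped operations still show up in the test report, so the shrinking skip count doubles as a progress metric.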

Run everything locally

Run smoke tests:

pytest -m smoke

Run all tests:

pytest

Tip: if your API is slow/flaky in dev, add retries only around known transient calls (like a cold start), not everywhere. Blanket retries hide real problems.
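A narrow retry for one known-transient call might look like this sketch. `get_with_retry` is a hypothetical helper, not part of requests; use it only around calls where you have actually observed transient failures:

```python
import time

import requests


def get_with_retry(url: str, retries: int = 3, backoff: float = 0.5,
                   timeout: int = 10, **kwargs):
    """GET with limited retries for known-transient failures (e.g. cold starts).

    Retries only on connection errors and 5xx responses; a 4xx response is
    returned immediately, because retrying a client error hides real bugs.
    """
    last_exc = None
    resp = None
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=timeout, **kwargs)
            if resp.status_code < 500:
                return resp
        except requests.ConnectionError as exc:
            last_exc = exc
        time.sleep(backoff * (2 ** attempt))  # simple exponential backoff
    if resp is not None:
        return resp  # give the caller the last 5xx response to assert on
    raise last_exc
```

Keeping the helper explicit (rather than configuring session-wide retries) makes it obvious in a test which call is expected to be flaky.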

CI example: GitHub Actions (minimal)

Here’s a small workflow that installs deps and runs tests. Save as .github/workflows/api-tests.yml:

name: API Tests

on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install pytest requests python-dotenv schemathesis hypothesis
      - name: Run tests
        env:
          BASE_URL: ${{ secrets.BASE_URL }}
          TEST_USER_EMAIL: ${{ secrets.TEST_USER_EMAIL }}
          TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
        run: |
          pytest

In a real app, you’d likely spin up the API + DB inside CI (Docker Compose) and set BASE_URL to that service. But even without that, this shows the pattern: keep secrets out of the repo and let CI run the same commands you run locally.
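A minimal sketch of that Compose setup might look like the following; the service names, image, and port mapping are assumptions about your app, not something this guide's API requires:

```yaml
# docker-compose.yml (hypothetical sketch: adjust image, build context,
# and environment to match your actual app)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test
  api:
    build: .
    depends_on:
      - db
    ports:
      - "8000:8000"
```

With this running in CI, `BASE_URL=http://localhost:8000` points the same test commands at the containerized API.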

Common pitfalls (and how to avoid them)

  • Tests depend on ordering: smoke tests should create their own data (like unique_project_name), not rely on pre-seeded records.
  • “Works on my machine” auth: always fetch a real token in tests. Avoid hardcoding JWTs.
  • Schema drift: if contract tests fail often, fix the schema or the API immediately. Drift compounds and breaks clients.
  • Ignoring timeouts: always set timeout on HTTP requests. Hanging tests waste CI minutes.
  • Over-testing trivial endpoints: write smoke tests for critical flows, not every CRUD route. Let contract tests cover breadth.

Next steps

Once this is working, you can level it up without adding much complexity:

  • Add pytest-xdist to run tests in parallel when your API can handle it.
  • Publish an HTML test report (e.g., pytest-html) for easy CI review.
  • Introduce “test environments” with seeded fixtures and a clean DB per run.
  • If you own both client and server, consider consumer-driven contract tools too—but start with OpenAPI validation first.

With one small smoke suite and automated schema checks, you’ll catch breaking changes early, keep your API honest, and ship with confidence—without drowning in brittle tests.

