Practical API Testing with pytest: From “Happy Path” to Confidence

API bugs are expensive because they often slip past UI checks and show up in production as broken integrations, failed payments, or corrupted data. The goal of API testing isn’t to test “everything”—it’s to build a small, reliable safety net that quickly tells you: did we break the contract?

This hands-on guide shows how to test REST APIs in Python using pytest, with realistic patterns you can copy into a web app today: clean fixtures, positive/negative coverage, schema/contract checks, and CI-friendly execution.

What to test (a practical checklist)

For most teams, the most valuable API tests focus on:

  • Status codes: correct 2xx/4xx/5xx behavior for valid/invalid inputs.
  • Response shape: required fields exist and types make sense.
  • Error contract: errors are consistent (e.g., {"error": {"code": "...", "message": "..."}}).
  • Auth rules: no token → 401, wrong role → 403.
  • Edge cases: empty lists, large values, missing fields, unknown IDs.
  • Idempotency: repeated safe calls don’t mutate state unexpectedly.

Start with 5–15 high-signal tests around your critical endpoints. You can always expand later.
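To make the error-contract point concrete, here is a minimal, stdlib-only sketch of a reusable assertion, assuming the {"error": {"code": "...", "message": "..."}} shape from the checklist (adapt the keys to your own format):

```python
# A minimal error-contract check: assert structure and types,
# not exact wording, so copy changes don't break the suite.
def assert_error_contract(body: dict) -> None:
    assert isinstance(body.get("error"), dict), f"missing 'error' object: {body}"
    err = body["error"]
    assert isinstance(err.get("code"), str), f"missing string 'code': {err}"
    assert isinstance(err.get("message"), str), f"missing string 'message': {err}"
```

Call it from any negative test after `body = res.json()` to keep every endpoint honest about the same shape.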

Project setup

Create a minimal test environment:

python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install pytest httpx "pydantic[email]"   # [email] extra enables EmailStr validation

We’ll use httpx for HTTP calls (fast, modern, async-friendly), and pydantic to validate response shapes.

Organize your tests like a grown-up

A clean structure keeps your suite maintainable:

your-repo/
  tests/
    conftest.py
    test_users.py
    test_auth.py
  requirements-dev.txt
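If you track test-only dependencies in the requirements-dev.txt shown above, its contents would simply mirror the install step (a sketch; pin exact versions as your team prefers):

```text
pytest
httpx
pydantic[email]
```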

Step 1: A reusable API client fixture

Instead of repeating base URLs, headers, and timeouts in every test, centralize them in conftest.py.

# tests/conftest.py
import os

import httpx
import pytest

@pytest.fixture(scope="session")
def base_url() -> str:
    # e.g. "https://api.example.com" in prod; "http://localhost:8000" locally
    return os.getenv("API_BASE_URL", "http://localhost:8000")

@pytest.fixture
def client(base_url: str) -> httpx.Client:
    # Keep timeouts tight so failures are fast and obvious
    with httpx.Client(base_url=base_url, timeout=5.0) as c:
        yield c

Run tests against different environments by changing API_BASE_URL:

API_BASE_URL="https://staging.yourapp.com" pytest -q

Step 2: Validate response shape with pydantic

Checking status_code == 200 isn’t enough. If a field disappears or changes type, you want a loud failure. Define a model that matches your API contract.

# tests/test_users.py
from typing import Optional

from pydantic import BaseModel, EmailStr

class UserDTO(BaseModel):
    id: int
    email: EmailStr
    name: str
    is_active: bool
    # Optional fields are allowed to be missing or null
    avatar_url: Optional[str] = None

def test_get_user_happy_path(client):
    res = client.get("/v1/users/123")
    assert res.status_code == 200

    data = res.json()
    user = UserDTO.model_validate(data)  # raises if shape/types are wrong

    assert user.id == 123
    assert user.is_active is True

Now your test fails if:

  • email stops being a valid email
  • id becomes a string
  • is_active disappears

Step 3: Negative tests that catch real regressions

Negative tests are where teams get the most value—because production is full of invalid input.

import pytest

@pytest.mark.parametrize("user_id", [0, -1, 999999999])
def test_get_user_unknown_id_returns_404(client, user_id):
    res = client.get(f"/v1/users/{user_id}")
    assert res.status_code == 404

    body = res.json()
    # Keep your error format consistent across endpoints
    assert "error" in body
    assert "message" in body["error"]

If your API uses 400 for invalid IDs and 404 for missing records, test both. The point is to lock down the behavior clients depend on.

Step 4: Auth testing without pain

Most APIs have endpoints that behave differently for anonymous vs authenticated users. Create a fixture that supplies an auth token.

# tests/conftest.py (append)
import os

@pytest.fixture
def auth_headers() -> dict:
    token = os.getenv("API_TOKEN", "")
    if not token:
        pytest.skip("API_TOKEN not set; skipping authenticated tests")
    return {"Authorization": f"Bearer {token}"}
# tests/test_auth.py
def test_profile_requires_auth(client):
    res = client.get("/v1/me")
    assert res.status_code in (401, 403)

def test_profile_returns_user_when_authenticated(client, auth_headers):
    res = client.get("/v1/me", headers=auth_headers)
    assert res.status_code == 200
    body = res.json()
    assert "id" in body
    assert "email" in body

This pattern keeps your test suite runnable in any environment while still allowing secure, authenticated checks in CI/staging.

Step 5: Test “write” endpoints safely (and repeatably)

POST/PUT/PATCH tests can get flaky if they depend on existing data. Use a “create-then-cleanup” flow inside the test so it’s self-contained.

# tests/test_users.py
def test_create_user_then_delete(client, auth_headers):
    payload = {"email": "[email protected]", "name": "API Test User"}

    create = client.post("/v1/users", json=payload, headers=auth_headers)
    assert create.status_code == 201
    created = create.json()
    user_id = created["id"]

    get_res = client.get(f"/v1/users/{user_id}", headers=auth_headers)
    assert get_res.status_code == 200

    delete = client.delete(f"/v1/users/{user_id}", headers=auth_headers)
    assert delete.status_code in (200, 204)

    # Optional: confirm deletion
    after = client.get(f"/v1/users/{user_id}", headers=auth_headers)
    assert after.status_code == 404

If your API uses soft deletes, adjust the last assertion accordingly (e.g., is_active == false).

Step 6: Lightweight contract testing with OpenAPI (optional, high value)

If your service publishes an OpenAPI spec (often at /openapi.json), you can validate that responses match what you documented. A simple approach is to assert that the spec endpoint exists and that key paths are present.

# tests/test_contract.py
def test_openapi_exists_and_has_users_paths(client):
    res = client.get("/openapi.json")
    assert res.status_code == 200
    spec = res.json()

    assert "paths" in spec
    assert "/v1/users/{user_id}" in spec["paths"] or "/v1/users/{id}" in spec["paths"]

This won’t fully validate every response, but it catches common “spec drift” issues (missing endpoints, broken docs generation, wrong base path).

Make tests CI-friendly

Two tips make API suites behave well in CI:

  • Keep timeouts strict (e.g., 3–10 seconds). Slow APIs should fail fast and visibly.
  • Separate fast “smoke” tests from heavier flows using markers.

Add markers to your tests:

# tests/test_users.py
import pytest

@pytest.mark.smoke
def test_healthcheck(client):
    res = client.get("/health")
    assert res.status_code == 200

Run only smoke tests in quick pipelines:

pytest -q -m smoke
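One detail pytest requires: custom markers like smoke should be registered, otherwise pytest emits an unknown-mark warning (and with --strict-markers, an error). A minimal pytest.ini sketch:

```ini
# pytest.ini (repo root)
[pytest]
markers =
    smoke: fast checks safe to run on every pipeline
```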

Common pitfalls (and how to avoid them)

  • Over-testing implementation details: avoid asserting exact error messages if they change often—assert error codes and structure.
  • Shared state: tests that depend on fixed IDs or seeded data tend to break. Prefer create-and-cleanup flows.
  • Flaky time-dependent behavior: when testing timestamps, accept a reasonable range instead of equality.
  • Noisy failures: when a test fails, print useful context (status + body) so you can debug quickly.
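For the time-dependent pitfall above, a stdlib-only sketch of a range-based assertion (assumes ISO-8601 timestamps; the window sizes are illustrative, so tune them to your API):

```python
from datetime import datetime, timedelta, timezone

def assert_recent(iso_ts: str, max_age: timedelta = timedelta(minutes=1)) -> None:
    # Accept a window around "now" instead of asserting exact equality.
    # Normalize a trailing "Z" so fromisoformat can parse it.
    ts = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    now = datetime.now(timezone.utc)
    assert now - max_age <= ts <= now + timedelta(seconds=5), f"timestamp out of range: {iso_ts}"
```

Use it on fields like created_at right after a POST, where exact-equality checks are almost guaranteed to flake.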

Here’s a tiny helper for better failure output:

# tests/helpers.py
def assert_ok(res, expected_status=200):
    assert res.status_code == expected_status, f"status={res.status_code} body={res.text}"

Next steps

Once you have these basics in place, the highest ROI upgrades are:

  • Schema validation for more endpoints (using pydantic models per resource).
  • Property-based tests (generate lots of inputs) for validation-heavy endpoints.
  • Mock upstream dependencies if your API calls external services, so failures are deterministic.

If you implement just the fixtures + 10 contract-focused tests, you’ll have a suite that catches breaking changes early, documents expected behavior, and helps juniors contribute safely.
