API Testing with Pytest + httpx: A Practical Pattern for Reliable Integration Tests

If you’ve ever shipped an API and then broken it with a “small” change, you already know why integration testing matters. Unit tests are great, but they don’t catch miswired routes, auth middleware issues, serialization mismatches, or “works locally” environment surprises.

This guide shows a hands-on, repeatable pattern for API integration testing using pytest + httpx. You’ll learn how to:

  • Test real HTTP behavior (status codes, headers, JSON shape)
  • Keep tests fast with a shared client + clean fixtures
  • Handle auth tokens, seeded data, and idempotent cleanup
  • Run tests locally and in CI with environment-based config

Examples assume a typical JSON REST API (Node, Laravel, FastAPI, etc.). You’ll hit a running server, so this works no matter what your backend is written in.

Project Setup

Create a lightweight test project. In an existing repo, put tests under tests/.

pip install pytest httpx python-dotenv

Recommended structure:

```
your-repo/
├── tests/
│   ├── conftest.py
│   ├── test_health.py
│   ├── test_users.py
│   └── test_orders.py
├── .env.test
└── pytest.ini
```

Create pytest.ini to keep output tidy:

```ini
[pytest]
addopts = -q
testpaths = tests
```

Create .env.test for test-only config:

```
API_BASE_URL=http://localhost:8000
API_ADMIN_EMAIL=admin@example.com
API_ADMIN_PASSWORD=admin_password
```

A Shared HTTP Client Fixture (Fast + Clean)

The biggest beginner mistake is creating a new client per test and repeating base URLs everywhere. Instead, create a single configured client fixture in tests/conftest.py.

```python
import os

import httpx
import pytest
from dotenv import load_dotenv

load_dotenv(".env.test")


@pytest.fixture(scope="session")
def base_url() -> str:
    url = os.getenv("API_BASE_URL", "http://localhost:8000")
    return url.rstrip("/")


@pytest.fixture(scope="session")
def client(base_url: str):
    # A single client for the whole test session: connection pooling, less overhead
    with httpx.Client(base_url=base_url, timeout=10.0) as c:
        yield c
```

Now every test can call client.get("/route") without reconfiguring anything.

Smoke Test: Health Endpoint

Start with a cheap test that confirms the server is up and responding.

```python
def test_health(client):
    r = client.get("/health")
    assert r.status_code == 200
    data = r.json()
    # Keep the assertion flexible but meaningful
    assert data.get("status") in ("ok", "healthy", "up")
```

This test is your canary. If it fails, there’s no point debugging deeper tests until the environment is correct.

Auth Once, Reuse Everywhere

Most APIs need auth. A practical pattern is: log in once per test session, then inject the token via a fixture.

Assume your API has a login endpoint like POST /auth/login returning {"access_token": "..."}.

```python
import os

import pytest


@pytest.fixture(scope="session")
def admin_token(client):
    email = os.getenv("API_ADMIN_EMAIL")
    password = os.getenv("API_ADMIN_PASSWORD")
    r = client.post("/auth/login", json={"email": email, "password": password})
    assert r.status_code == 200, f"Login failed: {r.status_code} {r.text}"
    token = r.json().get("access_token")
    assert token, "No access_token in login response"
    return token


@pytest.fixture()
def auth_headers(admin_token):
    return {"Authorization": f"Bearer {admin_token}"}
```

This keeps authentication code out of your test bodies, and avoids repeated logins slowing everything down.

Testing a Typical CRUD Flow (Create → Read → Update → Delete)

Let’s test a /users resource. The goal: assert behavior and contract, not implementation.

Example assumptions:

  • POST /users creates a user
  • GET /users/{id} fetches it
  • PATCH /users/{id} updates it
  • DELETE /users/{id} deletes it
```python
import uuid


def test_users_crud(client, auth_headers):
    # 1) Create
    email = f"test-{uuid.uuid4().hex[:10]}@example.com"
    payload = {"email": email, "name": "Test User", "role": "viewer"}
    r = client.post("/users", json=payload, headers=auth_headers)
    assert r.status_code in (200, 201), r.text
    created = r.json()
    assert "id" in created
    user_id = created["id"]
    assert created["email"] == email
    assert created["name"] == "Test User"

    # 2) Read
    r = client.get(f"/users/{user_id}", headers=auth_headers)
    assert r.status_code == 200, r.text
    fetched = r.json()
    assert fetched["id"] == user_id
    assert fetched["email"] == email

    # 3) Update (PATCH)
    r = client.patch(
        f"/users/{user_id}",
        json={"name": "Updated Name"},
        headers=auth_headers,
    )
    assert r.status_code in (200, 204), r.text

    # Re-fetch to confirm update if API returns 204
    r = client.get(f"/users/{user_id}", headers=auth_headers)
    assert r.status_code == 200
    assert r.json()["name"] == "Updated Name"

    # 4) Delete
    r = client.delete(f"/users/{user_id}", headers=auth_headers)
    assert r.status_code in (200, 204), r.text

    # Confirm deletion
    r = client.get(f"/users/{user_id}", headers=auth_headers)
    assert r.status_code in (404, 410)
```

Why this pattern works well:

  • Uses a unique email so tests don’t collide
  • Validates status codes and response shape
  • Verifies the observable API behavior end-to-end
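Once several tests need a throwaway user, the create/delete steps can be factored into a small context manager. A sketch, assuming the same hypothetical /users endpoints (temp_user and unique_email are names introduced here):

```python
import uuid
from contextlib import contextmanager


def unique_email(prefix: str = "test-") -> str:
    # Collision-proof address, matching the prefix the CRUD test uses
    return f"{prefix}{uuid.uuid4().hex[:10]}@example.com"


@contextmanager
def temp_user(client, headers, **overrides):
    # Create a throwaway user, hand it to the test, delete it afterwards
    payload = {"email": unique_email(), "name": "Temp User", "role": "viewer", **overrides}
    r = client.post("/users", json=payload, headers=headers)
    assert r.status_code in (200, 201), r.text
    user = r.json()
    try:
        yield user
    finally:
        # Best-effort cleanup even if the test body raised
        client.delete(f"/users/{user['id']}", headers=headers)
```

A test can then write `with temp_user(client, auth_headers) as user:` and not worry about leftover rows.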

Contract Checks: Validate JSON Shape Without Extra Libraries

You don’t need a full schema tool to catch common breakages. Add lightweight “shape assertions” that will fail when fields disappear or types change.

```python
def assert_user_shape(u: dict):
    assert isinstance(u.get("id"), (int, str))
    assert isinstance(u.get("email"), str) and "@" in u["email"]
    assert isinstance(u.get("name"), str)


def test_users_list_has_expected_shape(client, auth_headers):
    r = client.get("/users?limit=5", headers=auth_headers)
    assert r.status_code == 200
    data = r.json()
    # Some APIs return {"items": [...]} instead of a bare list
    items = data["items"] if isinstance(data, dict) and "items" in data else data
    assert isinstance(items, list)
    for u in items:
        assert_user_shape(u)
```

This catches the “frontend expects name but backend renamed it” class of regressions immediately.
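If you find yourself writing one shape helper per resource, a tiny generic check covers the "field disappeared" case for any of them (assert_has_keys is a name introduced here, not part of any library):

```python
def assert_has_keys(obj: dict, required: set):
    # Fails loudly, naming exactly which required fields went missing
    missing = required - obj.keys()
    assert not missing, f"missing keys: {missing}"
```

Per-resource helpers like assert_user_shape can then delegate the key check here and keep only the type assertions.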

Idempotent Cleanup: A Safety Net for Flaky Environments

Sometimes a test fails before deletion, leaving junk records. A practical solution is to clean up based on a predictable marker, like an email prefix.

If your API supports querying by email, add a cleanup helper. Even if you only run it locally, it saves time.

```python
def cleanup_test_users(client, auth_headers, prefix="test-"):
    # Example endpoint: GET /users?email_prefix=test-
    r = client.get(f"/users?email_prefix={prefix}", headers=auth_headers)
    if r.status_code != 200:
        return
    data = r.json()
    items = data["items"] if isinstance(data, dict) and "items" in data else data
    for u in items:
        user_id = u.get("id")
        if user_id:
            client.delete(f"/users/{user_id}", headers=auth_headers)


def test_users_crud_with_cleanup(client, auth_headers):
    try:
        # ... run your CRUD test steps here ...
        pass
    finally:
        cleanup_test_users(client, auth_headers)
```

Use this sparingly—your primary goal is deterministic tests—but it’s a good “seatbelt” when you’re building out coverage.

Testing Error Cases (The Ones That Break Production)

Don’t only test happy paths. A small set of negative tests catches common regressions: validation, auth, and not-found behavior.

```python
def test_create_user_requires_auth(client):
    r = client.post("/users", json={"email": "x@example.com", "name": "X"})
    assert r.status_code in (401, 403)


def test_create_user_validates_email(client, auth_headers):
    r = client.post("/users", json={"email": "not-an-email", "name": "X"}, headers=auth_headers)
    assert r.status_code in (400, 422)
    body = r.json()
    # Flexible check: just ensure we got an error message
    assert "error" in body or "detail" in body or "errors" in body


def test_get_missing_user_is_404(client, auth_headers):
    r = client.get("/users/does-not-exist", headers=auth_headers)
    assert r.status_code in (404, 410)
```

If you only add 5 tests to a new API, make at least 2 of them negative tests.

Running Tests Locally and in CI

Locally:

```shell
# Start your API in another terminal, then:
pytest
```

In CI, the key is to ensure the API is running before tests start. Common approaches:

  • Start API via docker-compose, then run pytest in a test container
  • Run API as a background service in the CI job (platform-specific)
  • Add a small “wait for health” step to avoid race conditions

Here’s a tiny Python “wait for health” script you can run before pytest (works in CI and locally):

```python
import os
import time

import httpx

base_url = os.getenv("API_BASE_URL", "http://localhost:8000").rstrip("/")
deadline = time.time() + 30  # 30 seconds total

while time.time() < deadline:
    try:
        r = httpx.get(f"{base_url}/health", timeout=2.0)
        if r.status_code == 200:
            print("API is up")
            raise SystemExit(0)
    except Exception:
        pass
    time.sleep(1)

print("API did not become healthy in time")
raise SystemExit(1)
```

Practical Tips to Keep Your API Tests Maintainable

  • Prefer stable assertions. Don’t assert exact error strings unless you must. Check status codes and presence of keys like error/detail.

  • Use unique identifiers. Prefix test data with test- and generate unique emails/IDs to avoid collisions.

  • Keep fixtures small. One fixture for client, one for token, one for auth headers is usually enough.

  • Don’t overmock. These tests are valuable because they’re real HTTP. If you mock everything, you’re back to unit tests.

  • Fail fast on setup. If login fails, stop with a clear message (as shown in the token fixture).
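The first tip can be captured as a tiny reusable helper: a stable error-body check that asserts some error key exists without pinning exact wording (assert_error_body is a name introduced here):

```python
def assert_error_body(body: dict):
    # Stable check: some error key is present; exact wording stays unpinned
    assert any(k in body for k in ("error", "detail", "errors")), body
```

Negative tests can then call assert_error_body(r.json()) and keep passing when the backend rewords its messages.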

Where to Go Next

Once this baseline is in place, you can level up without rewriting everything:

  • Add a schema validator (OpenAPI-based) to catch subtle contract drift
  • Run the same test suite against staging before deploy
  • Track flaky tests and eliminate hidden dependencies between tests
  • Measure coverage across endpoints by grouping tests per resource

The real win is consistency: a small, repeatable test pattern that your team actually uses. Start with health + auth + one CRUD flow + two negative tests, then expand as you ship.

