MySQL Optimization in Practice: Make a “Slow List Page” Fast with EXPLAIN, Indexes, and Better Pagination

Most MySQL performance problems in web apps show up the same way: a list page that was fine in dev becomes painfully slow in production. Maybe you added more filters, the dataset grew, and suddenly “/orders” takes 6–10 seconds.

This article walks through a practical, repeatable workflow for optimizing a slow query: measure it, understand it with EXPLAIN, add the right index (not “all the indexes”), and fix common patterns like OFFSET-based pagination. Examples use MySQL 8.x, but most of it applies to 5.7 as well.

Scenario: A Slow Orders List

Imagine an e-commerce app with an orders table and an order_items table. The UI needs:

  • Filter by status
  • Filter by customer
  • Sort by newest
  • Paginate
  • Show total item count per order

Here’s a typical first attempt:

-- Slow query (common first draft)
SELECT
    o.id,
    o.customer_id,
    o.status,
    o.created_at,
    (SELECT COUNT(*)
       FROM order_items oi
      WHERE oi.order_id = o.id) AS items_count
FROM orders o
WHERE o.status = 'paid'
  AND o.customer_id = 123
ORDER BY o.created_at DESC
LIMIT 20 OFFSET 2000;

This can get slow for two main reasons:

  • OFFSET forces MySQL to scan/skip many rows.
  • The correlated subquery (counting items per row) can execute many times.

Step 1: Get the Table Shapes Right (and Data Types)

Before indexes, make sure columns have sensible types. Example schema:

CREATE TABLE orders (
    id           BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    customer_id  BIGINT UNSIGNED NOT NULL,
    status       VARCHAR(32) NOT NULL,
    created_at   DATETIME(3) NOT NULL,
    total_cents  INT UNSIGNED NOT NULL,
    -- ... other fields ...
    KEY idx_orders_customer_status_created (customer_id, status, created_at, id)
) ENGINE=InnoDB;

CREATE TABLE order_items (
    id        BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
    order_id  BIGINT UNSIGNED NOT NULL,
    sku       VARCHAR(64) NOT NULL,
    qty       INT UNSIGNED NOT NULL,
    KEY idx_order_items_order_id (order_id),
    CONSTRAINT fk_order_items_order FOREIGN KEY (order_id) REFERENCES orders(id)
) ENGINE=InnoDB;

Notes:

  • BIGINT UNSIGNED for IDs scales well and indexes efficiently.
  • DATETIME(3) is optional but nice if you store milliseconds.
  • The composite index on orders is deliberate (we’ll explain why next).

Step 2: Read the Query Plan with EXPLAIN

Run:

EXPLAIN
SELECT
    o.id,
    o.customer_id,
    o.status,
    o.created_at,
    (SELECT COUNT(*)
       FROM order_items oi
      WHERE oi.order_id = o.id) AS items_count
FROM orders o
WHERE o.status = 'paid'
  AND o.customer_id = 123
ORDER BY o.created_at DESC
LIMIT 20 OFFSET 2000;

What you’re looking for:

  • type: ALL is a full table scan (bad on big tables). Better values are ref, range, or const.
  • key: which index is being used (or NULL if none).
  • rows: estimated rows scanned (huge numbers often explain slowness).
  • Extra: warnings like Using filesort, Using temporary.

If you see no useful index used on orders, MySQL will scan many rows, sort them, then skip OFFSET rows. That’s wasted work.
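The habit of reading plans before and after an index change can be sketched without a MySQL server, using Python’s built-in sqlite3 as a stand-in (sqlite’s EXPLAIN QUERY PLAN plays the role of MySQL’s EXPLAIN here; a “SCAN” step maps roughly to type: ALL, and “SEARCH … USING INDEX” to type: ref):

```python
import sqlite3

# In-memory stand-in schema (sqlite3, not MySQL; types simplified for the demo).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        status TEXT NOT NULL,
        created_at TEXT NOT NULL
    )
""")

QUERY = """
    SELECT id, customer_id, status, created_at
    FROM orders
    WHERE customer_id = 123 AND status = 'paid'
    ORDER BY created_at DESC
    LIMIT 20
"""

def plan(conn, sql):
    # Each plan row's last column is a human-readable step description.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan(conn, QUERY)  # includes a SCAN step: a full table scan

conn.execute("""
    CREATE INDEX idx_orders_customer_status_created
        ON orders (customer_id, status, created_at, id)
""")

after = plan(conn, QUERY)   # now a SEARCH ... USING INDEX step
print(before, after)
```

Same workflow as with MySQL: run the plan, change the index, run it again, and confirm the scan turned into an indexed search.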

Step 3: Add a Composite Index That Matches the WHERE + ORDER BY

A common junior mistake is adding separate indexes on each column:

-- Usually NOT ideal as a first move
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_status   ON orders (status);
CREATE INDEX idx_orders_created  ON orders (created_at);

In most cases MySQL uses only one index per table in a query block (it can sometimes combine indexes with an index merge, but don’t rely on that). Your query filters by customer_id and status, then sorts by created_at. A composite index that follows the same pattern is often best:

CREATE INDEX idx_orders_customer_status_created
    ON orders (customer_id, status, created_at, id);

Why include id at the end? It breaks ties between rows that share the same created_at, making the sort order deterministic, and that stable ordering is exactly what cursor-based pagination relies on.

Now re-run EXPLAIN. Ideally you’ll see MySQL using idx_orders_customer_status_created and scanning far fewer rows for the first page. But what about OFFSET 2000? That still hurts.

Step 4: Replace OFFSET Pagination with Seek Pagination (Keyset)

LIMIT ... OFFSET ... gets slower as you go deeper because MySQL still has to find and skip earlier rows. A better approach is “seek pagination” (also called keyset pagination): fetch the next page using the last seen values.

Instead of page=101, the client passes a cursor like “created_at + id of the last row”. Query:

-- First page (no cursor yet)
SELECT id, customer_id, status, created_at
FROM orders
WHERE customer_id = 123
  AND status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 20;

Then for the next page, the client sends the last row’s created_at and id:

-- Next page using cursor (created_at, id)
SELECT id, customer_id, status, created_at
FROM orders
WHERE customer_id = 123
  AND status = 'paid'
  AND (created_at < '2026-04-13 08:15:10.123'
       OR (created_at = '2026-04-13 08:15:10.123' AND id < 987654))
ORDER BY created_at DESC, id DESC
LIMIT 20;

This stays fast even on deep pages because MySQL can jump into the index near the cursor and scan forward a small amount. That’s why we ordered by created_at, id and included them in the index.
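The seek pattern is easy to verify end to end. A minimal sketch using Python’s built-in sqlite3 as a stand-in for MySQL (the SQL has the same shape; the table and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, "
             "status TEXT, created_at TEXT)")
# 200 'paid' orders for customer 123, with deliberately colliding timestamps
# so the id tie-breaker actually matters.
rows = [(i, 123, "paid", f"2026-04-{1 + i % 28:02d} 08:00:00")
        for i in range(1, 201)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", rows)

NEXT_PAGE_SQL = """
    SELECT id, created_at FROM orders
    WHERE customer_id = ? AND status = 'paid'
      AND (created_at < ? OR (created_at = ? AND id < ?))
    ORDER BY created_at DESC, id DESC
    LIMIT 20
"""

def first_page(conn):
    return conn.execute(
        "SELECT id, created_at FROM orders "
        "WHERE customer_id = ? AND status = 'paid' "
        "ORDER BY created_at DESC, id DESC LIMIT 20", (123,)).fetchall()

def next_page(conn, cursor_created_at, cursor_id):
    return conn.execute(
        NEXT_PAGE_SQL,
        (123, cursor_created_at, cursor_created_at, cursor_id)).fetchall()

page1 = first_page(conn)
last_id, last_created = page1[-1][0], page1[-1][1]  # the cursor for page 2
page2 = next_page(conn, last_created, last_id)
```

The cursor is just the last row’s (created_at, id) pair; the two pages never overlap and the combined ordering stays strictly descending.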

Step 5: Fix the N+1 Counting Problem for item counts

The correlated subquery:

(SELECT COUNT(*) FROM order_items oi WHERE oi.order_id = o.id)

can run once per order row. For 20 rows it might be okay, but if you ever increase limits or join more data, it becomes expensive.

Better: join and aggregate in one pass:

SELECT
    o.id,
    o.customer_id,
    o.status,
    o.created_at,
    COUNT(oi.id) AS items_count
FROM orders o
LEFT JOIN order_items oi ON oi.order_id = o.id
WHERE o.customer_id = 123
  AND o.status = 'paid'
  AND (o.created_at < '2026-04-13 08:15:10.123'
       OR (o.created_at = '2026-04-13 08:15:10.123' AND o.id < 987654))
GROUP BY o.id, o.customer_id, o.status, o.created_at
ORDER BY o.created_at DESC, o.id DESC
LIMIT 20;

Make sure order_items(order_id) is indexed (we did idx_order_items_order_id). That allows the join to be efficient.
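As a sanity check that the grouped join returns the same counts as the per-row subquery, here is a small sqlite3 sketch (again a stand-in for MySQL, with data invented for the demo; note COUNT(oi.id) ignores the NULLs a LEFT JOIN produces for item-less orders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, status TEXT);
    CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INT);
    CREATE INDEX idx_order_items_order_id ON order_items (order_id);
""")
conn.executemany("INSERT INTO orders VALUES (?, 123, 'paid')",
                 [(i,) for i in (1, 2, 3)])
# order 1 -> 2 items, order 2 -> 0 items, order 3 -> 1 item
conn.executemany("INSERT INTO order_items (order_id) VALUES (?)",
                 [(1,), (1,), (3,)])

# N+1 style: one correlated COUNT per order row.
per_row = conn.execute("""
    SELECT o.id,
           (SELECT COUNT(*) FROM order_items oi WHERE oi.order_id = o.id)
    FROM orders o ORDER BY o.id
""").fetchall()

# Single pass: LEFT JOIN + GROUP BY, COUNT(oi.id) skips NULLs for order 2.
grouped = conn.execute("""
    SELECT o.id, COUNT(oi.id)
    FROM orders o
    LEFT JOIN order_items oi ON oi.order_id = o.id
    GROUP BY o.id ORDER BY o.id
""").fetchall()

print(per_row, grouped)
```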

Step 6: Use “Covering Indexes” When You Can

A covering index is one from which MySQL can answer the query using the index alone, with no lookup back to the full row (in InnoDB, no trip back to the clustered index). When that happens, EXPLAIN shows Using index in the Extra column. It can be a big win on read-heavy endpoints.

Example: if your list page only needs id, status, customer_id, created_at, you can ensure those columns are in the index:

-- Already covers these columns:
--   (customer_id, status, created_at, id)
-- so the list query can often be index-only.

But don’t blindly add more columns “just in case.” Wider indexes cost more disk and slow writes. Add columns when a specific hot query benefits.
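Whether a query is index-only shows up directly in the plan. In MySQL you look for Using index in EXPLAIN’s Extra column; the sqlite3 stand-in below reports USING COVERING INDEX for the same situation, and loses it as soon as the query touches a column outside the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER,
        status TEXT,
        created_at TEXT,
        total_cents INTEGER
    )
""")
conn.execute("""
    CREATE INDEX idx_orders_customer_status_created
        ON orders (customer_id, status, created_at, id)
""")

# Every selected column lives in the index, so no row lookup is needed.
covered = [row[-1] for row in conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id, customer_id, status, created_at FROM orders
    WHERE customer_id = 123 AND status = 'paid'
    ORDER BY created_at DESC, id DESC LIMIT 20
""")]

# total_cents is NOT in the index, so each match needs a row lookup.
not_covered = [row[-1] for row in conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT id, total_cents FROM orders
    WHERE customer_id = 123 AND status = 'paid'
""")]
print(covered, not_covered)
```

The same experiment against MySQL (EXPLAIN, then add or drop a column from the SELECT list) is a quick way to see how wide an index actually needs to be.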

Step 7: Validate Improvements with Real Measurements

Two practical checks:

  • Run the optimized query several times and measure latency in your app (not just locally).
  • Compare query plans before/after and verify the chosen index is used.

Optionally, you can inspect runtime details with:

EXPLAIN ANALYZE
SELECT id, customer_id, status, created_at
FROM orders
WHERE customer_id = 123
  AND status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 20;

EXPLAIN ANALYZE (available since MySQL 8.0.18) shows actual timing and row counts, which is incredibly useful when the optimizer’s estimates are off.
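Plan inspection aside, the ground truth is wall-clock latency over several runs. A minimal sketch of that measuring habit in Python (sqlite3 stands in for the database here; against MySQL you would use your usual driver and run on production-sized data):

```python
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, "
             "status TEXT, created_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, 123, 'paid', '2026-04-13')",
                 [(i,) for i in range(1, 1001)])

def measure(conn, sql, params=(), runs=5):
    """Return per-run latencies in milliseconds; several runs smooth out noise."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql, params).fetchall()  # fetch fully, like the app would
        timings.append((time.perf_counter() - start) * 1000)
    return timings

timings = measure(
    conn,
    "SELECT id FROM orders WHERE customer_id = ? AND status = 'paid' "
    "ORDER BY created_at DESC, id DESC LIMIT 20",
    (123,))
print(f"median: {statistics.median(timings):.2f} ms")
```

Compare the median before and after an index change, not a single run: caches and noise make one-off timings misleading.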

Quick Checklist for Junior/Mid Devs

  • Start with the query. Don’t tune random server settings until you’ve fixed obvious query/index issues.
  • Index for patterns, not columns. Composite indexes should match WHERE then ORDER BY.
  • Avoid deep OFFSET pagination. Use seek pagination with a stable sort (created_at + id).
  • Kill N+1 at the database level. Prefer joins + aggregation over per-row subqueries.
  • Keep indexes lean. Every new index speeds reads but costs writes and storage.

Putting It Together: A Fast, Production-Friendly List Query

Final version combining the main ideas:

-- Indexes:
--   orders(customer_id, status, created_at, id)
--   order_items(order_id)
SELECT
    o.id,
    o.customer_id,
    o.status,
    o.created_at,
    COUNT(oi.id) AS items_count
FROM orders o
LEFT JOIN order_items oi ON oi.order_id = o.id
WHERE o.customer_id = 123
  AND o.status = 'paid'
  AND (o.created_at < :cursor_created_at
       OR (o.created_at = :cursor_created_at AND o.id < :cursor_id))
GROUP BY o.id, o.customer_id, o.status, o.created_at
ORDER BY o.created_at DESC, o.id DESC
LIMIT 20;

With the right indexes, this pattern is stable under growth: deep navigation stays fast, and the database does predictable work per page.

If you take only one habit from this: whenever a page is slow, run EXPLAIN, then make the index match your filter + sort. It’s the fastest way to go from “mysteriously slow” to “obviously fixed.”

