MySQL Optimization for Web Apps: EXPLAIN, Indexes, and Pagination That Doesn’t Melt

When a web app starts feeling “slow,” the culprit is often a handful of database queries that were fine against dev data but fall apart in production. The good news is that you can get big wins in MySQL with a repeatable workflow: EXPLAIN, the right indexes, and query patterns that scale.

This article is hands-on: you’ll create a small schema, reproduce common performance problems, then fix them using practical MySQL techniques junior/mid developers can apply immediately.

1) Setup: a realistic table and data patterns

Let’s assume a typical “projects / tasks” style app. Most slow queries come from filtering + sorting large tables.

CREATE TABLE users (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  email VARCHAR(255) NOT NULL UNIQUE,
  created_at DATETIME NOT NULL
) ENGINE=InnoDB;

CREATE TABLE tasks (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT UNSIGNED NOT NULL,
  status ENUM('todo','doing','done') NOT NULL,
  priority TINYINT NOT NULL,
  title VARCHAR(255) NOT NULL,
  created_at DATETIME NOT NULL,
  updated_at DATETIME NOT NULL,
  FOREIGN KEY (user_id) REFERENCES users(id)
) ENGINE=InnoDB;

Common app queries:

  • List tasks for a user filtered by status, sorted by updated_at
  • Search tasks by title prefix
  • Paginate task lists
  • Count tasks per status

2) Your first tool: EXPLAIN (read the plan before guessing)

Suppose your app runs this query on every page load:

SELECT id, title, status, updated_at
FROM tasks
WHERE user_id = 42 AND status = 'doing'
ORDER BY updated_at DESC
LIMIT 20;

Run EXPLAIN to see what MySQL will do:

EXPLAIN
SELECT id, title, status, updated_at
FROM tasks
WHERE user_id = 42 AND status = 'doing'
ORDER BY updated_at DESC
LIMIT 20;

Key fields to pay attention to in the output:

  • type: the access method. From worst to best (roughly): ALL (full table scan), index, range, ref, eq_ref, const.
  • key: which index is used (if any).
  • rows: estimated rows scanned.
  • Extra: red flags like Using filesort and Using temporary.

If you see type=ALL and a large rows estimate, MySQL is scanning too much. If you see Using filesort on a big table, sorting is happening the slow way (not necessarily “bad,” but often fixable).

3) Composite indexes: match your WHERE + ORDER BY

For the query above, you filter by user_id and status, then sort by updated_at. A single-column index on user_id is often not enough; MySQL may still scan many rows for that user, then filter/sort.

Create a composite index that supports both filtering and ordering:

CREATE INDEX idx_tasks_user_status_updated
  ON tasks (user_id, status, updated_at DESC);

(Descending index keys require MySQL 8.0+. Older versions parse DESC but silently ignore it; that is usually still fine for this pattern, because MySQL can scan an ascending index backward.)

Now rerun EXPLAIN. You want to see:

  • key=idx_tasks_user_status_updated
  • type=ref or range (not ALL)
  • Fewer estimated rows
  • Ideally no Using filesort for this pattern

Rule of thumb: a composite index should typically follow this order:

  • Columns tested with equality in WHERE first (e.g., user_id, status)
  • Then columns used for range or ordering (e.g., updated_at)

Why? MySQL can quickly narrow down to the exact “slice” of the B-Tree for the equality columns, then read rows already ordered by the next index column.
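If you want to watch this narrowing happen without standing up a MySQL server, here is a rough sketch using Python's built-in sqlite3 module, whose EXPLAIN QUERY PLAN plays a similar role to MySQL's EXPLAIN (the schema is a simplified stand-in, and plan wording differs between the two databases):

```python
import sqlite3

# In-memory stand-in for the tasks table (simplified schema).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        status TEXT NOT NULL,
        title TEXT NOT NULL,
        updated_at TEXT NOT NULL
    )
""")
con.execute(
    "CREATE INDEX idx_tasks_user_status_updated "
    "ON tasks (user_id, status, updated_at DESC)"
)

# The planner seeks straight to the (user_id, status) slice and reads it
# in updated_at order, so no separate sort step appears in the plan.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT title FROM tasks "
    "WHERE user_id = 42 AND status = 'doing' "
    "ORDER BY updated_at DESC LIMIT 20"
).fetchall()
detail = " | ".join(row[3] for row in plan)
print(detail)
```

The plan reports a SEARCH on the composite index with both equality columns, and no temp B-tree for the ORDER BY; the MySQL analog is type=ref with no Using filesort.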

4) Covering indexes: speed reads by avoiding extra table lookups

Even with the right index, MySQL might still need to “go back to the table” to fetch columns not in the index. A covering index includes all columns needed for the query, letting MySQL return results directly from the index.

For our list page, we select id, title, status, updated_at. You can extend the index to cover these:

DROP INDEX idx_tasks_user_status_updated ON tasks;

CREATE INDEX idx_tasks_list_cover
  ON tasks (user_id, status, updated_at DESC, id, title);

Whether this is worth it depends on your read/write balance. Covering indexes can greatly speed up hot reads, but they increase index size and write cost, so reserve them for endpoints that are hit constantly. Note that InnoDB secondary indexes already store the primary key, so id is covered even without listing it explicitly.
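As a quick way to see covering in action, the sqlite3 sketch below (simplified schema, same index shape as above) reports USING COVERING INDEX when every selected column lives in the index; the equivalent signal in MySQL is Using index in EXPLAIN's Extra column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        status TEXT NOT NULL,
        title TEXT NOT NULL,
        updated_at TEXT NOT NULL
    )
""")
con.execute(
    "CREATE INDEX idx_tasks_list_cover "
    "ON tasks (user_id, status, updated_at DESC, id, title)"
)

# Every selected column lives in the index, so the table itself is never read.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, title, status, updated_at FROM tasks "
    "WHERE user_id = 42 AND status = 'doing' "
    "ORDER BY updated_at DESC LIMIT 20"
).fetchall()
print(plan[0][3])
```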

5) Pagination: avoid OFFSET for deep pages (use keyset pagination)

The classic pattern:

-- page 3000, 20 per page
SELECT id, title, updated_at
FROM tasks
WHERE user_id = 42
ORDER BY updated_at DESC
LIMIT 20 OFFSET 59980;

This gets slower as OFFSET grows because MySQL still has to read (and throw away) all 59,980 skipped rows before returning the 20 you asked for.

Keyset pagination (a.k.a. “seek method”) uses the last item from the previous page:

-- first page
SELECT id, title, updated_at
FROM tasks
WHERE user_id = 42
ORDER BY updated_at DESC, id DESC
LIMIT 20;

-- next page: use the last row's updated_at and id as a cursor
SELECT id, title, updated_at
FROM tasks
WHERE user_id = 42
  AND (updated_at < '2026-03-14 10:15:00'
       OR (updated_at = '2026-03-14 10:15:00' AND id < 987654))
ORDER BY updated_at DESC, id DESC
LIMIT 20;

Note the tie-breaker on id. Without it, rows with the same updated_at can cause duplicates or gaps.
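You can verify the no-duplicates/no-gaps property yourself. This sketch (Python's built-in sqlite3 standing in for MySQL, with many rows deliberately sharing a timestamp) pages through with the keyset predicate and checks that the concatenated pages match one big ordered scan:

```python
import sqlite3

# In-memory stand-in for the tasks table; many rows share a timestamp,
# so the id tie-breaker actually matters.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, user_id INTEGER, updated_at TEXT)"
)
con.executemany(
    "INSERT INTO tasks VALUES (?, ?, ?)",
    [(i, 42, f"2026-03-{14 + i % 3:02d} 10:00:00") for i in range(1, 101)],
)

def fetch_page(cursor=None, size=20):
    """One keyset page; cursor is (updated_at, id) of the previous page's last row."""
    if cursor is None:
        sql = ("SELECT id, updated_at FROM tasks WHERE user_id = ? "
               "ORDER BY updated_at DESC, id DESC LIMIT ?")
        return con.execute(sql, (42, size)).fetchall()
    ts, last_id = cursor
    sql = ("SELECT id, updated_at FROM tasks WHERE user_id = ? "
           "AND (updated_at < ? OR (updated_at = ? AND id < ?)) "
           "ORDER BY updated_at DESC, id DESC LIMIT ?")
    return con.execute(sql, (42, ts, ts, last_id, size)).fetchall()

seen, cursor = [], None
while True:
    page = fetch_page(cursor)
    if not page:
        break
    seen.extend(page)
    cursor = (page[-1][1], page[-1][0])  # (updated_at, id) of the last row

full = con.execute(
    "SELECT id, updated_at FROM tasks WHERE user_id = 42 "
    "ORDER BY updated_at DESC, id DESC"
).fetchall()
assert seen == full  # every row exactly once: no duplicates, no gaps
```

Drop `id` from the ORDER BY and the cursor predicate, and the assertion starts failing whenever a page boundary lands inside a run of equal timestamps.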

Support this with an index that matches the filter + order:

CREATE INDEX idx_tasks_user_updated_id ON tasks (user_id, updated_at DESC, id DESC);

In your API response, return a cursor token (e.g., last updated_at and id) rather than a page number.
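A minimal sketch of such a token, assuming a JSON-over-base64 encoding (the function names here are hypothetical; a production version would typically also sign the token so clients can't tamper with it):

```python
import base64
import json
from typing import Tuple

def encode_cursor(updated_at: str, last_id: int) -> str:
    """Pack the last row's sort keys into an opaque, URL-safe token."""
    raw = json.dumps({"u": updated_at, "i": last_id}).encode()
    return base64.urlsafe_b64encode(raw).decode()

def decode_cursor(token: str) -> Tuple[str, int]:
    """Unpack a token back into the (updated_at, id) pair for the WHERE clause."""
    data = json.loads(base64.urlsafe_b64decode(token.encode()))
    return data["u"], data["i"]

token = encode_cursor("2026-03-14 10:15:00", 987654)
assert decode_cursor(token) == ("2026-03-14 10:15:00", 987654)
```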

6) Searching: don’t accidentally disable indexes

A common bug: LIKE patterns that start with a wildcard can’t use a normal B-Tree index efficiently.

-- slow on big tables:
SELECT id, title
FROM tasks
WHERE user_id = 42
  AND title LIKE '%report%';

If you can constrain the UX to prefix search, the index can help:

CREATE INDEX idx_tasks_user_title ON tasks (user_id, title);

-- prefix search can use the index:
SELECT id, title
FROM tasks
WHERE user_id = 42
  AND title LIKE 'report%';

If you truly need “contains” search, consider MySQL full-text indexes (or a search engine). For full-text:

ALTER TABLE tasks ADD FULLTEXT INDEX ft_tasks_title (title);

SELECT id, title
FROM tasks
WHERE user_id = 42
  AND MATCH(title) AGAINST('report' IN NATURAL LANGUAGE MODE);

Full-text has its own tradeoffs (tokenization, language rules, relevance). Test with your content.

7) Counting efficiently: avoid scanning the whole table

Dashboards love counts:

SELECT status, COUNT(*) AS cnt FROM tasks WHERE user_id = 42 GROUP BY status;

This can be fine with the right index. Add:

CREATE INDEX idx_tasks_user_status ON tasks (user_id, status);

Now MySQL can group/count within the indexed range for that user more efficiently.

If you need a global counter across millions of rows, consider pre-aggregating into a separate table updated by your application or via scheduled jobs (common pattern for analytics dashboards).
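A minimal sketch of that pre-aggregation pattern, using Python's built-in sqlite3 and a hypothetical task_counts table kept in sync inside the same transaction as the write (MySQL's equivalent of the upsert shown is INSERT ... ON DUPLICATE KEY UPDATE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, user_id INTEGER, status TEXT);
    -- Hypothetical pre-aggregated counts, one row per (user, status).
    CREATE TABLE task_counts (
        user_id INTEGER,
        status  TEXT,
        cnt     INTEGER NOT NULL,
        PRIMARY KEY (user_id, status)
    );
""")

def add_task(task_id, user_id, status):
    """Insert a task and bump its counter atomically, in one transaction."""
    with con:  # commits on success, rolls back on error
        con.execute("INSERT INTO tasks VALUES (?, ?, ?)", (task_id, user_id, status))
        con.execute(
            "INSERT INTO task_counts (user_id, status, cnt) VALUES (?, ?, 1) "
            "ON CONFLICT (user_id, status) DO UPDATE SET cnt = cnt + 1",
            (user_id, status),
        )

for i, st in enumerate(["todo", "todo", "doing"], start=1):
    add_task(i, 42, st)

# The dashboard now reads a couple of tiny rows instead of counting tasks.
counts = dict(con.execute(
    "SELECT status, cnt FROM task_counts WHERE user_id = 42"
).fetchall())
assert counts == {"todo": 2, "doing": 1}
```

The tradeoff: every write now touches two rows, and the counter can drift if any code path bypasses the helper, which is why many teams also run a periodic reconciliation job.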

8) Practical checklist: what to do when a query is slow

  • Capture the exact query (including parameters) from logs/APM.
  • Run EXPLAIN and note type, rows, and Extra.
  • Add/adjust indexes to match the query’s WHERE + ORDER BY.
  • Prefer keyset pagination over large OFFSET values.
  • Watch for anti-patterns:
    • Functions on indexed columns in WHERE (e.g., DATE(created_at) = ...)
    • LIKE '%...%' on large text fields
    • Sorting huge result sets without a supporting index
  • Measure: compare query time and scanned rows before/after.
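The first anti-pattern deserves a concrete before/after. In the sketch below (Python's built-in sqlite3 standing in for MySQL), wrapping the column in date() hides it from the index and forces a scan, while the equivalent half-open range lets the planner seek into the index; both return the same rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, created_at TEXT)")
con.execute("CREATE INDEX idx_tasks_created ON tasks (created_at)")
con.executemany("INSERT INTO tasks VALUES (?, ?)", [
    (1, "2026-03-13 23:59:59"),
    (2, "2026-03-14 00:00:00"),
    (3, "2026-03-14 18:30:00"),
    (4, "2026-03-15 00:00:00"),
])

# Anti-pattern: the function call on the column prevents an index seek.
wrapped = con.execute(
    "SELECT id FROM tasks WHERE date(created_at) = '2026-03-14'"
).fetchall()

# Sargable rewrite: a half-open range the index can seek into.
ranged = con.execute(
    "SELECT id FROM tasks WHERE created_at >= '2026-03-14' "
    "AND created_at < '2026-03-15'"
).fetchall()

assert wrapped == ranged == [(2,), (3,)]

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM tasks "
    "WHERE created_at >= '2026-03-14' AND created_at < '2026-03-15'"
).fetchall()
print(plan[0][3])  # the range version is a SEARCH on idx_tasks_created
```

The same rewrite applies in MySQL: replace DATE(created_at) = '2026-03-14' with a created_at range over the day's boundaries.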

9) Bonus: a tiny “before/after” you can apply today

If you only remember one thing, make it this: your index should reflect how you filter and sort.

Before:

SELECT id, title
FROM tasks
WHERE user_id = ?
ORDER BY updated_at DESC
LIMIT 20;

After: add an index aligned with the query:

CREATE INDEX idx_tasks_user_updated ON tasks (user_id, updated_at DESC);

-- same query, faster on large tables
SELECT id, title
FROM tasks
WHERE user_id = ?
ORDER BY updated_at DESC
LIMIT 20;

That one index often turns a “scan + sort” into a quick “read the top of the right slice.”

Wrap-up

MySQL optimization isn’t magic—it’s matching your queries to data access patterns MySQL can execute efficiently. Use EXPLAIN to see what’s happening, add composite indexes that match your filters and ordering, and switch from OFFSET pagination to keyset pagination when lists get deep.

If you want to take this further, the next practical step is enabling slow query logging in a non-prod environment (or using an APM), then iterating on the top 5 slowest queries with the workflow you used here.

