MySQL Optimization for Busy Web Apps: EXPLAIN, Indexes, and Query Patterns You Can Apply Today

If your web app “feels slow,” the database is often the bottleneck—especially once tables grow past a few hundred thousand rows. The good news: you can get big wins in MySQL without becoming a DBA. This hands-on guide walks through a practical workflow: measure, inspect with EXPLAIN, add the right indexes, and rewrite a few common query patterns.

Examples use MySQL 8.x and InnoDB.

1) Start with a realistic table and problem query

Let’s say you have an orders table and an order listing endpoint:

CREATE TABLE orders (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT UNSIGNED NOT NULL,
  status ENUM('pending','paid','shipped','cancelled') NOT NULL,
  total_cents INT NOT NULL,
  created_at DATETIME NOT NULL,
  notes TEXT,
  KEY idx_user_id (user_id)
) ENGINE=InnoDB;

The endpoint supports filtering by user_id, status, and sorting by newest first:

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42 AND status IN ('paid', 'shipped')
ORDER BY created_at DESC
LIMIT 50;

This looks fine… until a user has millions of orders (or your query pattern is similar in other tables). Time to inspect.

2) Use EXPLAIN (and read only the columns that matter)

Run:

EXPLAIN
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42 AND status IN ('paid', 'shipped')
ORDER BY created_at DESC
LIMIT 50;

Key things to look at:

  • type: prefer ref/range over ALL (full table scan).
  • key: which index is chosen (or NULL if none).
  • rows: estimated rows MySQL will examine (lower is better).
  • Extra: warnings like Using filesort or Using temporary.

If you only have idx_user_id (user_id), MySQL might filter by user_id but still sort a large set by created_at (watch for Using filesort and a high rows estimate).

3) Build the right composite index (match WHERE + ORDER BY)

A common rule of thumb: put equality conditions first, then the column(s) used for sorting/range. Here the query filters by user_id (equality) and status (small set), and sorts by created_at.

Add a composite index:

ALTER TABLE orders ADD KEY idx_user_status_created (user_id, status, created_at DESC);

Now run EXPLAIN again. You’re aiming for:

  • MySQL using idx_user_status_created
  • a much lower rows estimate
  • no Using filesort (or at least a far smaller sort)

One caveat: because status IN (...) is a range condition on the index’s second column, MySQL may still report Using filesort. Even then, the sort covers only that user’s matching rows rather than a huge intermediate set, which is far cheaper.

Why it helps: MySQL can walk the index in the desired order and stop after 50 rows (because of LIMIT), instead of sorting a huge intermediate result.

4) Make indexes “covering” for hot read paths

If your query selects only columns found in the index (plus the primary key), MySQL can answer it from the index alone, skipping the lookup back into the clustered index (the table rows in InnoDB). EXPLAIN reports this as Using index in the Extra column, and it can be a big win on high-traffic endpoints.

For our query we need id, status, total_cents, created_at. We already have user_id, status, created_at in the index, and InnoDB secondary indexes include the primary key (id) implicitly. We’re missing total_cents.

If this endpoint is extremely hot, consider extending the index:

ALTER TABLE orders
  DROP KEY idx_user_status_created,
  ADD KEY idx_user_status_created_cover (user_id, status, created_at DESC, total_cents);

Tradeoff: bigger indexes cost disk and slow down writes. Use covering indexes only for truly frequent queries.

5) Fix the top 3 slow query patterns

Pattern A: Leading wildcard LIKE

This is not index-friendly:

SELECT id FROM orders WHERE notes LIKE '%refund%';

Options:

  • Use FULLTEXT search for text columns.
  • Store search tokens separately (or use an external search engine) if needs grow.

ALTER TABLE orders ADD FULLTEXT KEY ft_notes (notes);

SELECT id FROM orders
WHERE MATCH(notes) AGAINST ('refund' IN NATURAL LANGUAGE MODE);

Pattern B: Functions on indexed columns

This prevents index use:

SELECT COUNT(*) FROM orders WHERE DATE(created_at) = '2026-02-23';

Rewrite to a range query:

SELECT COUNT(*) FROM orders
WHERE created_at >= '2026-02-23 00:00:00'
  AND created_at < '2026-02-24 00:00:00';

And index created_at (or include it in a composite index used by your main filters).
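If the date comes from user input, you can compute the two bounds in application code and bind them as parameters, keeping the SQL a plain index-friendly range. A minimal sketch in Node.js (the dayRange helper is hypothetical, not part of mysql2; it assumes dates are stored and queried in UTC):

```javascript
// Given a 'YYYY-MM-DD' string, return [start, end) DATETIME bounds
// so the query can use a simple range scan on created_at.
function dayRange(dateStr) {
  const start = `${dateStr} 00:00:00`;
  // Compute the next calendar day in UTC to avoid timezone drift.
  const d = new Date(`${dateStr}T00:00:00Z`);
  d.setUTCDate(d.getUTCDate() + 1);
  const end = `${d.toISOString().slice(0, 10)} 00:00:00`;
  return [start, end];
}

// Usage with a parameterized query:
// const [start, end] = dayRange("2026-02-23");
// pool.execute(
//   "SELECT COUNT(*) AS n FROM orders WHERE created_at >= ? AND created_at < ?",
//   [start, end]
// );
```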

Pattern C: OFFSET pagination on large tables

This gets slower as you paginate deeper:

SELECT id, created_at
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 50 OFFSET 50000;

Prefer “seek” pagination (a.k.a. keyset pagination). Pass the last seen created_at + id from the previous page:

-- First page
SELECT id, created_at, status, total_cents
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC, id DESC
LIMIT 50;

-- Next page (using the last row from the previous page)
SELECT id, created_at, status, total_cents
FROM orders
WHERE user_id = 42
  AND (created_at < '2026-02-23 12:34:56'
       OR (created_at = '2026-02-23 12:34:56' AND id < 123456))
ORDER BY created_at DESC, id DESC
LIMIT 50;

Create an index that matches this order:

ALTER TABLE orders ADD KEY idx_user_created_id (user_id, created_at DESC, id DESC);
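The keyset condition is easy to get wrong by hand, so it’s worth wrapping in a small query builder. A sketch (buildOrdersPage is a hypothetical helper; the table and column names match the examples above, and `cursor` is the last row of the previous page):

```javascript
// Build the SQL and params for one keyset-paginated page of a user's orders.
// `cursor` is null for the first page, or { createdAt, id } taken from the
// final row of the previous page.
function buildOrdersPage(userId, cursor, limit = 50) {
  let sql =
    "SELECT id, created_at, status, total_cents FROM orders WHERE user_id = ?";
  const params = [userId];
  if (cursor) {
    // Tie-break on id so rows sharing a created_at are never skipped or repeated.
    sql += " AND (created_at < ? OR (created_at = ? AND id < ?))";
    params.push(cursor.createdAt, cursor.createdAt, cursor.id);
  }
  sql += " ORDER BY created_at DESC, id DESC LIMIT ?";
  params.push(limit);
  return { sql, params };
}
```

The id tie-breaker matters: without it, two rows with the same created_at could straddle a page boundary and one would be skipped or duplicated.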

6) Measure before/after with ANALYZE and profiling basics

After adding indexes or rewriting queries:

  • Run ANALYZE TABLE orders; to refresh stats (useful in dev/staging).
  • Compare timings with representative data sizes (not a tiny local dataset).

In MySQL 8, you can also use EXPLAIN ANALYZE to get execution details:

EXPLAIN ANALYZE
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42 AND status IN ('paid', 'shipped')
ORDER BY created_at DESC
LIMIT 50;

This helps confirm whether MySQL is actually doing fewer reads and avoiding big sorts.

7) Turn on the slow query log (the cheapest “observability” win)

In production, you want to catch slow queries you didn’t expect. Enable the slow query log (example settings; adjust to your environment):

# my.cnf
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0.5
log_queries_not_using_indexes = 0

Then periodically review the slow log with tooling like pt-query-digest (Percona Toolkit) or your hosting provider’s query insights.

8) A small “safe query” application example (prepared statements)

Optimization isn’t only about speed; correctness and safety matter too. Use prepared statements: the server can skip re-parsing repeated queries, and parameter binding prevents SQL injection.

Example in Node.js (mysql2):

import mysql from "mysql2/promise";

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  database: process.env.DB_NAME,
  password: process.env.DB_PASS,
  connectionLimit: 10,
});

export async function listOrders(userId, statuses, limit = 50) {
  // Guard against an empty list: "IN ()" is a syntax error in MySQL.
  if (statuses.length === 0) return [];
  const placeholders = statuses.map(() => "?").join(", ");
  const sql = `
    SELECT id, status, total_cents, created_at
    FROM orders
    WHERE user_id = ? AND status IN (${placeholders})
    ORDER BY created_at DESC
    LIMIT ?
  `;
  // Pass limit as a number, not a string, for the LIMIT placeholder.
  const params = [userId, ...statuses, limit];
  const [rows] = await pool.execute(sql, params);
  return rows;
}

This query matches the composite index we added earlier, and it’s safe.

9) Quick checklist you can reuse on any endpoint

  • Get the query your endpoint actually runs (copy from logs/APM).
  • Run EXPLAIN and look for type, rows, and Extra.
  • Index for access pattern: equality filters first, then range/sort columns.
  • Avoid OFFSET deep pagination; use keyset pagination.
  • Don’t wrap indexed columns in functions; rewrite to ranges.
  • Use covering indexes only for truly hot read paths.
  • Enable slow query logging and iterate on the top offenders.

Wrap-up

If you take only one thing from this: treat performance like a loop—measure, inspect with EXPLAIN, change one thing, and measure again. Composite indexes aligned with your WHERE and ORDER BY can turn “seconds” into “milliseconds,” and keyset pagination prevents performance from degrading as your app scales.

