MySQL Optimization for Busy Apps: Find Slow Queries, Fix Indexes, and Prove It with EXPLAIN

If your app “randomly” gets slow under load, MySQL is often the bottleneck—not because MySQL is bad, but because a few queries are doing far more work than you think. This guide shows a practical workflow you can use in real projects: identify slow queries, understand why they’re slow, add the right indexes, and verify the improvement using EXPLAIN and simple timing.

All examples use MySQL 8.x syntax, but most of it applies to 5.7 as well.

1) Start with a Real Problem: Capture the Slow Queries

Before optimizing anything, you need evidence. Two common options:

  • Slow query log: best for production-like environments.
  • Performance Schema / sys schema: good for digging into aggregate patterns.

Enable the slow query log (dev/staging first):

-- Check current state
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable it (requires proper privileges)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.2;  -- log queries slower than 200ms

-- Optional: log queries that don't use indexes
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- Confirm file location
SHOW VARIABLES LIKE 'slow_query_log_file';

Once you have a few slow queries, pick one that runs frequently and impacts a key user flow (list pages, dashboards, background jobs).
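
If you just want a quick look at the slowest entries in the log file, a few lines of Python are enough. This is a minimal sketch: it reads only the `# Query_time:` header lines that the slow query log writes before each statement, and the sample log text below is fabricated for illustration.

```python
import re

def slowest_query_times(log_text, top_n=10):
    """Extract Query_time values (seconds) from slow-query-log text,
    sorted slowest first."""
    times = [float(m.group(1))
             for m in re.finditer(r"^# Query_time: ([0-9.]+)", log_text, re.M)]
    return sorted(times, reverse=True)[:top_n]

# Two fabricated log entries in the slow-log header format:
sample = (
    "# Query_time: 0.523456  Lock_time: 0.000123 Rows_sent: 50  Rows_examined: 100000\n"
    "SELECT ...;\n"
    "# Query_time: 1.200000  Lock_time: 0.000050 Rows_sent: 10  Rows_examined: 500000\n"
    "SELECT ...;\n"
)
print(slowest_query_times(sample))  # [1.2, 0.523456]
```

For anything beyond a quick look, the `mysqldumpslow` utility that ships with MySQL (or Percona's `pt-query-digest`) aggregates entries by normalized query, which is usually what you actually want.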

2) A Common Offender: Filtering + Sorting Without a Helpful Index

Imagine a typical e-commerce table:

CREATE TABLE orders (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT NOT NULL,
  status VARCHAR(20) NOT NULL,
  total_cents INT NOT NULL,
  created_at DATETIME NOT NULL,
  KEY idx_user_id (user_id)
);

Now your app runs this query constantly:

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 50;

You already have idx_user_id, so you might assume it’s fast. But filtering by status and sorting by created_at changes the game.

Use EXPLAIN to see what MySQL is doing:

EXPLAIN
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 50;

Red flags to look for in the output:

  • type = ALL (full table scan)
  • High rows estimate
  • Using filesort (sorting without an index)
  • Using temporary (extra work, often for grouping/sorting)

3) Fix It with a Composite Index (in the Right Order)

For this query, an index that supports both filtering and ordering is ideal:

ALTER TABLE orders ADD KEY idx_user_status_created (user_id, status, created_at); 

Why this order?

  • user_id and status are used in the WHERE clause (equality filters).
  • created_at is used in ORDER BY ... DESC. MySQL can traverse an index backward to satisfy descending order in many cases; in MySQL 8 you can also declare the column descending in the index (created_at DESC) if backward scans prove costly.

Re-run EXPLAIN. A good sign is type = ref (or range) and the absence of Using filesort for this query.

Pro tip: Don’t add indexes “just in case.” Each index slows down writes (INSERT/UPDATE/DELETE) and consumes memory/disk.

4) Covering Indexes: Make MySQL Avoid Extra Reads

If your query selects only columns already in the index, MySQL can often serve results directly from the index (a “covering index”), reducing I/O.

Suppose you frequently query a lightweight list:

SELECT user_id, status, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 50;

Your index (user_id, status, created_at) covers everything needed—great. In EXPLAIN, look for Using index in the Extra column.

If you also need total_cents and you’re willing to expand the index, you can include it at the end:

ALTER TABLE orders ADD KEY idx_user_status_created_total (user_id, status, created_at, total_cents); 

Only do this if it’s truly hot-path and measured. Wider indexes cost more on writes and disk.

5) Pagination That Scales: Stop Using Deep OFFSET

Classic pagination is easy, but it gets slower as the page number grows:

-- Page 2000? MySQL still has to walk and skip many rows.
SELECT id, created_at
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 50 OFFSET 99950;

Better: keyset pagination (a.k.a. “seek method”). You page using the last seen values:

-- First page
SELECT id, created_at
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC, id DESC
LIMIT 50;

-- Next page (use the last row from the previous page)
SELECT id, created_at
FROM orders
WHERE user_id = 42
  AND (created_at < '2026-02-01 10:30:00'
       OR (created_at = '2026-02-01 10:30:00' AND id < 123456))
ORDER BY created_at DESC, id DESC
LIMIT 50;

For this pattern, index it like:

ALTER TABLE orders ADD KEY idx_user_created_id (user_id, created_at, id); 

This approach stays fast even when users scroll deep.
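
If you implement this in application code, keep the cursor logic in one place so every list endpoint paginates the same way. A sketch in Python (the function name and the `%s` placeholder style are assumptions; adapt to your driver):

```python
def keyset_page_query(user_id, last_created_at=None, last_id=None, page_size=50):
    """Build a keyset-pagination query for the orders table.
    Pass the last row's (created_at, id) from the previous page,
    or nothing for the first page. Returns (sql, params)."""
    sql = "SELECT id, created_at FROM orders WHERE user_id = %s"
    params = [user_id]
    if last_created_at is not None:
        # Tuple comparison spelled out: strictly-older rows, or same
        # timestamp with a smaller id as the tiebreaker.
        sql += (" AND (created_at < %s"
                " OR (created_at = %s AND id < %s))")
        params += [last_created_at, last_created_at, last_id]
    sql += " ORDER BY created_at DESC, id DESC LIMIT %s"
    params.append(page_size)
    return sql, params

# First page, then the next page using the last row seen:
sql, params = keyset_page_query(42)
sql2, params2 = keyset_page_query(42, "2026-02-01 10:30:00", 123456)
```

Note there is no OFFSET anywhere: the cursor values do the seeking, which is exactly why deep pages stay cheap.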

6) Avoid “Functions on Columns” in WHERE Clauses

This is a sneaky performance killer:

-- Bad: wraps the column in a function, often prevents index use
SELECT COUNT(*)
FROM orders
WHERE DATE(created_at) = '2026-02-01';

Instead, use a range that preserves index usage:

-- Good: index-friendly range query
SELECT COUNT(*)
FROM orders
WHERE created_at >= '2026-02-01 00:00:00'
  AND created_at <  '2026-02-02 00:00:00';

Pair it with an index on created_at (or a composite index that begins with your primary filter columns).
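
The rewrite is mechanical, so if your app builds these predicates in code, a tiny helper beats hand-writing boundaries (a sketch; the function name is made up):

```python
from datetime import date, timedelta

def day_range(day):
    """Return half-open [start, end) datetime strings covering one day,
    suitable for an index-friendly created_at range predicate."""
    start = date.fromisoformat(day)
    end = start + timedelta(days=1)
    return f"{start} 00:00:00", f"{end} 00:00:00"

print(day_range("2026-02-01"))  # ('2026-02-01 00:00:00', '2026-02-02 00:00:00')
```

The half-open interval (>= start, < end) also sidesteps edge cases around fractional seconds that a closed BETWEEN would mishandle.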

7) Measure Improvements (Don’t Guess)

After changes, verify with both EXPLAIN and timing. In a dev/staging environment you can do:

SET profiling = 1;

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 50;

SHOW PROFILES;

MySQL’s profiling features vary by version/config (SHOW PROFILES is deprecated in favor of the Performance Schema), but the point remains: collect numbers before/after.

In production, rely on slow query log, application metrics (p95/p99 latency), and database monitoring dashboards.
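
On the application side, a small timing wrapper gives you before/after numbers without extra infrastructure. A sketch (the callable you pass is up to you; here a dummy workload stands in for executing the query through your DB driver):

```python
import time

def measure(fn, runs=20):
    """Call fn repeatedly and report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()  # e.g. execute the SELECT via your DB driver
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    p50 = samples[len(samples) // 2]
    p95 = samples[min(int(len(samples) * 0.95), len(samples) - 1)]
    return {"p50_ms": round(p50, 2), "p95_ms": round(p95, 2)}

# Dummy workload instead of a real query:
stats = measure(lambda: sum(range(10_000)))
```

Run it against a warm cache and a cold one if you can; a query that is fast only because the buffer pool already holds its pages will look very different in production.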

8) A Practical Optimization Checklist

  • Start with evidence: slow query log or monitoring to identify top offenders.
  • Use EXPLAIN: look for scans, filesorts, temp tables, and huge row estimates.
  • Create composite indexes that match the query: equality filters first, then range/order columns.
  • Prefer keyset pagination for deep scrolling and large datasets.
  • Avoid functions on indexed columns in WHERE clauses; use ranges.
  • Keep indexes intentional: every index is a write penalty and storage cost.
  • Re-test: confirm improved plans and real latency reduction.

9) One More Real-World Tip: Add the Index Safely

On large tables, adding an index can be disruptive. In MySQL 8, many index operations are “online-ish,” but the impact depends on storage engine, table size, workload, and options. Always test on staging and consider maintenance windows for big schema changes.

If you’re using a migration tool, make sure it supports large-table index creation strategies (and that your rollout plan includes rollback/monitoring).

Wrap-Up

MySQL optimization isn’t magic: it’s a repeatable loop—find slow queries, inspect execution plans, add the right indexes (not the most indexes), and verify with measurements. If you build the habit of running EXPLAIN whenever you introduce a new “list/filter/sort” feature, you’ll prevent most performance fires before they happen.

