MySQL Optimization for Web Apps: Make Queries Fast with EXPLAIN, Indexes, and Better Pagination
Slow pages often come down to a few slow queries. The good news: you can usually get huge wins in MySQL without changing your entire stack—just by measuring properly, adding the right indexes, and writing queries that let MySQL do less work.
This hands-on guide walks through a practical workflow you can use on real web apps, aimed at junior and mid-level developers: find the slow query, inspect the plan with EXPLAIN, add an index (or rewrite the query), and verify the improvement.
1) Start with Measurement: Find the Slow Queries
Before changing anything, identify the queries that actually hurt. In production, you typically use APM (Datadog/New Relic) or MySQL’s slow query log. In local/staging, you can still do a lot with built-in tools.
- Enable the slow query log (staging first if you’re unsure).
- Look for high query time and high rows examined.
- Fix the top few queries—don’t prematurely optimize everything.
-- Check whether slow query logging is enabled
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable the slow query log (requires appropriate privileges)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;  -- log queries slower than 500ms
If you can’t change server settings, still capture query timings at the app layer, and copy the slow SQL into your SQL client for analysis.
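If you are capturing timings at the app layer, a thin wrapper around your query function is often enough. A minimal sketch in Python (the `run_query` callable and the 0.5 s threshold are assumptions, not part of any particular framework or driver):

```python
import time

SLOW_THRESHOLD_S = 0.5  # mirror long_query_time; tune per app


def timed_query(run_query, sql, params=()):
    """Run a query via the supplied callable and report it if it's slow."""
    start = time.perf_counter()
    result = run_query(sql, params)
    elapsed = time.perf_counter() - start
    if elapsed >= SLOW_THRESHOLD_S:
        # In a real app this would go to your logger or APM, not stdout.
        print(f"SLOW ({elapsed:.3f}s): {sql} params={params}")
    return result, elapsed
```

Any SQL this flags is a candidate to copy into your SQL client and analyze with EXPLAIN.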
2) Read the Query Plan with EXPLAIN (and What to Look For)
EXPLAIN tells you how MySQL plans to execute your query: which index it will use, how many rows it expects to scan, and whether it’s doing expensive operations like sorting or temporary tables.
EXPLAIN
SELECT id, email
FROM users
WHERE status = 'active'
ORDER BY created_at DESC
LIMIT 50;
When you inspect the output, focus on:
- type: better is generally const, ref, or range; worse is ALL (full table scan).
- key: which index is used (if NULL, no index is used).
- rows: estimated rows scanned (smaller is usually better).
- Extra: watch for Using filesort and Using temporary.
If you see type=ALL and a huge rows estimate, you likely need an index or a query rewrite.
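As a checklist in code form, here is a small sketch that flags those warning signs in one EXPLAIN output row (the dict keys mirror EXPLAIN's column names; the 10,000-row cutoff is an arbitrary assumption, not a MySQL constant):

```python
def explain_warnings(row):
    """Return human-readable warnings for one EXPLAIN output row."""
    warnings = []
    if row.get("type") == "ALL":
        warnings.append("full table scan (type=ALL): consider an index")
    if row.get("key") is None:
        warnings.append("no index used (key=NULL)")
    if row.get("rows", 0) > 10_000:
        warnings.append(f"large scan estimate ({row['rows']} rows)")
    extra = row.get("Extra") or ""
    if "Using filesort" in extra:
        warnings.append("filesort: ORDER BY not satisfied by an index")
    if "Using temporary" in extra:
        warnings.append("temporary table created")
    return warnings
```

Feed it the EXPLAIN rows you fetch from your driver; an empty list is the goal.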
3) Add the Right Index: Match WHERE + ORDER BY
A common slow query pattern is filtering with WHERE and then sorting with ORDER BY. MySQL performs best when a single index can support both.
Example: list active users, newest first:
SELECT id, email, created_at
FROM users
WHERE status = 'active'
ORDER BY created_at DESC
LIMIT 50;
Helpful composite index (note: descending index keys require MySQL 8.0; earlier versions parse DESC but silently ignore it):
CREATE INDEX idx_users_status_created_at ON users (status, created_at DESC);
Why this works:
- status is first because it's used in WHERE.
- created_at is next because it's used for ordering.
- MySQL can scan the index in order and stop after LIMIT 50; no big sort is needed.
Rule of thumb: put the most selective equality filters first (like status, tenant_id), then range/sort columns (like created_at).
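Selectivity can be estimated roughly as distinct values divided by row count. A toy sketch of that comparison (computed here from sample lists rather than live table statistics, which is an assumption for illustration):

```python
def selectivity(values):
    """Fraction of distinct values: higher suggests a better leading index column."""
    values = list(values)
    if not values:
        return 0.0
    return len(set(values)) / len(values)


# Toy sample: status has few distinct values, tenant_id is nearly unique.
statuses = ["active", "active", "banned", "pending", "active", "active"]
tenants = [101, 102, 103, 104, 105, 101]
# tenant_id is more selective, so it would usually lead in a composite index.
```

In practice you would compare `COUNT(DISTINCT col) / COUNT(*)` on the real table, but the ordering logic is the same.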
4) Avoid “SELECT *”: Use Covering Indexes for Hot Endpoints
If a query selects only columns that are already present in an index, MySQL may be able to answer it directly from the index (“covering index”), avoiding extra lookups in the table.
Suppose your homepage shows only id, title, published_at for posts:
SELECT id, title, published_at
FROM posts
WHERE status = 'published'
ORDER BY published_at DESC
LIMIT 20;
Create an index that covers the query:
CREATE INDEX idx_posts_status_published_at_cover ON posts (status, published_at DESC, id, title);
This can reduce disk reads significantly on high-traffic pages. (On InnoDB, the primary key is implicitly appended to every secondary index, so listing id explicitly is optional.) Don't overdo it—covering indexes increase index size and write cost. Use them for your hottest read paths.
5) Fix Pagination: Replace OFFSET with Keyset Pagination
LIMIT 20 OFFSET 20000 looks innocent but can be very expensive: MySQL must still find and skip those 20,000 rows before returning 20.
Instead, use keyset pagination (a.k.a. “seek method”) based on your sort key (often id or a timestamp).
Bad (slow at high offsets):
SELECT id, title
FROM posts
WHERE status = 'published'
ORDER BY published_at DESC
LIMIT 20 OFFSET 20000;
Good (fast, stable): pass the last seen published_at (and id as a tiebreaker) from the previous page.
-- First page
SELECT id, title, published_at
FROM posts
WHERE status = 'published'
ORDER BY published_at DESC, id DESC
LIMIT 20;

-- Next page: use the last row from the previous page
SELECT id, title, published_at
FROM posts
WHERE status = 'published'
  AND (published_at < :last_published_at
       OR (published_at = :last_published_at AND id < :last_id))
ORDER BY published_at DESC, id DESC
LIMIT 20;
Add an index that matches the filter + sort:
CREATE INDEX idx_posts_status_pub_id ON posts (status, published_at DESC, id DESC);
Keyset pagination keeps performance consistent even on deep pages.
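The "last seen row" handoff is easy to get wrong, so it helps to build the next-page predicate in one place. A sketch in Python (the column names and `%s` placeholder style are assumptions matching the example above; `next_page_clause` is a hypothetical helper, not a library function):

```python
def next_page_clause(last_published_at=None, last_id=None):
    """Return (extra_where_sql, params) for the next keyset page.

    For the first page both arguments are None and no extra predicate is added.
    """
    if last_published_at is None:
        return "", ()
    clause = (
        "AND (published_at < %s "
        "OR (published_at = %s AND id < %s))"
    )
    return clause, (last_published_at, last_published_at, last_id)
```

Append the clause to the base query and pass the params along with any other bind values; the (published_at, id) pair from the last row of each page feeds the next call.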
6) Kill the N+1 Pattern with JOINs (or IN) the Right Way
A classic web slowdown: fetch 20 posts, then fetch each author in a separate query (21 queries total). Prefer a single query with a JOIN.
N+1 (avoid):
-- Query 1
SELECT id, user_id, title FROM posts WHERE status = 'published' LIMIT 20;
-- Queries 2..21
SELECT id, name FROM users WHERE id = ?;
Better: one query
SELECT p.id, p.title, u.id AS author_id, u.name AS author_name
FROM posts p
JOIN users u ON u.id = p.user_id
WHERE p.status = 'published'
ORDER BY p.published_at DESC
LIMIT 20;
Indexes that help:
- posts(status, published_at) for filtering + sorting
- posts(user_id) for the join (often already present as a foreign key index)
- users(id) is typically the primary key
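When a JOIN is awkward (for example, the posts come from a cache), two queries still beat twenty-one: collect the author IDs and fetch them with one IN query. A sketch of the placeholder-building and lookup logic in plain Python (`batch_in_query` and `index_by_id` are hypothetical helpers; no specific driver is assumed):

```python
def batch_in_query(ids):
    """Build a parameterized IN query for a deduplicated list of IDs."""
    unique_ids = sorted(set(ids))
    placeholders = ", ".join(["%s"] * len(unique_ids))
    sql = f"SELECT id, name FROM users WHERE id IN ({placeholders})"
    return sql, tuple(unique_ids)


def index_by_id(rows):
    """Map fetched (id, name) rows by id for O(1) author lookups."""
    return {row[0]: row[1] for row in rows}
```

Execute the generated SQL once, then attach `authors[post.user_id]` to each post in memory.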
7) Make Your Queries Safer and Faster with Proper Parameterization
Parameterization prevents SQL injection and helps MySQL cache execution plans more effectively in many setups. Here’s a small, working Python example using mysql-connector-python (similar patterns apply in PHP/Node/Java).
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",
    user="appuser",
    password="apppass",
    database="appdb",
)

sql = """
    SELECT id, title, published_at
    FROM posts
    WHERE status = %s
    ORDER BY published_at DESC
    LIMIT %s
"""

# Not all connector versions support cursors as context managers,
# so close explicitly in a finally block.
cur = conn.cursor(dictionary=True)
try:
    cur.execute(sql, ("published", 20))
    for row in cur.fetchall():
        print(row["id"], row["title"], row["published_at"])
finally:
    cur.close()
    conn.close()
Keep the SQL simple, indexed, and predictable. Avoid generating wildly different query shapes in loops.
8) Common Anti-Patterns (and Better Alternatives)
- Filtering on expressions: WHERE DATE(created_at) = '2026-04-01' prevents index use. Use a range instead: WHERE created_at >= '2026-04-01 00:00:00' AND created_at < '2026-04-02 00:00:00'
- Leading wildcard LIKE: WHERE name LIKE '%john%' can't use a normal index efficiently. Consider full-text search (FULLTEXT) or a search service if needed.
- Huge IN lists: WHERE id IN (...thousands...) can be slow and bloats queries. Insert IDs into a temp table (or pass a joinable set) and JOIN instead for large batches.
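For the date-expression anti-pattern, converting a calendar date into a half-open range in application code keeps the column bare and the index usable. A minimal stdlib sketch (`day_range` is a hypothetical helper name):

```python
from datetime import date, datetime, timedelta


def day_range(d):
    """Return [start, end) datetime bounds covering one calendar day."""
    start = datetime(d.year, d.month, d.day)
    return start, start + timedelta(days=1)


# Bind the two values to:
#   WHERE created_at >= %s AND created_at < %s
```

The half-open upper bound (`<` rather than `<=`) avoids edge cases around the last microsecond of the day.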
9) A Simple Optimization Checklist You Can Reuse
- Copy the slow SQL and run EXPLAIN.
- Check for type=ALL, high rows, and Using filesort / Using temporary.
- Add/adjust a composite index that matches WHERE + ORDER BY.
- Reduce selected columns (avoid SELECT * on hot endpoints).
- Replace deep OFFSET pagination with keyset pagination.
- Fix N+1 with JOIN or batch queries.
- Re-test and compare timings (before/after).
If you apply just these steps to your top 3 slow endpoints, you’ll usually see noticeable improvements—often 10×—without changing frameworks or servers. Optimize the queries your users actually hit, and MySQL will feel a lot faster with surprisingly small changes.