MySQL Optimization in Practice: Fix Slow Queries with EXPLAIN, Better Indexes, and a Few Smart Schema Tweaks
Most “MySQL is slow” problems aren’t MySQL problems—they’re query + index problems. The good news: you can usually get 5–50× speedups without adding servers or rewriting your whole app.
This hands-on guide shows a practical workflow for junior/mid developers:
- Reproduce a slow query
- Read EXPLAIN output (and know what to look for)
- Add the right indexes (including composite indexes)
- Avoid common “index killers” (functions on columns, leading wildcards, etc.)
- Use generated columns for searchable derived values
- Validate improvements with real measurements
Setup: A realistic table and a slow query
Imagine an e-commerce app with an orders table. You frequently list paid orders for a tenant (store) in a date range, sorted by newest.
```sql
CREATE TABLE orders (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  store_id BIGINT UNSIGNED NOT NULL,
  status ENUM('pending','paid','shipped','canceled') NOT NULL,
  total_cents INT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL,
  email VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
```
Here’s a common query pattern:
```sql
SELECT id, total_cents, created_at, email
FROM orders
WHERE store_id = 42
  AND status = 'paid'
  AND created_at >= '2026-01-01'
  AND created_at < '2026-02-01'
ORDER BY created_at DESC
LIMIT 50;
```
If the table has millions of rows and you only have the primary key, MySQL will often scan far more data than needed.
Step 1: Use EXPLAIN to see what MySQL is doing
Run EXPLAIN (or EXPLAIN ANALYZE, available since MySQL 8.0.18):
```sql
EXPLAIN
SELECT id, total_cents, created_at, email
FROM orders
WHERE store_id = 42
  AND status = 'paid'
  AND created_at >= '2026-01-01'
  AND created_at < '2026-02-01'
ORDER BY created_at DESC
LIMIT 50;
```
What to look for in the output:
- type: ALL usually means a full table scan (bad for big tables). Better values: ref, range, const.
- key: which index is used. If it's NULL, no index is used.
- rows: rough number of rows MySQL expects to examine. Big numbers are a red flag.
- Extra: warnings like Using filesort (sorting without an index) or Using temporary (temp table).
If you see type=ALL, key=NULL, and a huge rows estimate, you have an indexing problem.
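To make those red flags concrete, here's a small sketch of the checks above as a Python helper. It's hypothetical (not part of any MySQL client library): it just inspects one row of EXPLAIN output, as a dict-cursor row would return it, and lists the warning signs described above.

```python
# Hypothetical helper: flag warning signs in one row of EXPLAIN output
# (as returned by a dict cursor: keys like "type", "key", "rows", "Extra").

def explain_red_flags(row, table_rows=None):
    """Return a list of human-readable warnings for one EXPLAIN row."""
    flags = []
    if row.get("type") == "ALL":
        flags.append("full table scan (type=ALL)")
    if row.get("key") is None:
        flags.append("no index used (key=NULL)")
    extra = row.get("Extra") or ""
    if "Using filesort" in extra:
        flags.append("sorting without an index (Using filesort)")
    if "Using temporary" in extra:
        flags.append("temporary table (Using temporary)")
    # If we know the table size, warn when MySQL expects to examine
    # a large fraction of it.
    if table_rows and row.get("rows", 0) > 0.3 * table_rows:
        flags.append(f"examining ~{row['rows']} of {table_rows} rows")
    return flags

# Example: a row typical of an unindexed query on a large table.
bad_row = {"type": "ALL", "key": None, "rows": 4_800_000,
           "Extra": "Using where; Using filesort"}
print(explain_red_flags(bad_row, table_rows=5_000_000))
```

In practice you'd eyeball the EXPLAIN output directly; a check like this only pays off if you want to scan many queries automatically.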
Step 2: Add a composite index that matches the query
The query filters by store_id, status, and a range on created_at, then sorts by created_at DESC. A strong starting index is:
```sql
ALTER TABLE orders
  ADD INDEX idx_orders_store_status_created (store_id, status, created_at);
```
Why this order?
- Put equality filters first: store_id, status
- Put the range column next: created_at
- The ORDER BY created_at can often be satisfied by the index (reducing sorting work)
Now re-run EXPLAIN. You want to see key=idx_orders_store_status_created, type=range or ref, and a much smaller rows estimate.
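The "equality first, then range, then ORDER BY" rule of thumb can be sketched as a small function. This is illustrative Python, not a MySQL feature; suggest_index_columns is a made-up name, and it encodes one simplification worth knowing: only the first range column can be used for seeking.

```python
# Illustrative sketch (not a MySQL feature): order candidate index columns
# by the "equality first, then range, then ORDER BY" rule of thumb.

def suggest_index_columns(equality, range_cols, order_by):
    """Return a column order for a composite index: equality filters first,
    then at most one range column, then ORDER BY columns not already used."""
    cols = list(equality)
    if range_cols:
        # Only the first range column lets MySQL seek efficiently; columns
        # after a range predicate can't narrow the scan further.
        cols.append(range_cols[0])
    for col in order_by:
        if col not in cols:
            cols.append(col)
    return cols

# For our listing query:
print(suggest_index_columns(
    equality=["store_id", "status"],
    range_cols=["created_at"],
    order_by=["created_at"],
))
```

For the orders query this reproduces exactly the (store_id, status, created_at) index above, since created_at serves both the range filter and the sort.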
Step 3: Avoid “index killers” in WHERE clauses
Indexes can’t help much if MySQL can’t use them. Here are common patterns that disable efficient index usage.
- Functions on indexed columns

  Bad:

  ```sql
  WHERE DATE(created_at) = '2026-01-15'
  ```

  Good (range query):

  ```sql
  WHERE created_at >= '2026-01-15' AND created_at < '2026-01-16'
  ```

- Leading wildcard LIKE

  Bad for normal B-tree indexes:

  ```sql
  WHERE email LIKE '%@gmail.com'
  ```

  Better options include generated columns (next section), storing the domain separately, or full-text/search tooling, depending on needs.

- Mismatched types

  If store_id is numeric, don't compare it to strings:

  ```sql
  WHERE store_id = '42' -- avoid
  ```

  Use:

  ```sql
  WHERE store_id = 42
  ```
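The range rewrite for the date case is easy to do in application code: compute half-open [start, end) bounds and pass them as query parameters, so the index on created_at stays usable. A minimal sketch:

```python
# Sketch of the range rewrite: instead of DATE(created_at) = ?, compute
# half-open [start, end) bounds in the application and bind them as
# parameters, keeping an index on created_at usable.
from datetime import date, timedelta

def day_bounds(d):
    """Return (start, end) strings covering calendar day d as a half-open range."""
    start = d
    end = d + timedelta(days=1)
    return start.isoformat(), end.isoformat()

start, end = day_bounds(date(2026, 1, 15))
print(start, end)  # 2026-01-15 2026-01-16
# Use with: WHERE created_at >= %s AND created_at < %s
```

Using timedelta also handles month and year rollovers correctly, which hand-built date strings often get wrong.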
Step 4: Use a generated column to index derived values (email domain example)
Say you want “orders from gmail.com” per store. A naive query might do:
```sql
SELECT COUNT(*)
FROM orders
WHERE store_id = 42
  AND SUBSTRING_INDEX(email, '@', -1) = 'gmail.com';
```
This applies a function to email, so a normal index on email won’t help much. Instead, create a generated column for the email domain and index it.
```sql
ALTER TABLE orders
  ADD COLUMN email_domain VARCHAR(255)
    GENERATED ALWAYS AS (SUBSTRING_INDEX(email, '@', -1)) STORED,
  ADD INDEX idx_orders_store_domain (store_id, email_domain);
```
Now the query becomes index-friendly:
```sql
SELECT COUNT(*)
FROM orders
WHERE store_id = 42
  AND email_domain = 'gmail.com';
```
Tip: MySQL can index both VIRTUAL and STORED generated columns; STORED costs disk space but avoids recomputing the value on read, and is required in some contexts (for example, in a primary key).
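Before adding the column, it can help to sanity-check what values it will hold. Here's a Python mirror of SUBSTRING_INDEX(email, '@', -1) (a hypothetical helper, written from that function's documented behavior) you could run over a sample of existing emails:

```python
# A Python mirror of SUBSTRING_INDEX(email, '@', -1), useful for sanity-
# checking what the generated column will contain before you add it.

def email_domain(email):
    """Everything after the last '@', matching SUBSTRING_INDEX(..., '@', -1).
    Note: with no '@' present, SUBSTRING_INDEX returns the whole string."""
    return email.rsplit("@", 1)[-1]

print(email_domain("alice@gmail.com"))    # gmail.com
print(email_domain('"a@b"@example.org'))  # example.org (last '@' wins)
print(email_domain("no-at-sign"))         # no-at-sign
```

The last two cases are the point of the exercise: quoted local parts and malformed emails still produce *some* value, so decide whether those values are acceptable in your index before you backfill millions of rows.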
Step 5: Use covering indexes carefully (reduce table lookups)
Even with a good index, MySQL may still need to fetch rows from the table (“back to the table”) to retrieve columns not in the index. Sometimes you can speed up hot endpoints by making an index “cover” the query (include the selected columns).
For our listing query:
```sql
SELECT id, total_cents, created_at, email
FROM orders
WHERE store_id = 42
  AND status = 'paid'
  AND created_at >= '2026-01-01'
  AND created_at < '2026-02-01'
ORDER BY created_at DESC
LIMIT 50;
```
A covering index might be:
```sql
ALTER TABLE orders
  ADD INDEX idx_orders_list_cover (
    store_id, status, created_at, id, total_cents, email
  );
```

(InnoDB secondary indexes implicitly include the primary key columns, so listing id here is optional; including it just makes the covering intent explicit.)
Tradeoffs: covering indexes are bigger, slower to update, and can increase write costs. Use them only for truly hot queries, and keep them as small as practical.
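The "does this index cover that query" question is a simple set check. Here's an illustrative sketch (assumption: plain column names only, ignoring prefix indexes and expressions) of the rule InnoDB applies when it reports "Using index" in Extra:

```python
# Illustrative check (assumption: plain column names, no prefix indexes or
# expressions): an index covers a query when it contains every column the
# query selects, filters on, or sorts by.

def is_covering(index_cols, selected, filtered, ordered):
    needed = set(selected) | set(filtered) | set(ordered)
    return needed <= set(index_cols)

listing_index = ["store_id", "status", "created_at", "id", "total_cents", "email"]
narrow_index = ["store_id", "status", "created_at"]

# The wide index covers our listing query; the narrow one forces a
# lookup back into the table for total_cents and email.
print(is_covering(listing_index,
                  selected=["id", "total_cents", "created_at", "email"],
                  filtered=["store_id", "status", "created_at"],
                  ordered=["created_at"]))  # True
print(is_covering(narrow_index,
                  selected=["id", "total_cents", "created_at", "email"],
                  filtered=["store_id", "status", "created_at"],
                  ordered=["created_at"]))  # False
```

The authoritative answer is still EXPLAIN: "Using index" in the Extra column confirms MySQL is answering from the index alone.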
Step 6: Measure improvements with real timing (not vibes)
Use EXPLAIN ANALYZE when available to see actual execution time and row counts:
```sql
EXPLAIN ANALYZE
SELECT id, total_cents, created_at, email
FROM orders
WHERE store_id = 42
  AND status = 'paid'
  AND created_at >= '2026-01-01'
  AND created_at < '2026-02-01'
ORDER BY created_at DESC
LIMIT 50;
```
Also test using your app’s code path. Here’s a minimal Python example using mysql-connector-python that times a query:
```python
import time

import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",
    user="app",
    password="secret",
    database="shop",
)
cur = conn.cursor(dictionary=True)

sql = """
    SELECT id, total_cents, created_at, email
    FROM orders
    WHERE store_id = %s
      AND status = %s
      AND created_at >= %s
      AND created_at < %s
    ORDER BY created_at DESC
    LIMIT 50
"""
params = (42, "paid", "2026-01-01", "2026-02-01")

start = time.perf_counter()
cur.execute(sql, params)
rows = cur.fetchall()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Fetched {len(rows)} rows in {elapsed_ms:.2f}ms")

cur.close()
conn.close()
```
Run this before and after adding indexes. You’re looking for a meaningful drop in latency and stable performance under repeated runs.
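Single measurements are noisy (caches warm up, other queries compete). A hypothetical helper like the one below times a callable several times and reports the median, which is more stable than one run; in practice run() would wrap cur.execute(sql, params) plus cur.fetchall().

```python
# Hypothetical benchmark helper: time a callable several times and return
# the median latency in ms, which is less noisy than a single measurement.
import statistics
import time

def median_ms(run, repeats=7):
    """Call run() `repeats` times and return the median elapsed time in ms."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        run()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Example with a stand-in workload; swap in a closure that executes
# your real query against the database.
print(f"median: {median_ms(lambda: sum(range(10_000))):.2f}ms")
```

Comparing medians before and after the index change gives you a defensible number to put in the PR description.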
Step 7: A quick checklist for day-to-day MySQL tuning
- Start with the query: can you reduce rows early with better filters?
- Use EXPLAIN: confirm whether you're scanning too many rows.
- Index for your WHERE + ORDER BY: equality columns first, then range, then sorting.
- Avoid functions on filtered columns: rewrite as ranges or use generated columns.
- Be cautious with covering indexes: great for hot reads, costly for writes.
- Measure: use EXPLAIN ANALYZE and app-level timing.
Wrap-up: The “one-hour win” approach
If you take one habit from this article, make it this: every time you ship a new endpoint that reads from MySQL, run the query with EXPLAIN and ask, “How many rows will this examine in production?” Then design a composite index that matches your access pattern.
That simple loop—EXPLAIN → index → verify—solves the majority of real-world performance issues you’ll hit as a web developer.