MySQL Optimization You Can Apply Today: EXPLAIN, Indexes, and Safer Query Patterns
If your app feels “randomly slow,” MySQL is often doing exactly what you asked—just not what you meant. The fastest wins usually come from (1) indexing the right columns, (2) writing queries that can actually use those indexes, and (3) verifying behavior with EXPLAIN. This article walks through a hands-on workflow junior/mid developers can use to make queries consistently fast.
Setup: A Realistic Example Schema
Let’s pretend we’re building an e-commerce admin where you search orders and list recent ones.
```sql
CREATE TABLE customers (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  email VARCHAR(255) NOT NULL,
  created_at DATETIME NOT NULL,
  UNIQUE KEY uniq_customers_email (email)
) ENGINE=InnoDB;

CREATE TABLE orders (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  customer_id BIGINT UNSIGNED NOT NULL,
  status ENUM('pending','paid','shipped','cancelled') NOT NULL,
  total_cents INT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL,
  notes TEXT,
  CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id) REFERENCES customers(id)
) ENGINE=InnoDB;
```
Now imagine this common endpoint:
- List “paid” orders from the last 30 days
- Sort newest first
- Paginate
Step 1: Use EXPLAIN Before You Guess
Here’s a query that looks fine but can be slow at scale:
```sql
SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
  AND created_at >= NOW() - INTERVAL 30 DAY
ORDER BY created_at DESC
LIMIT 50;
```
Check how MySQL plans to run it:
```sql
EXPLAIN SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
  AND created_at >= NOW() - INTERVAL 30 DAY
ORDER BY created_at DESC
LIMIT 50;
```
What you’re looking for in the EXPLAIN output:
- `type`: better is usually `const`, `ref`, or `range`; worse is `ALL` (full table scan).
- `key`: which index it picked (if any).
- `rows`: how many rows it expects to examine.
- `Extra`: flags like `Using filesort` (often means a costly sort) or `Using temporary`.
If you see `type: ALL` and a huge `rows` estimate, you need an index (or a better query).
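For orientation, the output for the unindexed query above might look roughly like this (illustrative values, not a real run; some columns abridged):

```text
+----+-------------+--------+------+---------------+------+---------+----------+-----------------------------+
| id | select_type | table  | type | possible_keys | key  | rows    | filtered | Extra                       |
+----+-------------+--------+------+---------------+------+---------+----------+-----------------------------+
|  1 | SIMPLE      | orders | ALL  | NULL          | NULL | 1204310 |     3.32 | Using where; Using filesort |
+----+-------------+--------+------+---------------+------+---------+----------+-----------------------------+
```

`type: ALL` with over a million estimated rows, no usable key, and `Using filesort` is exactly the pattern the next step fixes.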
Step 2: Add the Right Composite Index
We filter by status and created_at, and we order by created_at. A composite index that matches the filter and sort pattern is often the best move:
```sql
ALTER TABLE orders ADD INDEX idx_orders_status_created (status, created_at);
```
Run EXPLAIN again. Ideally:
- It uses `idx_orders_status_created`
- It scans far fewer rows
- Sorting cost drops (you may no longer see `Using filesort`, depending on query shape and MySQL version)
Why this order? MySQL can use the left-most prefix of an index efficiently. Since we have equality on status and a range on created_at, (status, created_at) is a common pattern that performs well.
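To make the left-most-prefix rule concrete, here is a sketch of which predicates can take advantage of `(status, created_at)` (assuming the index added above):

```sql
-- Can use idx_orders_status_created: equality on the leading column
SELECT COUNT(*) FROM orders WHERE status = 'paid';

-- Can use it fully: equality on status, then a range on created_at
SELECT COUNT(*) FROM orders
WHERE status = 'paid'
  AND created_at >= NOW() - INTERVAL 30 DAY;

-- Cannot use it efficiently: skips the leading column
SELECT COUNT(*) FROM orders
WHERE created_at >= NOW() - INTERVAL 30 DAY;
```

If the date-only filter is also a hot path, it needs its own index with `created_at` leading.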
Step 3: Avoid “Index-Blocking” Query Patterns
Indexes don’t help if your query prevents MySQL from using them efficiently. These are common footguns:
- Functions on indexed columns:

```sql
-- Bad: wrapping the column hides it from the index
SELECT * FROM orders WHERE DATE(created_at) = '2026-03-01';
```

Rewrite to a range so the index can be used:

```sql
-- Good: range condition on the bare column
SELECT * FROM orders
WHERE created_at >= '2026-03-01 00:00:00'
  AND created_at < '2026-03-02 00:00:00';
```

- Leading-wildcard `LIKE` (can't use a normal B-tree index):

```sql
-- Slow at scale: the leading % forces a scan
SELECT * FROM customers WHERE email LIKE '%@example.com';
```

Prefer exact matches (already indexed via `UNIQUE (email)`), add a separate searchable column, or consider full-text search for content fields.

- Implicit conversions (mismatched types):

```sql
-- If customer_id is BIGINT but you pass a string inconsistently,
-- the optimizer may behave poorly
SELECT * FROM orders WHERE customer_id = '123';
```

Pass properly typed parameters from your application layer.
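If you build the "good" date range in application code, a tiny helper keeps the boundaries half-open and consistent. A sketch (the function name is mine, not from any library):

```python
from datetime import date, datetime, timedelta


def day_range(day: date) -> tuple[datetime, datetime]:
    """Return [start, end) datetimes covering one calendar day.

    Bind both values as parameters for an index-friendly condition:
        WHERE created_at >= %s AND created_at < %s
    """
    start = datetime(day.year, day.month, day.day)
    return start, start + timedelta(days=1)


start, end = day_range(date(2026, 3, 1))
# start = 2026-03-01 00:00:00, end = 2026-03-02 00:00:00
```

The half-open upper bound (`<`, not `<=`) avoids both missing rows at `23:59:59.x` and double-counting midnight.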
Step 4: Fix Pagination (Offset Gets Slower Over Time)
Offset pagination is easy but degrades as OFFSET grows because MySQL still has to walk past skipped rows:
```sql
-- Works, but gets slower on deep pages
SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
ORDER BY created_at DESC
LIMIT 50 OFFSET 5000;
```
Use keyset pagination instead. For “newest first,” paginate using the last seen created_at (and a tiebreaker like id):
```sql
-- First page
SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 50;

-- Next page (use values from the last row of the previous page)
SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
  AND (created_at < :last_created_at
       OR (created_at = :last_created_at AND id < :last_id))
ORDER BY created_at DESC, id DESC
LIMIT 50;
```
To support that ordering efficiently, extend the index:
```sql
ALTER TABLE orders
  DROP INDEX idx_orders_status_created,
  ADD INDEX idx_orders_status_created_id (status, created_at, id);
```
This keeps pagination consistently fast, even on “page 10,000.”
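Keyset pagination requires the client to echo back the last row's sort keys. One common approach is to pack them into an opaque cursor token; a minimal sketch (function names are mine, not from any framework):

```python
import base64
import json
from datetime import datetime


def encode_cursor(last_created_at: datetime, last_id: int) -> str:
    """Pack the last row's sort keys into an opaque token for the client."""
    payload = json.dumps({"c": last_created_at.isoformat(), "i": last_id})
    return base64.urlsafe_b64encode(payload.encode()).decode()


def decode_cursor(token: str) -> tuple[datetime, int]:
    """Recover (last_created_at, last_id) to bind into the keyset query."""
    data = json.loads(base64.urlsafe_b64decode(token.encode()))
    return datetime.fromisoformat(data["c"]), data["i"]
```

An opaque cursor keeps clients from constructing page URLs by hand and lets you change the sort keys later without breaking the API shape.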
Step 5: Reduce Data Reads With Covering Indexes
If your query only needs a few columns, you can sometimes design an index that “covers” the query, meaning MySQL can return results from the index without fetching full rows from the table (fewer disk/page reads).
Example: a lightweight list view needs id, customer_id, total_cents, created_at:
```sql
ALTER TABLE orders ADD INDEX idx_orders_list_cover
  (status, created_at, id, customer_id, total_cents);
```
Now your list query may become “index-only” for many cases:
```sql
SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 50;
```
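To confirm the query really is index-only, check EXPLAIN again: when the covering index is used, the `Extra` column shows `Using index`.

```sql
EXPLAIN SELECT id, customer_id, total_cents, created_at
FROM orders
WHERE status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 50;
-- Look for: Extra = Using index  (rows served from the index, no table lookups)
```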
Rule of thumb: don’t create giant covering indexes for everything. They speed reads but slow writes and consume space. Use them for hot endpoints.
Step 6: Joins: Index Both Sides and Select Less
A common admin query joins customers and orders:
```sql
SELECT o.id, o.total_cents, o.created_at, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'paid'
ORDER BY o.created_at DESC
LIMIT 50;
```
Make sure:
- `orders.customer_id` is indexed (often implied by your composite indexes, but verify)
- You aren't doing `SELECT *` (pulling the `notes` TEXT column for a list page is wasted I/O)
If you frequently filter by customer and status together, consider:
```sql
ALTER TABLE orders ADD INDEX idx_orders_customer_status_created
  (customer_id, status, created_at);
```
Index design depends on your most common filters and sort order—optimize for what the app actually does.
Step 7: Add Observability: Slow Query Log + Targeted Fixes
Don’t optimize blind. In production-like environments, enable the slow query log and set a reasonable threshold (example: 200ms–500ms) so you capture queries that users actually feel.
```sql
-- Check current settings
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable (usually done in config; shown here for learning)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.2;
```
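Once the log has collected some traffic, MySQL ships with `mysqldumpslow` to summarize it. The log path below is an assumption; check the `slow_query_log_file` variable for yours:

```
# Top 10 query patterns by total time, from the slow query log
mysqldumpslow -s t -t 10 /var/lib/mysql/host-slow.log
```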
Then take the worst offenders and apply the workflow:
- Capture the query (and real parameters if possible)
- Run `EXPLAIN`
- Add/adjust indexes or rewrite the query
- Re-check with `EXPLAIN` and measure again
Practical Example: Before/After in an Application
Here’s a minimal Python example using parameterized queries (works similarly in Node, PHP, etc.). The important part is the SQL shape and keyset pagination.
```python
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",
    user="app",
    password="secret",
    database="shop",
)


def fetch_paid_orders_page(page_size=50, last_created_at=None, last_id=None):
    """Fetch one page of paid orders using keyset pagination."""
    cur = conn.cursor(dictionary=True)
    base = """
        SELECT id, customer_id, total_cents, created_at
        FROM orders
        WHERE status = %s
    """
    params = ["paid"]
    if last_created_at is not None and last_id is not None:
        # Keyset condition: only rows strictly "before" the last seen row
        base += """
            AND (created_at < %s
                 OR (created_at = %s AND id < %s))
        """
        params.extend([last_created_at, last_created_at, last_id])
    base += """
        ORDER BY created_at DESC, id DESC
        LIMIT %s
    """
    params.append(page_size)
    cur.execute(base, params)
    rows = cur.fetchall()
    cur.close()
    return rows


first_page = fetch_paid_orders_page()
if first_page:  # guard against an empty result set
    last = first_page[-1]
    next_page = fetch_paid_orders_page(
        last_created_at=last["created_at"],
        last_id=last["id"],
    )
```
Pair this with INDEX(status, created_at, id) and the query stays fast as your dataset grows.
Checklist You Can Reuse
- Run `EXPLAIN` on slow queries. Look at `type`, `key`, `rows`, `Extra`.
- Add composite indexes that match your `WHERE` + `ORDER BY` patterns.
- Rewrite queries to avoid functions on indexed columns and leading-wildcard searches.
- Use keyset pagination for large lists (avoid large `OFFSET`).
- Consider covering indexes only for the hottest read paths.
- Enable slow query logging and iterate based on real traffic.
Once you internalize the “EXPLAIN → index/query change → re-check” loop, MySQL performance becomes a repeatable engineering task instead of a mystery.