MySQL Optimization for Web Apps: Indexes, EXPLAIN, and Fixing Slow Queries (Hands-On)

When a web app feels “randomly slow,” the culprit is often a single database query doing too much work: scanning too many rows, sorting huge result sets, or joining without proper indexes. The good news: you can usually get massive wins with a few repeatable techniques—without rewriting your whole app.

This guide walks through practical MySQL optimization for junior/mid developers: how to spot slow queries, read EXPLAIN, design effective indexes, and avoid common performance traps. All examples are copy/paste-friendly.

Setup: A Realistic Table to Optimize

Imagine an e-commerce app with an orders table:

CREATE TABLE orders (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT UNSIGNED NOT NULL,
  status ENUM('pending','paid','shipped','canceled') NOT NULL,
  total_cents INT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL,
  INDEX idx_user_id (user_id)
) ENGINE=InnoDB;
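Since this is a hands-on guide, you may want test data. One way to generate synthetic rows is a recursive CTE (MySQL 8.0+ only; the 100,000-row count and user/price ranges here are arbitrary choices for local experimentation):

```sql
-- The default CTE recursion limit is 1000, so raise it for this session
SET SESSION cte_max_recursion_depth = 100000;

-- Seed ~100k synthetic orders with random users, statuses, totals, and dates
INSERT INTO orders (user_id, status, total_cents, created_at)
WITH RECURSIVE seq (n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 100000
)
SELECT
  FLOOR(1 + RAND() * 1000),                                        -- user_id in 1..1000
  ELT(FLOOR(1 + RAND() * 4), 'pending','paid','shipped','canceled'),-- random status
  FLOOR(RAND() * 50000),                                           -- total up to $500
  NOW() - INTERVAL FLOOR(RAND() * 365) DAY                         -- spread over a year
FROM seq;
```

With a realistic row count in place, the before/after effect of each index below is much easier to see.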

And a typical query used in a dashboard:

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20;

This looks harmless, but on a large dataset it can still be slow if MySQL can’t efficiently filter and sort.

Step 1: Confirm the Problem (Slow Query Log + Quick Timing)

Before optimizing, confirm what’s slow. In production you’ll often use APM tools, but MySQL’s slow query log is a strong baseline.

Enable slow query logging (example values; adjust for your environment):

# In MySQL config (my.cnf / my.ini), then restart:
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0.5
log_queries_not_using_indexes = 1

For local testing, measure quickly:

SET profiling = 1;

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20;

SHOW PROFILES;

Note that SET profiling / SHOW PROFILES is deprecated in modern MySQL. If you're on MySQL 8.0.18+, prefer EXPLAIN ANALYZE, which runs the query and reports actual timings rather than estimates.
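A minimal sketch of EXPLAIN ANALYZE against our dashboard query (MySQL 8.0.18+; the output is a tree of operators annotated with actual times and row counts, not a tabular plan):

```sql
EXPLAIN ANALYZE
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20;
```

Because it executes the query for real, run it against a copy of production-sized data, not just an empty dev table.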

Step 2: Read the Query Plan with EXPLAIN

Run EXPLAIN to see how MySQL executes your query:

EXPLAIN
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20;

Key fields to focus on:

  • type: Access method. In general: ALL (bad, full scan) → range → ref → const (good).
  • possible_keys vs key: What indexes could be used vs what was actually used.
  • rows: Estimated rows scanned (lower is usually better).
  • Extra: Look for red flags like Using filesort or Using temporary.

If your plan shows a large rows estimate or Using filesort, you likely need a better index (or a different query shape).

Step 3: Build the Right Composite Index (Match WHERE + ORDER BY)

Our query filters by user_id and status, then sorts by created_at. A single-column index on user_id often isn’t enough, because MySQL still has to filter by status and then sort by created_at.

Create a composite index that matches the filtering and sorting pattern (note: DESC index key parts require MySQL 8.0+; older versions parse but ignore the DESC):

CREATE INDEX idx_orders_user_status_created ON orders (user_id, status, created_at DESC);

Now rerun EXPLAIN. Ideally you’ll see:

  • key = idx_orders_user_status_created
  • much smaller rows
  • no Using filesort (or at least less work)

Rule of thumb: Put equality filters first (user_id, status), then the column used for ordering/ranges (created_at).
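The reason order matters: MySQL can use an index's columns left to right, and stops narrowing the scan at the first range condition. A quick sketch of a good vs. a poor ordering for our query (index names are illustrative):

```sql
-- Good: equality columns first, then the sort/range column.
-- user_id = 42 AND status = 'paid' pinpoints one index slice,
-- already ordered by created_at.
CREATE INDEX idx_good ON orders (user_id, status, created_at);

-- Poor for this query: the sort/range column leads, so the
-- equality filters on user_id and status can't narrow the scan;
-- MySQL must walk a much wider swath of the index.
CREATE INDEX idx_bad ON orders (created_at, user_id, status);
```

The same principle explains why a range filter (e.g. created_at BETWEEN ...) should come after all equality columns: any column placed behind a range condition no longer narrows the lookup.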

Step 4: Use Covering Indexes to Avoid Extra Table Reads

If your query selects only columns that exist in the index, MySQL can satisfy it using the index alone (an “index-only” lookup). This avoids extra reads from the table’s clustered data pages, which can be a big win.

Our query selects id, status, total_cents, created_at. The composite index includes user_id, status, created_at—but not total_cents. If the dashboard truly needs total_cents, you can extend the index:

DROP INDEX idx_orders_user_status_created ON orders;

CREATE INDEX idx_orders_cover
ON orders (user_id, status, created_at DESC, total_cents, id);

In EXPLAIN, look for Extra containing Using index, which often indicates a covering index for that part of the query.

Trade-off: Bigger indexes speed up reads but slow down writes (INSERT/UPDATE) and consume more disk. Don’t “cover everything”; cover the queries that matter.
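To keep an eye on that trade-off, you can check approximate index sizes. One way (assuming you have read access to the mysql schema; stat_value for stat_name = 'size' is a page count, so multiply by the page size):

```sql
SELECT index_name,
       ROUND(stat_value * @@innodb_page_size / 1024 / 1024, 1) AS size_mb
FROM mysql.innodb_index_stats
WHERE database_name = 'shop'    -- replace with your schema name
  AND table_name = 'orders'
  AND stat_name = 'size';
```

If a covering index rivals the table itself in size, weigh whether the read path it serves justifies the write and storage cost.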

Step 5: Fix the #1 Pagination Trap (OFFSET Gets Slower)

Naive pagination like this gets slower as page numbers grow:

SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20 OFFSET 20000;

MySQL still has to find and skip 20,000 rows before returning 20. Instead, use “seek pagination” (a.k.a. keyset pagination):

-- First page (no cursor)
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Next page: pass the last row's (created_at, id) as a cursor
SELECT id, status, total_cents, created_at
FROM orders
WHERE user_id = 42
  AND status = 'paid'
  AND (created_at, id) < ('2026-03-26 10:15:00', 987654)
ORDER BY created_at DESC, id DESC
LIMIT 20;

This remains fast because it uses the index to jump directly to the next slice of data.

To support the tie-breaker sort, adjust the index order:

CREATE INDEX idx_orders_seek ON orders (user_id, status, created_at DESC, id DESC);

Step 6: Don’t Accidentally Disable Indexes

Some patterns prevent index usage or reduce effectiveness:

  • Functions on indexed columns:

    -- Bad: wraps column, often blocks index usage
    WHERE DATE(created_at) = '2026-03-26'
    -- Better: range query uses index
    WHERE created_at >= '2026-03-26 00:00:00'
    AND created_at < '2026-03-27 00:00:00'
  • Leading wildcard LIKE:

    -- Bad: can't use normal B-tree index efficiently
    WHERE email LIKE '%@example.com'

    Consider storing a derived column (e.g., domain) or using full-text/search tools depending on requirements.

  • Mismatched types (e.g., comparing string to integer) can force conversions. Keep column types consistent with inputs.
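A sketch of the type-mismatch trap (illustrative only; our orders table correctly uses BIGINT for user_id, so this applies to schemas where an ID ended up stored as a string):

```sql
-- Suppose user_id were VARCHAR instead of BIGINT.
-- Comparing it to a number forces a per-row string-to-number cast,
-- which prevents MySQL from using the index:
--   WHERE user_id = 42        -- implicit cast, likely full scan

-- Passing a value of the matching type keeps the index usable:
--   WHERE user_id = '42'      -- string vs. string, index-friendly
```

The fix is usually at the schema or application layer: store IDs as integers and bind parameters with the column's actual type.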

Step 7: Make Joins Fast with the Right Indexes

Suppose you join orders to order_items:

CREATE TABLE order_items (
  id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
  order_id BIGINT UNSIGNED NOT NULL,
  sku VARCHAR(64) NOT NULL,
  qty INT UNSIGNED NOT NULL,
  INDEX idx_order_id (order_id)
) ENGINE=InnoDB;

SELECT o.id, o.created_at, SUM(oi.qty) AS items
FROM orders o
JOIN order_items oi ON oi.order_id = o.id
WHERE o.user_id = 42
  AND o.status = 'paid'
GROUP BY o.id, o.created_at
ORDER BY o.created_at DESC
LIMIT 20;

Make sure:

  • order_items.order_id is indexed (it is).
  • orders has an index that matches WHERE + ORDER BY (like (user_id, status, created_at)).

If you see huge row estimates or Using temporary for grouping, consider whether you can aggregate differently (e.g., store a cached item count on orders and update it transactionally if your app needs this frequently).
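A sketch of that cached-count idea, assuming a new item_count column on orders (hypothetical; not part of the schema above) and that the item write and the counter update share one transaction:

```sql
-- One-time schema change (hypothetical column)
ALTER TABLE orders
  ADD COLUMN item_count INT UNSIGNED NOT NULL DEFAULT 0;

-- When adding an item, keep the counter in sync atomically
START TRANSACTION;
INSERT INTO order_items (order_id, sku, qty)
VALUES (1001, 'SKU-WIDGET', 2);
UPDATE orders
SET item_count = item_count + 2
WHERE id = 1001;
COMMIT;
```

The dashboard query can then read item_count directly from orders, skipping the JOIN and GROUP BY entirely, at the cost of slightly more work on every write.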

Practical Example: Running EXPLAIN in a Node.js App

If you’re debugging locally, it can help to log query plans from code. Here’s a minimal Node.js example using mysql2:

import mysql from "mysql2/promise";

const pool = mysql.createPool({
  host: "127.0.0.1",
  user: "root",
  password: "password",
  database: "shop",
  connectionLimit: 10,
});

async function explainOrders(userId) {
  const sql = `
    EXPLAIN
    SELECT id, status, total_cents, created_at
    FROM orders
    WHERE user_id = ? AND status = 'paid'
    ORDER BY created_at DESC, id DESC
    LIMIT 20
  `;
  const [rows] = await pool.query(sql, [userId]);
  console.table(rows);
}

await explainOrders(42);
await pool.end();

This is a simple way to compare “before index” vs “after index” and validate improvements.

A Quick Optimization Checklist

  • Start with evidence: slow query log, APM traces, or repeated timing tests.
  • Run EXPLAIN (or EXPLAIN ANALYZE in MySQL 8+) and check rows, key, and Extra.
  • Create composite indexes that match WHERE (equalities first) and then ORDER BY / ranges.
  • Consider covering indexes for hot read paths—but don’t over-index.
  • Avoid OFFSET for deep pagination; use keyset pagination.
  • Don’t wrap indexed columns in functions in WHERE; use range conditions.
  • Index join keys on both sides when appropriate (FK columns almost always need indexes).

Wrap-Up

MySQL optimization isn’t magic—it’s about making your most important queries do less work. If you learn to read EXPLAIN and build indexes that reflect how your app actually queries data, you’ll eliminate most “mystery slowness” issues. Start with one slow query, measure, change one thing, and measure again. That feedback loop is what turns performance tuning into a reliable skill.

