MySQL Optimization for Web Developers: Make Slow Queries Fast with Indexes, EXPLAIN, and Safer Pagination

“My page loads are fine locally, but production is slow.” Very often, the culprit is a MySQL query that works on small datasets but falls apart as tables grow. The good news: you don’t need black magic to fix it. With a few practical habits—using the right indexes, reading EXPLAIN, avoiding expensive patterns, and paginating correctly—you can get big wins quickly.

This hands-on guide shows a realistic workflow and working examples you can copy into your own project.

1) Start with the “shape” of your query (and measure it)

Before changing anything, capture the query and measure it. In MySQL 8+, you can use EXPLAIN ANALYZE to see actual execution timing (not just the plan).

-- Your slow query (example)
SELECT id, user_id, status, created_at
FROM orders
WHERE user_id = 42 AND status = 'PAID'
ORDER BY created_at DESC
LIMIT 20;

-- See the plan
EXPLAIN ANALYZE
SELECT id, user_id, status, created_at
FROM orders
WHERE user_id = 42 AND status = 'PAID'
ORDER BY created_at DESC
LIMIT 20;

What you’re looking for:

  • rows (the “actual rows” counts in ANALYZE output) should be small for fast queries.
  • In plain EXPLAIN output, type should ideally be const, ref, or range — not ALL, which means a full table scan.
  • Using filesort in the Extra column means MySQL had to sort the results itself (sometimes OK, often slow at scale).

2) Build the right indexes (single-column vs composite)

The #1 optimization most web apps need is indexing. But not “index everything”—index for the queries you actually run.

Rules of thumb that work well:

  • Index columns used in WHERE, JOIN, and ORDER BY.
  • If your query filters by multiple columns, you often want a composite index (multi-column).
  • Index order matters: put the most selective equality filters first, then range/sort columns.

For the query above, we filter by user_id and status, then sort by created_at. A good composite index:

CREATE INDEX idx_orders_user_status_created ON orders (user_id, status, created_at DESC); 

Now rerun EXPLAIN ANALYZE. Ideally, MySQL uses the new index to filter and return rows already ordered—no big sort step.

Common junior mistake: Creating three separate indexes:

-- Often not as good as one composite index for this query:
CREATE INDEX idx_orders_user ON orders (user_id);
CREATE INDEX idx_orders_status ON orders (status);
CREATE INDEX idx_orders_created ON orders (created_at);

MySQL can sometimes use index merge, but it’s usually slower than a well-designed composite index for a hot query path.

3) Fix “LIKE %something%” search with generated columns (practical approach)

Search queries can destroy performance, especially with LIKE '%term%'. A common compromise is “prefix search” (LIKE 'term%'), which can use an index.

If you want case-insensitive prefix search on emails, normalize into a generated column:

ALTER TABLE users
  ADD COLUMN email_lower VARCHAR(255) GENERATED ALWAYS AS (LOWER(email)) STORED,
  ADD INDEX idx_users_email_lower (email_lower);

-- Now use:
SELECT id, email FROM users WHERE email_lower LIKE 'alex%';

This keeps queries fast without adding application-side complexity.
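One caveat when the prefix comes from user input: % and _ are LIKE wildcards, so a search for "50%_off" would match far more than intended. Escape them before building the pattern. A minimal sketch (escapeLike is a hypothetical helper, not something mysql2 provides):

```javascript
// Escape LIKE wildcards so user input is treated literally in a prefix search.
// Backslash is escaped first, then the two wildcard characters.
function escapeLike(term) {
  return term
    .replace(/\\/g, "\\\\")
    .replace(/%/g, "\\%")
    .replace(/_/g, "\\_");
}

// Usage with a parameterized prefix search:
// const prefix = escapeLike(userInput).toLowerCase() + "%";
// await pool.query(
//   "SELECT id, email FROM users WHERE email_lower LIKE ?",
//   [prefix]
// );
```

This assumes the default MySQL behavior where backslash is the LIKE escape character (i.e., NO_BACKSLASH_ESCAPES is off).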

4) Stop using SELECT * on hot paths

SELECT * isn’t just style—it can force extra I/O and block certain optimizations. Return only what the endpoint needs, especially in list endpoints.

-- Instead of SELECT * ...
SELECT id, title, price
FROM products
WHERE is_active = 1
ORDER BY updated_at DESC
LIMIT 50;

Bonus: smaller network payloads and less memory usage in your app.

5) Pagination that scales: avoid deep OFFSET, use keyset pagination

Classic pagination uses LIMIT ... OFFSET .... It’s fine for small pages, but deep offsets get expensive because MySQL must read and discard every skipped row before returning yours.

-- Page 5000 (bad at scale)
SELECT id, user_id, created_at
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 20 OFFSET 99980;

Keyset pagination (aka “seek method”) scales better. You use the last item from the previous page as a cursor.

First page:

SELECT id, user_id, created_at
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC, id DESC
LIMIT 20;

Next page (pass the last row’s created_at and id):

-- cursorCreatedAt = '2026-02-01 10:15:00'
-- cursorId = 12345
SELECT id, user_id, created_at
FROM orders
WHERE user_id = 42
  AND (created_at < '2026-02-01 10:15:00'
       OR (created_at = '2026-02-01 10:15:00' AND id < 12345))
ORDER BY created_at DESC, id DESC
LIMIT 20;

Key details:

  • Order by a stable, unique tie-breaker (id) to prevent duplicates or missing rows.
  • Add an index matching your order and filter pattern.

CREATE INDEX idx_orders_user_created_id ON orders (user_id, created_at DESC, id DESC);
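You can see why the cursor beats a deep offset without touching a database. This small in-memory simulation (illustrative only) pages through ids sorted newest-first while a new row arrives between requests: the OFFSET page repeats an item, the keyset page does not.

```javascript
// Page through ids sorted descending, with an insert between page 1 and page 2.
const pageSize = 2;
let rows = [50, 40, 30, 20, 10]; // ids, newest first

// OFFSET pagination: page 2 = skip the first pageSize rows.
const offsetPage1 = rows.slice(0, pageSize);            // [50, 40]
rows = [60, ...rows];                                   // a new row is inserted
const offsetPage2 = rows.slice(pageSize, pageSize * 2); // [40, 30] — 40 repeats!

// Keyset pagination: page 2 = rows strictly "older" than the last one seen.
const cursor = offsetPage1[offsetPage1.length - 1];     // 40
const keysetPage2 = rows
  .filter((id) => id < cursor)
  .slice(0, pageSize);                                  // [30, 20] — no repeat
```

The same drift happens in reverse (missing rows) when rows are deleted between requests, which is why the cursor approach is the safer default.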

6) Make joins cheaper: index foreign keys and join columns

If you join tables, both sides need helpful indexes. For example: list orders with user email.

SELECT o.id, o.total_cents, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'PAID'
ORDER BY o.created_at DESC
LIMIT 50;

Indexes to consider:

  • users.id is usually already a primary key (indexed).
  • orders.user_id should be indexed (often via a composite index).
  • Filtering and sorting columns should appear in an index that matches the query.

CREATE INDEX idx_orders_status_created_user ON orders (status, created_at DESC, user_id);

Then EXPLAIN again. A good plan typically shows MySQL using the index to find the top 50 paid orders quickly, then joining users by primary key.

7) A practical “debugging checklist” for slow endpoints

When a list endpoint is slow, run through this checklist:

  • Is the query scanning too many rows? Use EXPLAIN ANALYZE, look at rows examined.
  • Does it have a composite index that matches WHERE + ORDER? Build one.
  • Is it sorting huge result sets? Get ordering from the index when possible.
  • Is pagination using deep OFFSET? Switch to keyset pagination.
  • Is the query pulling unnecessary columns? Replace SELECT * with an explicit column list.
  • Are joins missing indexes? Index the join keys and filter columns.

8) Working example: keyset pagination in a minimal API endpoint

Below is a small Node.js (Express) example using mysql2 with keyset pagination. You can adapt the same pattern in any language.

// npm i express mysql2
import express from "express";
import mysql from "mysql2/promise";

const app = express();

const pool = mysql.createPool({
  host: "localhost",
  user: "root",
  password: "password",
  database: "shop",
  connectionLimit: 10,
});

app.get("/api/orders", async (req, res) => {
  const userId = Number(req.query.userId);
  if (!Number.isFinite(userId)) {
    return res.status(400).json({ error: "userId is required" });
  }

  // Cursor params (optional)
  const cursorCreatedAt = req.query.cursorCreatedAt; // ISO string
  const cursorId = req.query.cursorId ? Number(req.query.cursorId) : null;
  const pageSize = Math.min(Number(req.query.limit) || 20, 100);

  let sql = `
    SELECT id, user_id, status, created_at, total_cents
    FROM orders
    WHERE user_id = ?
  `;
  const params = [userId];

  if (cursorCreatedAt && Number.isFinite(cursorId)) {
    sql += ` AND (created_at < ? OR (created_at = ? AND id < ?)) `;
    params.push(cursorCreatedAt, cursorCreatedAt, cursorId);
  }

  sql += ` ORDER BY created_at DESC, id DESC LIMIT ? `;
  params.push(pageSize);

  const [rows] = await pool.query(sql, params);

  // Next cursor = last row on this page
  const last = rows.length ? rows[rows.length - 1] : null;
  res.json({
    data: rows,
    nextCursor: last
      ? { cursorCreatedAt: last.created_at, cursorId: last.id }
      : null,
  });
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));

This endpoint remains fast even when the orders table grows large—assuming you created the matching index:

CREATE INDEX idx_orders_user_created_id ON orders (user_id, created_at DESC, id DESC); 
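In a real API you may prefer handing clients a single opaque cursor instead of the raw created_at/id pair. One common pattern (an assumption on my part, not something Express or mysql2 provides) is to pack both values into a base64url-encoded JSON string:

```javascript
// Pack the (created_at, id) pair into one opaque cursor string.
function encodeCursor(createdAt, id) {
  return Buffer.from(JSON.stringify({ c: createdAt, i: id })).toString("base64url");
}

// Unpack a cursor back into the values the SQL predicate needs.
function decodeCursor(cursor) {
  const { c, i } = JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
  return { cursorCreatedAt: c, cursorId: i };
}
```

Opaque cursors keep the pagination scheme an implementation detail, so you can later change the sort key without breaking clients that stored old URLs. (Buffer's "base64url" encoding requires Node.js 15.7+.)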

9) Final tips you can apply today

  • Index intentionally: Start from your slowest queries, not from your schema.
  • Use EXPLAIN ANALYZE as your feedback loop: change one thing, re-check the plan.
  • Prefer keyset pagination for large datasets: it’s one of the biggest “free wins” for web apps.
  • Keep queries narrow: fewer columns, fewer rows, fewer joins on hot endpoints.

If you take just one habit from this article, make it this: every time you add a new list endpoint, write the query, run EXPLAIN ANALYZE, and add the index that matches how the endpoint filters and sorts. That single workflow prevents most production slowdowns before they happen.

