WordPress request latency — three causes, three drilldowns, one root cause

1. Problem

Your WordPress site is slow. Not down — slow. The homepage takes 4.2 seconds when it used to take 800ms, the WooCommerce cart page is timing out at 30s every fifth request, and customers are emailing the support inbox saying "is it just me?" New Relic isn't installed, your hosting dashboard shows green, and top on the server looks fine — load average 1.4, no swap, MySQL idle.

If you Google "wordpress slow page debug step by step" you get a wall of CDN/caching listicles that miss the real diagnostic question: which of the three causes owns this latency? Because there are only three. Every WordPress slow-request incident reduces to one of: a slow database query, a misbehaving hook (plugin or theme code burning CPU on every request), or memory pressure forcing PHP into thrashing GC cycles. The skill is not knowing the causes — it's knowing which one this is, in under five minutes, without restarting anything.

This is the WordPress request-latency root-cause triage flow. It surfaces as a wp_request_duration_ms spike with route-group drilldowns, and the trick is following the drill chain in order: total duration → DB time → hook time → peak memory. Whichever bucket owns the spike owns the bug.

2. Impact

Slow WordPress requests don't trip uptime monitors — they bleed conversion. Industry field studies have long pegged the cost at roughly 7% of e-commerce conversions for every additional second of load time. A WooCommerce store doing $300k/month at a 2.4% baseline conversion that slips to 1.9% during a hook regression loses about $62k/month ($300k × (2.4 − 1.9) / 2.4 ≈ $62.5k), and the team usually doesn't notice until the monthly Stripe report lands three weeks later.

The operational cost is worse for paid traffic. If your /checkout/ route takes 8s in p95 because WC_Cart::calculate_totals is calling a pre_get_posts hook that fires a synchronous remote API call, every Google Ads click that lands during business hours gets a degraded experience — and Quality Score adjustments lag 24–72 hours behind the actual user pain, so you keep paying full CPC for a broken funnel. We've seen sites burn $4k of ad spend in a weekend through a slow checkout that the team only noticed Monday morning.

The trust cost compounds. WordPress doesn't tell users their request was slow — the page just renders late, the cart abandons silently, and the customer never returns. wp_slow_requests_total quietly ticks up, wp_state_changes_total shows the plugin update from Friday at 16:47 UTC, and on Monday someone asks "did anything change?" while staring at a Grafana panel that wasn't watching the right metric.

3. Why It’s Hard to Spot

WordPress is a request-shaped workhorse: every page load runs wp-settings.php → autoloaded options → init → parse_request → template hierarchy → all hooks → render. Latency can hide in any one of those phases, and the user-facing symptom is identical for all of them — a slow blank tab. The wp_footer debug bar isn't on in production, Query Monitor is admin-only, and WP-CLI's wp profile is a per-request snapshot that catches one user's pain at a time.

Standard tools miss this for three compounding reasons. First, top and htop see aggregate CPU — a single PHP-FPM worker burning 600ms inside a hook looks identical to one rendering a healthy 200ms page if the request rate is even slightly variable. Second, MySQL's slow query log defaults to long_query_time = 10 on most managed hosts, so an 800ms query flies under the radar no matter how many times per page it runs. Third, hosting dashboards like Kinsta, WP Engine, and SiteGround show "PHP workers in use" but don't break down what those workers are spending time on — DB, hooks, or memory GC.

The kicker: a slow site usually has all three failure modes happening at low intensity simultaneously. A 20ms hook overhead, plus a 50ms DB query, plus 80MB of peak memory each look fine in isolation. Stacked on a checkout page with 14 active plugins, you get 2.5s of unattributed latency that nobody owns. This is why the drilldown chain matters — you need to see which bucket grew, not just that total time grew.

4. Cause

WordPress emits a wp_request_duration_ms signal at the end of every request — the wall-clock time from muplugins_loaded to shutdown. The Logystera WP plugin labels this metric with route_group (e.g. frontend, admin, rest, wp-cron, checkout) so you can isolate which surface is degrading. In a healthy state, frontend p95 sits at 300–600ms, admin at 800–1500ms, REST at 100–400ms.

When p95 climbs, the time has to come from somewhere. The plugin emits three companion signals, each measuring one bucket:

  • wp_db_query_time_ms — total DB wall-time per request, summed across every $wpdb->query() call. Measured by instrumenting each query as it executes — the same per-query timing data that SAVEQUERIES exposes.
  • wp_hook_timing_total_ms — total time spent in do_action and apply_filters callbacks, broken down by hook name. The plugin wraps WP_Hook::apply_filters to time each callback.
  • wp_request_peak_memory_mb — memory_get_peak_usage(true) at request end. Spikes correlate with PHP's garbage collector hitting its root-buffer threshold (about 10,000 roots by default) and running full sweeps that pause execution.

The arithmetic is simple: wp_request_duration_ms ≈ wp_db_query_time_ms + wp_hook_timing_total_ms + render + memory GC overhead. When total grows, exactly one of the three sub-signals grows with it. That sub-signal names the cause. The drilldown chain is wired from wp_request_duration_ms → wp_db_query_time_ms → wp_hook_timing_total_ms → wp_request_peak_memory_mb for this reason.
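None of this requires exotic tooling to reproduce by hand. Below is a minimal sketch of the same measurement as an mu-plugin. It assumes SAVEQUERIES is enabled in wp-config.php so that $wpdb keeps per-query timings, and it logs in the same line format as the examples in §5.1. The metric names mirror Logystera's, but the code is illustrative, not the plugin's actual implementation (per-hook timing is omitted; that requires wrapping WP_Hook::apply_filters as described above).

<?php
// mu-plugin sketch: emit the request's timing buckets at shutdown.
// Assumes define('SAVEQUERIES', true) so $wpdb->queries holds
// [sql, elapsed_seconds, caller] entries for every query.
register_shutdown_function( function () {
    global $wpdb, $timestart; // $timestart is set by timer_start() in wp-settings.php

    $total_ms = ( microtime( true ) - $timestart ) * 1000;

    $db_ms = 0;
    if ( defined( 'SAVEQUERIES' ) && SAVEQUERIES && ! empty( $wpdb->queries ) ) {
        foreach ( $wpdb->queries as $q ) {
            $db_ms += $q[1] * 1000; // $q[1] is the query's wall time in seconds
        }
    }

    $peak_mb     = memory_get_peak_usage( true ) / 1048576;
    $route_group = is_admin() ? 'admin' : 'frontend'; // crude two-way grouping

    error_log( sprintf( 'wp_request_duration_ms route_group=%s value=%d', $route_group, $total_ms ) );
    error_log( sprintf( 'wp_db_query_time_ms route_group=%s value=%d', $route_group, $db_ms ) );
    error_log( sprintf( 'wp_request_peak_memory_mb route_group=%s value=%d', $route_group, $peak_mb ) );
} );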

5. Solution

5.1 Diagnose (logs first)

The triage flow is three drilldowns, in order. Don't skip ahead — each step rules out one bucket before the next.

1. Confirm the spike and isolate the route group.

# Tail the WP plugin's debug stream for high-latency requests
tail -f /var/log/php-fpm/error.log | grep "wp_request_duration_ms"
# Or via WP-CLI on the host:
wp eval 'echo get_option("logystera_recent_slow_requests");' --skip-plugins=woocommerce

You're looking for a line like:

[2026-04-26 14:47:03] wp_request_duration_ms route_group=checkout value=4218 entity_id=42

That route_group=checkout is the first cut. If only checkout is slow but frontend is healthy, the cause is plugin code that runs in checkout context (cart hooks, payment gateway code, tax calc) — not a global issue. Note the timestamp: 14:47. You'll correlate against deploys in step 4.

2. Drilldown 1 — DB time.

# Compare wp_db_query_time_ms against total duration
grep "wp_db_query_time_ms" /var/log/php-fpm/error.log | tail -n 50

# If suspicious, enable Query Monitor's DB log temporarily and check slow queries:
mysql -e "SHOW FULL PROCESSLIST;" | grep -v Sleep
# mysql.slow_log is populated only when log_output=TABLE; otherwise read the
# file named by slow_query_log_file:
mysql -e "SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 20;"

If wp_db_query_time_ms is >40% of wp_request_duration_ms, the bucket is database. This signal traces back to wp_db_query_time_ms. Expect to find an unindexed wp_postmeta join, a WP_Query with posts_per_page=-1, or a plugin doing get_posts() inside a the_content filter (catastrophic — runs on every post in the loop).
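The the_content offender is worth recognizing on sight. A hypothetical example of the pattern, plus the cheap fix of computing the result once per request (render_related_list() is a made-up helper):

<?php
// Anti-pattern (hypothetical plugin code): one fresh query per rendered post.
add_filter( 'the_content', function ( $content ) {
    $related = get_posts( [ 'numberposts' => 5, 'category' => 7 ] ); // fires for EVERY post in the loop
    return $content . render_related_list( $related );
} );

// Cheaper: run the query once, reuse it for the rest of the request.
add_filter( 'the_content', function ( $content ) {
    static $related = null;
    if ( null === $related ) {
        $related = get_posts( [ 'numberposts' => 5, 'category' => 7 ] );
    }
    return $content . render_related_list( $related );
} );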

3. Drilldown 2 — hook time.

# If DB is normal, check hook timing
grep "wp_hook_timing_total_ms" /var/log/php-fpm/error.log | tail -n 50

# Per-hook breakdown — the plugin emits sub-signals with hook= label.
# Sort on the value= field (same line format as step 1):
grep "wp_hook_timing_per_hook" /var/log/php-fpm/error.log \
  | awk -F'value=' '{print $2+0, $0}' | sort -n | tail -n 20

If wp_hook_timing_total_ms is the biggest contributor (and wp_db_query_time_ms is healthy), you're in hook overhead territory. The per-hook breakdown names the offender — init, wp_loaded, template_redirect, woocommerce_checkout_process are the usual suspects. A 600ms init hook means a plugin is calling wp_remote_get() synchronously on every page.
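If you want to confirm it in the plugin's source, the shape you are grepping for looks like this (a hypothetical license-check plugin; the URL and option name are made up):

<?php
// The anatomy of a 600ms init hook: synchronous HTTP on every request.
add_action( 'init', function () {
    // wp_remote_get() blocks with a default timeout of 5 seconds,
    // so a slow licensing server stalls every single page load.
    $resp = wp_remote_get( 'https://licensing.example.com/check?key=abc123' );
    update_option( 'myplugin_license_ok', ! is_wp_error( $resp ) );
} );

The remediation for this pattern is in §5.3.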

4. Drilldown 3 — memory pressure.

grep "wp_request_peak_memory_mb" /var/log/php-fpm/error.log | tail -n 50
# Cross-check PHP memory_limit:
php -i | grep memory_limit

If peak memory is approaching memory_limit (e.g. 240MB on a 256MB limit), PHP's GC enters thrashing mode — full sweeps every few thousand allocations, which serialize execution. This is the silent killer: no error fires, but every request loses 200–800ms to GC. This signal traces back to wp_request_peak_memory_mb.
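A quick headroom check makes the thrashing case concrete. This sketch can run via wp eval-file; wp_convert_hr_to_bytes() is the core helper that parses values like "256M":

<?php
// headroom.php — run with: wp eval-file headroom.php
$limit       = ini_get( 'memory_limit' );         // e.g. "256M"
$limit_bytes = wp_convert_hr_to_bytes( $limit );  // core helper in wp-includes/load.php
$peak        = memory_get_peak_usage( true );

printf( "peak %.0fMB of %s (%.0f%% of limit)\n",
    $peak / 1048576, $limit, 100 * $peak / $limit_bytes );
// Sustained readings above ~85% of the limit mean GC pressure is already
// costing you latency, well before any fatal error fires.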

5. Time-correlate with deploys and plugin updates.

# Did a plugin update or state change land before the spike?
grep "wp_state_changes_total" /var/log/php-fpm/error.log | grep "$(date +%Y-%m-%d)"

# Check WP plugin file mtimes for recent updates:
find /var/www/wordpress/wp-content/plugins -name "*.php" -mtime -1 -ls | head -n 20

The story you want: "wp_request_duration_ms p95 jumped from 600ms to 3.2s at 14:47 UTC, immediately after a wp_state_changes_total event for woocommerce-subscriptions v6.4.0 → v6.4.1 at 14:46 UTC, with the growth concentrated in wp_hook_timing_total_ms on the woocommerce_checkout_process hook." That's a complete diagnosis in one sentence — and it's the caption you'll see in §6.

5.2 Root Causes

Each cause maps to one of the three drilldown buckets and produces a specific signal pattern.

  • Unindexed wp_postmeta query (DB bucket) — most common in WooCommerce stores with custom meta. Produces wp_db_query_time_ms >50% of total duration; the slow query log shows SELECT * FROM wp_postmeta WHERE meta_key = '_custom_field' AND meta_value = '...' with no index covering meta_value. Often correlated with wp_slow_requests_total on /shop/ and /product-category/ routes.
  • Plugin doing wp_remote_get inside a hook (hook bucket) — synchronous HTTP call inside init, wp_loaded, or template_redirect blocks the request. Produces wp_hook_timing_total_ms spike with a single hook owning >80% of hook time. Common pattern: license-check plugins phoning home on every admin page load.
  • autoload=yes option bloat (memory bucket) — wp_options table has 50MB+ of autoloaded data (typically from a logging plugin that never cleans up). Produces wp_request_peak_memory_mb >200MB on every request. Detection: SELECT SUM(LENGTH(option_value)) FROM wp_options WHERE autoload='yes';
  • Object cache miss cascade (DB bucket) — Redis/Memcached down or misconfigured, every request rebuilds the alloptions cache, every term query re-runs. Produces wp_db_query_time_ms doubling across all routes simultaneously. Correlated with absence of cache_hit_ratio signal.
  • Recursive hook firing (hook bucket) — plugin A's hook triggers plugin B's hook which triggers plugin A's hook. Produces wp_hook_timing_total_ms with the same hook name appearing 50+ times in the per-hook breakdown. Often follows a wp_state_changes_total event for plugin activation; see the re-entrancy guard sketched after this list.
  • Large transient generation (memory bucket) — a plugin building a 30MB array transient on first request after expiry. Produces wp_request_peak_memory_mb periodic spikes every 12h or 24h matching transient expiry windows.
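For the recursive case, the standard defense is a re-entrancy guard in the callback (a sketch; save_post is the classic hook for this failure, because plugin code inside it often calls wp_update_post(), which fires save_post again):

<?php
// Re-entrancy guard: break the A → B → A hook loop.
add_action( 'save_post', function ( $post_id ) {
    static $running = false;
    if ( $running ) {
        return; // re-entered via another plugin's hook chain; stop here
    }
    $running = true;

    // ... work that may indirectly fire save_post again, e.g. wp_update_post() ...

    $running = false;
}, 10, 1 );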

5.3 Fix

Match the fix to the drilldown bucket the diagnosis named.

DB bucket:

-- Find the hottest meta_keys (candidates for a covering index):
SELECT meta_key, COUNT(*) FROM wp_postmeta GROUP BY meta_key ORDER BY 2 DESC LIMIT 20;

-- Add a composite index for hot keys (test on staging first):
ALTER TABLE wp_postmeta ADD INDEX idx_meta_key_value (meta_key(50), meta_value(50));

-- For autoload bloat, inspect first, then flip the worst offenders:
SELECT option_name, LENGTH(option_value) FROM wp_options
  WHERE autoload='yes' ORDER BY 2 DESC LIMIT 20;
-- Escape the underscores: an unescaped _ is a single-character wildcard in LIKE
UPDATE wp_options SET autoload='no' WHERE option_name LIKE '\_transient\_%';

Hook bucket: identify the offending hook from wp_hook_timing_per_hook, then either deactivate the plugin (fastest test) or wrap the hook callback to async-defer it via Action Scheduler. For wp_remote_get in hooks, set 'timeout' => 2 and 'blocking' => false — never block the page on an external API.
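Both mitigations, sketched against the hypothetical license check from §5.1 (as_enqueue_async_action() and as_next_scheduled_action() are Action Scheduler's API, bundled with WooCommerce; guard for their existence before calling):

<?php
// Option 1: make the HTTP call fire-and-forget instead of blocking the page.
add_action( 'init', function () {
    wp_remote_get( 'https://licensing.example.com/check?key=abc123', [
        'timeout'  => 2,
        'blocking' => false, // returns immediately; the response is discarded
    ] );
} );

// Option 2: move the work off the request entirely via Action Scheduler.
add_action( 'init', function () {
    if ( function_exists( 'as_enqueue_async_action' )
        && ! as_next_scheduled_action( 'myplugin_license_check' ) ) {
        as_enqueue_async_action( 'myplugin_license_check' );
    }
} );
add_action( 'myplugin_license_check', function () {
    $resp = wp_remote_get( 'https://licensing.example.com/check?key=abc123', [ 'timeout' => 2 ] );
    update_option( 'myplugin_license_ok', ! is_wp_error( $resp ) );
} );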

Memory bucket: raise WP_MEMORY_LIMIT to 384M as a stopgap, then find the autoload bloat. If a plugin generates large transients, schedule them via WP-Cron rather than building on first request.

// wp-config.php — temporary headroom while you find the real cause
define('WP_MEMORY_LIMIT', '384M');
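For the large-transient cause from §5.2, a sketch of moving the build into WP-Cron so no visitor ever pays for it on a cache miss (the hook, transient, and myplugin_build_report() helper are hypothetical):

<?php
// Rebuild the expensive transient on a schedule, not on first request after expiry.
add_action( 'myplugin_rebuild_report', function () {
    $data = myplugin_build_report(); // the expensive build that used to run in-request
    // Expiry outlives the twice-daily cron interval, so visitors never see a miss.
    set_transient( 'myplugin_report', $data, 13 * HOUR_IN_SECONDS );
} );

if ( ! wp_next_scheduled( 'myplugin_rebuild_report' ) ) {
    wp_schedule_event( time(), 'twicedaily', 'myplugin_rebuild_report' );
}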

After every fix, force a wp cache flush and a real-traffic warm-up — the first request after flush will look slow and is not representative.

5.4 Verify

You're looking for wp_request_duration_ms p95 to drop back to baseline AND for the bucket signal you fixed to drop with it.

# After the fix, watch for 15 minutes under normal traffic.
# Pull the value= field (line format from step 1) and list the slowest:
grep "wp_request_duration_ms" /var/log/php-fpm/error.log \
  | awk -F'value=' '{print $2+0}' | sort -n | tail -n 20

# The bucket you fixed should also drop:
grep "wp_db_query_time_ms" /var/log/php-fpm/error.log | tail -n 50

Healthy baselines for wp_request_duration_ms p95:

  • route_group=frontend: 300–600ms (with object cache enabled)
  • route_group=admin: 800–1500ms (admin pages legitimately do more work)
  • route_group=rest: 100–400ms (per endpoint)
  • route_group=wp-cron: variable, but individual cron events <2s

If wp_request_duration_ms p95 drops but wp_slow_requests_total keeps ticking, you fixed the average case but a tail of slow requests remains — usually a second cause hiding behind the first. Re-run the drilldown chain on the slow tail.

If 30 minutes pass with p95 under your baseline and wp_slow_requests_total flat, the issue is resolved. If wp_request_duration_ms is back but wp_request_peak_memory_mb is still climbing, you have a memory leak that hasn't yet manifested as latency — fix it before next traffic peak.

6. How to Catch This Early

Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.

This issue surfaces as a spike in wp_request_duration_ms.

Everything you just did manually — tail the log for slow requests, drilldown into DB time, then hook time, then memory, time-correlate with the plugin update — Logystera does automatically. The WP plugin emits wp_request_duration_ms with route_group labels on every request, and the dashboard wires the drilldown chain so a click on the latency panel pivots straight to wp_db_query_time_ms, then wp_hook_timing_total_ms, then wp_request_peak_memory_mb for the same time window and entity. You walk down the chain in the UI, not in grep.

[Screenshot: Logystera dashboard — wp_request_duration_ms p95 by route_group, last 24h. Checkout spike at 14:47 UTC, immediately after the WooCommerce Subscriptions auto-update.]

The rule that fires is id 711 — WordPress request latency anomaly, severity warning at 2x baseline, critical at 4x baseline, evaluated over a 5-minute window per route group. The alert payload includes the route_group, the current p95, the baseline p95, and the dominant bucket from the drilldown chain — so you don't have to drill manually when the alert lands at 02:30.

[Screenshot: Logystera alert — critical fires at 4x baseline p95, with the dominant bucket (wp_hook_timing_total_ms) named in the alert body.]

Logystera turns "the site feels slow" — the worst class of incident, because the symptom is subjective and the cause is buried — into a 60-second notification that names the route group, the bucket, and the plugin state change that preceded it.

7. Related Silent Failures

  • wp_hook_timing_total_ms regression after auto-update — same drilldown chain, but specific to the post-update window. Catches plugin updates that 2x a hook's runtime overnight.
  • wp_db_query_time_ms spike from autoload bloat — wp_options table grows past 10MB of autoloaded data, every request slows uniformly. Often invisible until a 2x traffic spike compounds it.
  • wp_request_peak_memory_mb near WP_MEMORY_LIMIT — silent GC thrashing without a fatal error. The site doesn't crash, it just gets slower, and the only signal is peak memory creeping up week over week.
  • wp_slow_requests_total on wp-cron — long-running cron jobs starve PHP-FPM workers, slowing all other route groups. The cause is in /wp-cron.php but the symptom shows on /.
  • wp_state_changes_total correlated with latency — every WordPress latency regression has a state change behind it. If you only have one panel, put these two side by side.

See what's actually happening in your WordPress system

Connect your site. Logystera starts monitoring within minutes.
