WordPress slow after plugin update — finding the hook that regressed
1. Problem
Tuesday at 3:14 PM the auto-updater ran. Wednesday morning support tickets start: "the editor is laggy", "Save Draft takes forever", "checkout spins for 4 seconds before going through". Your TTFB chart shows the homepage looks fine — it's served from the page cache. The pages that earn money or do work are 3-4x slower than they were last week. New Relic blames "PHP". The host blames "your code". Your search history reads "wordpress slow after plugin update which hook", "wordpress save_post slow regression", "wordpress hook performance degraded", and every result tells you to disable plugins one at a time on a staging clone you don't have.
This is the regression you cannot bisect by clicking. WordPress runs hundreds of registered callbacks on every request. A plugin update changes one of them — adds an outbound HTTP call inside `save_post`, swaps a fast `get_option` for a 200ms `wp_remote_get` inside `init`, drops a synchronous flush into `woocommerce_checkout_order_processed`. The page cache hides it from synthetic monitoring. The hook fires only on uncached pages and for authenticated users. Your APM gives you request-level timing without naming the callback. The request is slow; you cannot point at the line.
The signal that does point at the line is `wp_hook_timing_ms` — a per-hook, per-callback execution-time histogram emitted by the Logystera WordPress plugin on every PHP request. Combined with `wp_hook_timing_count` (how often each hook fired) and `wp_hook_duration_ms` (cumulative time in each hook), it lets you compare yesterday's distribution against today's and identify the exact callback that doubled in p95.
2. Impact
A hook regression after a plugin update is a slow leak that taxes everything that bypasses the cache.
- Checkout abandonment. A 600ms regression in `woocommerce_checkout_order_processed` adds spinner time at the worst moment. Baymard data puts cart abandonment at +7% per extra second past 3s. On a $40k/month store, a regressed checkout hook erases roughly $2,800/month until rolled back.
- Editor latency. A 400ms regression in `save_post` adds delay to every Save Draft and autosave. On a team filing 60 posts a day, autosaves alone add 4-6 minutes of waiting per author per day.
- REST API stall. Mobile apps and headless frontends hitting `/wp-json/wp/v2/posts` touch `rest_api_init` on every request — uncached. A 300ms regression there becomes 300ms on every screen transition.
- PHP-FPM concurrency loss. Occupancy is arrival rate times duration (Little's law), so a 500ms regression on a site doing 50 req/s consumes 25 worker-seconds per second of wall clock; even if only a tenth of that traffic bypasses the cache, a 10-worker pool effectively loses 2.5 workers, and spike traffic hits 502 instead of just being slow.
- Error rate looks normal. No `php.fatal`. No `db.slow_query`. No 500s. The site never broke. It just got worse, on the pages your money lives on.
The painful version is the gradual one: a plugin pushes a 1.4.7 → 1.4.8 patch with a "performance improvement" that adds a logging callback to `pre_get_posts`. The site is 80ms slower per query, compounded across nested queries on archive pages. Nothing is "broken"; the site is 600ms slower than it was, indefinitely, and nobody can prove which update did it.
3. Why It’s Hard to Spot
WordPress does not surface per-hook timing anywhere visible. wp_footer does not print it. Site Health does not check for it. Query Monitor would show it in dev — and Query Monitor is disabled in production. The hosting dashboard graphs CPU and memory, neither of which spikes during this failure (the worker is blocked on outbound HTTP or blocking I/O, not CPU-bound).
Synthetic monitoring is worse than blind. Pingdom hits / every minute and gets a 200ms cached response. The actual victims — logged-in editors, REST consumers, checkout pages — never touch the page cache. They hit the regressed hook on every request and wait. The synthetic graph is flat; real users are on Twitter complaining.
The "disable plugins one at a time" method does not work here. The plugin is doing what it claims to do — not crashing, not throwing warnings, just 400ms slower on a callback that fires only when an editor saves a post or a checkout completes. Neither reproduces reliably by clicking around staging.
The deepest confusion: WordPress auto-updates are not logged with per-plugin timing impact. WordPress writes `wp.state_change type=auto_update` and moves on — no diff of what the update changed. The team looks at the deploy log (empty — no deploy went out) and concludes "must be the host". The host concludes "must be your code". The actual proof — that hook X in plugin Y went from p95=80ms to p95=460ms at 15:14 last Tuesday — only exists if someone was recording per-hook timing across the window.
4. Cause
`wp_hook_timing_ms` is a histogram emitted by the Logystera WordPress plugin around every registered hook execution. The plugin captures wall-clock duration with `microtime(true)` before and after each callback and records the delta against three metrics:
- `wp_hook_timing_count{hook, callback}` — counter, incremented once per callback execution.
- `wp_hook_duration_ms{hook, callback}` — counter accumulating total milliseconds spent in each callback.
- `wp_hook_timing_ms{hook, callback}` — histogram bucketing per-execution durations (5ms, 25ms, 100ms, 500ms, 2000ms, +Inf).
The three compose: `count` tells you how often a callback runs, `duration_ms` gives total contribution to request time (`duration_ms / count` = average), and `timing_ms` lets you compute p50, p95, p99 per callback. A regression shifts the histogram right — same count, higher buckets populated, higher p95.
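The composition is easy to check directly. A minimal sketch, assuming the metrics land in a VictoriaMetrics instance reachable at `localhost:8428` (the single-node default; adjust for your setup): it computes the average per-execution time for every (hook, callback) pair over the last hour.

```
# Average per-execution time per (hook, callback) over the last hour:
# total ms spent (duration) divided by number of executions (count)
curl -s 'http://localhost:8428/api/v1/query' --data-urlencode 'query=
  sum by (hook, callback) (rate(wp_hook_duration_ms{entity="prod-store"}[1h]))
    /
  sum by (hook, callback) (rate(wp_hook_timing_count{entity="prod-store"}[1h]))
' | jq -r '.data.result[] | "\(.metric.hook)  \(.metric.callback)  \(.value[1])ms"'
```

A rising average with a flat count is the per-execution regression this guide is about.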
WordPress's hook system is a registry of (priority, callback) tuples per tag. When a plugin update changes a callback's body, the hook name and priority stay the same — only `wp_hook_duration_ms` for that (hook, callback) pair changes. That's the regression fingerprint: same count, higher `duration_ms`, p95 jumps in `wp_hook_timing_ms`, all starting at the timestamp the plugin updated.
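You can inspect that registry on a live site with WP-CLI. A quick sketch (assuming WP-CLI is available on the host) that lists every callback registered on `save_post` with its priority, the same identifiers that appear in the `callback` label:

```
# Dump the (priority, callback) registry for save_post
wp eval '
  global $wp_filter;
  // $wp_filter["save_post"] is a WP_Hook; ->callbacks is priority => callbacks
  foreach ( $wp_filter["save_post"]->callbacks as $priority => $cbs ) {
      foreach ( $cbs as $cb ) {
          $fn = $cb["function"];
          if ( is_array( $fn ) ) {
              $fn = ( is_object( $fn[0] ) ? get_class( $fn[0] ) : $fn[0] ) . "::" . $fn[1];
          } elseif ( ! is_string( $fn ) ) {
              $fn = "closure/callable";
          }
          echo $priority . "\t" . $fn . "\n";
      }
  }
'
```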
5. Solution
Compare the current window's per-hook distribution against a 7-day baseline, identify the callback that regressed, find which plugin owns it, and roll back or pin the version. Work from the hook list down to the callback.
5.1 Diagnose
Start where the hook timing data lives. The Logystera plugin emits one `http.request` envelope per request with a nested `hooks[]` array; the processor splits it into `wp_hook_timing_ms`, `wp_hook_timing_count`, and `wp_hook_duration_ms` keyed by hook and callback. If you query VictoriaMetrics directly, the labels are intact.
Step 1 — confirm the regression window matches a plugin update. Pull `wp.state_change` events for the last 14 days and overlay them with `wp_request_duration_by_route_ms` p95:
```
# When did the auto-updater run? wp_version_check lives in the cron option
wp option get cron --format=json | jq '.' | head

# Pull recent plugin auto-updates from the WP audit log
grep "auto_update" /var/log/wp-audit.log | tail -20
# → each line has timestamp + plugin slug + old/new version, e.g.:
# "wp.state_change type=auto_update plugin=woocommerce-subscriptions 5.7.1→5.7.2 at 2026-04-21T15:14:02Z"
```
This anchors the regression to a real event. Pure "things are slow" is not actionable; "things got slower at 15:14 on 21 April, immediately after WooCommerce Subscriptions auto-updated 5.7.1 → 5.7.2" is.
Step 2 — diff the per-hook p95 between today and a 7-day baseline. This is the signal that points at the line.
```
# Today's p95 per hook (last 1h)
histogram_quantile(0.95,
  sum by (hook, callback, le) (
    rate(wp_hook_timing_ms_bucket{entity="prod-store"}[1h])
  )
)

# 7-day baseline p95 per hook (same hour-of-day, 7 days ago)
histogram_quantile(0.95,
  sum by (hook, callback, le) (
    rate(wp_hook_timing_ms_bucket{entity="prod-store"}[1h] offset 7d)
  )
)
```
Subtract. Sort descending. The top row is your regression — the (hook, callback) pair whose p95 grew most. Typical output for a real regression looks like:
```
hook                                   callback                               p95_now  p95_7d  delta
woocommerce_checkout_order_processed   WC_Subscriptions::on_order_processed   462ms    78ms    +384ms
save_post                              Yoast\WP\SEO\Indexables::on_save       188ms    54ms    +134ms
init                                   Wordfence::initAction                  312ms    298ms   +14ms
```
The first row is the culprit. The third row is noise.
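The subtract-and-sort step collapses into a single query: PromQL matches the two `histogram_quantile` results on their shared (hook, callback) labels before subtracting, and `topk` does the sorting. A sketch, under the same VictoriaMetrics assumption (`localhost:8428`):

```
# Top 10 (hook, callback) pairs by p95 growth vs. the 7-day baseline
curl -s 'http://localhost:8428/api/v1/query' --data-urlencode 'query=
  topk(10,
    histogram_quantile(0.95, sum by (hook, callback, le) (
      rate(wp_hook_timing_ms_bucket{entity="prod-store"}[1h])))
    -
    histogram_quantile(0.95, sum by (hook, callback, le) (
      rate(wp_hook_timing_ms_bucket{entity="prod-store"}[1h] offset 7d)))
  )
' | jq -r '.data.result[] | "\(.metric.hook)  \(.metric.callback)  +\(.value[1])ms"'
```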
Step 3 — confirm count did not change. A regression is about each execution being slower, not about the hook firing more often. Check `wp_hook_timing_count` for the same (hook, callback) over the same windows.
```
sum(rate(wp_hook_timing_count{hook="save_post", callback=~".*Yoast.*"}[1h]))
sum(rate(wp_hook_timing_count{hook="save_post", callback=~".*Yoast.*"}[1h] offset 7d))
```
If count is roughly equal and `duration_ms` doubled, you have a per-execution regression. If count doubled and per-execution timing held, you have a firing-frequency regression — a different bug, usually a hook registered twice.
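Both checks collapse into one ratio: a value near 1 means the firing rate held (a per-execution regression); a value near 2 means the callback now fires twice as often (the double-registration pattern in §5.2). A sketch, same assumptions:

```
# Firing-rate ratio, now vs. 7 days ago, per callback on save_post
curl -s 'http://localhost:8428/api/v1/query' --data-urlencode 'query=
  sum by (hook, callback) (rate(wp_hook_timing_count{hook="save_post"}[1h]))
    /
  sum by (hook, callback) (rate(wp_hook_timing_count{hook="save_post"}[1h] offset 7d))
' | jq -r '.data.result[] | "\(.metric.callback): \(.value[1])"'
```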
Step 4 — tie the callback back to a specific plugin file. The callback label is the fully qualified handler name (`WC_Subscriptions::on_order_processed`, `Yoast\WP\SEO\Indexables::on_save`, `closure-/wp-content/plugins/foo/bar.php:42`). Map it to a plugin slug:
```
grep -rn "function on_order_processed" /var/www/html/wp-content/plugins/woocommerce-subscriptions/
# → confirms the callback lives in woocommerce-subscriptions and which file

wp plugin list --status=active --format=csv | grep woocommerce-subscriptions
# → returns: woocommerce-subscriptions,active,available,5.7.2
```
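For closure callbacks there is nothing to grep: the label already carries the file and line, and the owning plugin is the first path segment under `plugins/`. A one-liner, using the hypothetical label from above:

```
CB='closure-/wp-content/plugins/foo/bar.php:42'
echo "$CB" | sed -E 's#.*/wp-content/plugins/([^/]+)/.*#\1#'
# → foo
```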
Now you have plugin + version + the exact hook + the exact callback. Cross-reference against the auto-update log from Step 1 — if 5.7.2 is the post-update version, you have proof.
5.2 Root Causes
Hook regressions after a plugin update almost always fall into one of these patterns. Each leaves a different fingerprint on the three metrics, and a matching trace in the plugin's source diff (see the sketch after this list).
- Outbound HTTP added to a hot hook. A plugin adds a license check, telemetry ping, or "show me the current promo banner" `wp_remote_get` to `init` or `admin_init`. Signal pattern: `wp_hook_timing_ms` p95 jumps by 200-2000ms in lockstep, `wp_hook_timing_count` stable, regression begins exactly at the auto-update timestamp from `wp.state_change`. Common culprits: cache plugins (license validation), security plugins (cloud rule sync), backup plugins (status check).
- Synchronous remote call inside `save_post` / `woocommerce_checkout_order_processed`. Plugins push order data to an external CRM/ERP/sync service, blocking the save. Signal pattern: regression only appears on uncached, authenticated, write paths; `wp_request_duration_by_route_ms` for `POST /wp-admin/post.php` and `POST /?wc-ajax=checkout` jumps; `wp_hook_timing_ms` on the relevant hook accounts for the bulk of it. Common culprits: WooCommerce extensions, marketing automation, ERP connectors.
- Index rebuild added to `pre_get_posts` or `the_posts`. SEO and search plugins occasionally introduce a "warm the index" call inside a frontend query hook. Signal pattern: `wp_hook_timing_count` for `pre_get_posts` stays the same, but `wp_hook_duration_ms` doubles or triples; affects archive and search pages disproportionately.
- DB query added to a frequently-fired hook. `wp_loaded`, `init`, and `template_redirect` fire on every request. A plugin adding a `get_option` with an uncached key, or a `$wpdb->get_results()`, inside one of these multiplies the cost across the site. Signal pattern: low per-execution delta (10-40ms) but high `count`, so cumulative `wp_hook_duration_ms` regresses heavily.
- Hook registered twice (dedup bug in update). A refactor accidentally calls `add_action` in both the old and the new location. Signal pattern: `wp_hook_timing_count` doubles for that callback while per-execution `wp_hook_timing_ms` stays normal. Total `wp_hook_duration_ms` doubles.
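For plugins hosted on WordPress.org, every released zip sits at a predictable download URL, so you can diff exactly what the update changed. A minimal sketch with a hypothetical slug and the 1.4.7 → 1.4.8 versions from the gradual-regression example (commercial plugins such as WooCommerce Subscriptions are not on WordPress.org; diff the vendor's zips instead):

```
PLUGIN=example-plugin    # hypothetical slug; substitute the real one
OLD=1.4.7 NEW=1.4.8
cd "$(mktemp -d)"
for v in "$OLD" "$NEW"; do
  # WordPress.org keeps every released version in its download archive
  curl -sO "https://downloads.wordpress.org/plugin/${PLUGIN}.${v}.zip"
  unzip -q "${PLUGIN}.${v}.zip" -d "$v"
done
# Lines the update ADDED that match the patterns above:
# outbound HTTP, raw DB queries, new hook registrations
diff -ru "$OLD" "$NEW" \
  | grep -E '^\+.*(wp_remote_get|wp_remote_post|get_results|add_action|add_filter)'
```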
5.3 Fix
Once you have plugin + version + hook + callback:
- Pin the previous version immediately: `wp plugin install <slug> --version=<previous> --force --activate`. Then disable auto-updates for that plugin so it does not re-apply tonight: `wp plugin auto-updates disable <slug>`.
- Verify the rollback restored the baseline. Re-run the p95 query from Step 2 of §5.1. The delta should drop to within ±20ms of the 7-day baseline within 5-10 minutes of FPM workers cycling through.
- Open a vendor bug with the proof. The histogram diff is reproducible evidence. Attach the PromQL output, the `wp.state_change` timestamp, and the callback name. "The site is slow" gets closed; "your `WC_Subscriptions::on_order_processed` p95 went 78ms → 462ms in 5.7.2 vs 5.7.1" gets a hotfix.
- For HTTP-call regressions, add a defensive timeout. If the plugin's outbound call has no timeout, a slow external service can hang the hook for `default_socket_timeout` seconds (typically 60). Use the `http_request_args` filter to cap it at 5s while you wait for the vendor patch (a sketch follows this list).
- For dedup bugs, deactivate-then-activate the plugin. A double-registered hook usually clears on reactivation because `add_action` calls don't survive the deactivation teardown.
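For the defensive-timeout item, the quickest deployment path is a must-use plugin: it loads on every request and cannot be deactivated from wp-admin. A minimal sketch, assuming the docroot used earlier (`/var/www/html`); the file name is arbitrary:

```
cat > /var/www/html/wp-content/mu-plugins/cap-http-timeout.php <<'PHP'
<?php
// Stopgap until the vendor patch: cap every wp_remote_* call at 5s
// so a slow external service cannot hang a hook for default_socket_timeout.
add_filter( 'http_request_args', function ( $args, $url ) {
    $args['timeout'] = isset( $args['timeout'] ) ? min( $args['timeout'], 5 ) : 5;
    return $args;
}, 10, 2 );
PHP
```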
5.4 Verify
Verification has two halves: the regressed callback returns to baseline, and the request-level metric (`wp_request_duration_by_route_ms`) confirms the user-visible improvement.
- Signal that should stop: the per-callback p95 in `wp_hook_timing_ms{hook="<hook>", callback="<callback>"}` returns to within ±15% of the 7-day baseline. If you saw 78ms → 462ms before rollback, expect 78ms ± 12ms after.
- Expected baseline noise: every hook has steady-state variance. Healthy `save_post` p95 typically sits in the 40-120ms range depending on plugin stack. `init` baseline is usually 50-300ms. `woocommerce_checkout_order_processed` is 60-200ms on a clean store. Don't expect zero — expect "back where it was last week".
- Timeframe: within 10 minutes of the rollback, FPM workers have cycled through the new code path (a watch-loop sketch follows this list). If the histogram has not returned to baseline within 30 minutes under normal traffic, the rollback didn't apply (check `wp plugin list` output for the actual installed version) or you misidentified the callback.
- Cross-check at the request layer: `wp_request_duration_by_route_ms` for the affected route (e.g. `POST /wp-admin/post.php` for editor saves, `POST /?wc-ajax=checkout` for checkout) should drop by approximately the same delta you saw on the regressed callback. If `wp_hook_timing_ms` recovered but `wp_request_duration_by_route_ms` did not, there's a second regression on the same route — go back to Step 2 and re-run the diff.
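For the 10-minute window, a small watch loop beats re-running the query by hand. A sketch under the same VictoriaMetrics assumption (`localhost:8428`), with the running example's hook and callback filled in:

```
# Print the regressed callback's 5m p95 once a minute for 10 minutes
Q='histogram_quantile(0.95, sum by (le) (
     rate(wp_hook_timing_ms_bucket{entity="prod-store",
          hook="woocommerce_checkout_order_processed",
          callback="WC_Subscriptions::on_order_processed"}[5m])))'
for i in $(seq 1 10); do
  p95=$(curl -s 'http://localhost:8428/api/v1/query' --data-urlencode "query=$Q" \
          | jq -r '.data.result[0].value[1] // "no data"')
  echo "$(date -u +%H:%M:%S)  p95=${p95}ms"
  sleep 60
done
```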
Generic "the site feels faster" verification is not acceptable. The histogram is the proof.
6. How to Catch This Early
Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.
The signal to watch is `wp_hook_timing_ms`.
Everything you just did manually — anchor the regression to a `wp.state_change` auto-update event, diff today's per-callback p95 against a 7-day baseline, separate count regressions from per-execution regressions, map the callback back to a plugin file — Logystera does automatically. The same `wp_hook_timing_ms` histogram you just queried is recorded on every request from every entity, alongside `wp_hook_timing_count` and `wp_hook_duration_ms`, and a rule fires when any callback's p95 exceeds its 7-day baseline by a configurable threshold.
*Figure: Logystera dashboard — `wp_hook_timing_ms` per-callback p95 over the last 24h, with a step jump at 15:14 immediately after the WooCommerce Subscriptions auto-update.*
*Figure: Logystera alert — "Hook timing regression detected". A critical alert fires within 5 minutes of a sustained p95 regression, naming the exact (hook, callback) pair and the `wp.state_change` event that preceded it.*
What matters is knowing it the moment the auto-updater ran, not three days later when an editor finally complained. Logystera turns this kind of failure from a customer-reported "the site feels slow" into a 5-minute notification with the histogram, the callback name, the plugin version, and the auto-update event that caused it.
7. Related Silent Failures
- `wp_slow_requests_total` — request-level slowness when per-hook attribution isn't yet available.
- `wp_request_peak_memory_mb` — peak memory per request. Hook regressions that load heavy data into memory show up here too.
- `wp.state_change type=auto_update` — the trigger event for most regressions. Pair with `wp_hook_timing_ms` for full attribution.
- `db.slow_query` — distinguishes hook regressions from query regressions; if both fire together, the hook is making a slow query.
- `http.outbound` — catches the "license check added to `init`" pattern when the regression is in a network call rather than CPU.
See what's actually happening in your WordPress system
Connect your site. Logystera starts monitoring within minutes.