Guide
WordPress slow queries — finding the plugin or theme responsible
1. Problem
Your WordPress admin takes 8 seconds to render /wp-admin/edit.php. The frontend feels sluggish on category pages. Occasionally a request times out at 30 seconds and the visitor sees a blank page or 504 Gateway Timeout. New Relic, if you have it, points at "MySQL" and shrugs. The hosting dashboard says CPU is fine, then suddenly says it isn't. You search "wordpress slow query identify plugin" and get fifteen articles telling you to install another plugin to find the slow plugin.
The honest truth is that WordPress doesn't tell you which plugin is responsible for a slow query, because by the time the query reaches MySQL, the call stack is gone. WordPress fires hundreds of hooks per request, each plugin hangs callbacks off those hooks, and any of those callbacks can run a WP_Query, an update_option(), or a custom $wpdb->get_results() against an unindexed meta table. From MySQL's perspective it's all one anonymous client called wordpress.
This guide shows how to attribute a slow query back to the exact hook, callback, and plugin that fired it — using perf.hook_timing as the primary diagnostic signal, correlated with db.slow_query, slow http.request events, and php.warning entries that appear when the request bumps against max_execution_time.
2. Impact
A WordPress site that takes 6+ seconds to render an admin page is an unworkable site. Editors stop trusting "Save Draft". Auto-save piles up duplicate revisions. WP-Cron events queued to the same request start missing schedules because the previous request hasn't finished. On the frontend, slow queries cascade: PHP-FPM workers stay busy longer, the worker pool fills, new visitors get queued, queue fills, and now you're returning 502s while CPU still looks fine because everyone is waiting on MySQL.
The commercial impact is direct. Google's Core Web Vitals will register the LCP regression within days, and category and tag archives — exactly the pages that drive long-tail organic traffic — are typically where unindexed meta_query lookups live. You lose rankings before you notice the symptom. On WooCommerce, a slow wp_options autoload or a slow wc_get_orders() query during checkout is a measurable drop in conversion rate, not a theoretical one.
The deeper risk is misdirected blame. Without per-hook attribution, the team rolls back the wrong plugin, swaps hosts, or "upgrades MySQL" — none of which fix the offending callback that someone added to pre_get_posts six months ago.
3. Why It’s Hard to Spot
WordPress hides its own performance internals on purpose. The dashboard's Site Health screen will tell you "your site is healthy" while a pre_get_posts filter takes four seconds per request. Uptime monitors hit a status endpoint that doesn't trigger the offending hook. CDN edge caches mask the frontend symptom for anonymous traffic, so the only people experiencing the slowness are logged-in editors and customers in checkout — exactly the people who don't show up in your synthetic monitoring.
MySQL slow logs, when you have access to them, list SQL but not callers. You see SELECT SQL_CALC_FOUND_ROWS wp_posts.* FROM wp_posts INNER JOIN wp_postmeta... and you have no idea which WP_Query instance built that. Every plugin uses WP_Query. Every theme uses WP_Query. The query is the symptom; the offender is one layer up.
Plugin developers further obscure the issue by registering callbacks via anonymous closures or via plugin classes loaded under namespaces that don't match the plugin folder name. So even when you know the hook, mapping \Acme\Featured\Engine\filter_query back to wp-content/plugins/super-featured-2/ is a separate hunt.
The result: every team that hits this problem ends up "bisecting plugins" — disabling them one at a time on staging — which is slow, often impossible on production, and frequently misleading because the slowness only appears under real data volume.
4. Cause
A perf.hook_timing signal is emitted when a single hook callback exceeds a configured duration threshold (typically 250ms or 500ms for admin, 100ms for frontend). The signal carries the hook name (pre_get_posts, init, wp_footer, save_post), the callback identifier (class + method, or function name, or closure file:line), and the wall-clock time the callback held the request.
This is the signal that closes the gap between "MySQL is slow" and "which line of which plugin caused it." When a callback registered on pre_get_posts runs a WP_Query with meta_query against wp_postmeta without an index, three things happen in the same request:
- The callback's wall time spikes — emitted as perf.hook_timing with hook=pre_get_posts, callback=Some_Plugin_Class::filter_query, duration_ms=4200.
- The underlying SQL exceeds MySQL's long_query_time and is logged — emitted as db.slow_query with the rendered SQL and execution time.
- The full HTTP request crosses the slow threshold — emitted as http.request with duration_ms matching the sum of slow callbacks plus framework overhead.
Because all three signals share the same request_id, you don't have to guess which slow query belongs to which slow request. The hook callback gives you the human-readable culprit; the slow query gives you the exact SQL; the request gives you the URL the visitor hit.
If the request runs long enough to approach max_execution_time, PHP itself begins emitting errors: a Maximum execution time exceeded fatal (with a backtrace, if configured), or a mysqli::query(): MySQL server has gone away warning if the connection drops mid-query. Those land as php.warning, again with the same request_id, completing the picture.
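The request_id join can be done with plain grep over a combined JSON stream. A minimal sketch, with an illustrative file path, request_id, and field values following the examples in this guide:

```shell
# Illustrative combined signal stream: one JSON object per line.
# The path and values are assumptions; field names follow this guide.
cat > /tmp/signals.jsonl <<'EOF'
{"event":"http.request","request_id":"r_8b2f","url":"/category/news/","duration_ms":4310}
{"event":"perf.hook_timing","request_id":"r_8b2f","hook":"pre_get_posts","callback":"Some_Plugin_Class::filter_query","duration_ms":4200}
{"event":"db.slow_query","request_id":"r_8b2f","query_time_ms":4150}
{"event":"http.request","request_id":"r_9c01","url":"/","duration_ms":120}
EOF

# One request_id pulls the whole story at once: URL, callback, and SQL timing.
grep '"request_id":"r_8b2f"' /tmp/signals.jsonl
```

The same filter works whether the stream lives in one file or in three, as long as every emitter stamps the shared request_id.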
5. Solution
5.1 Diagnose (logs first)
Start at the request, work down to the hook, then down to the SQL. The signals are designed to be read in that order.
Step 1: confirm there is a slow request.
grep "duration_ms" /var/log/nginx/access.log | awk '$NF+0 > 3000'
Or, if your logs are JSON:
jq 'select(.duration_ms > 3000) | {url, duration_ms, request_id}' /var/log/nginx/access.json
Each hit is an http.request signal. Note the request_id — every other signal in this section will be filtered by it.
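If jq isn't installed on the box, the threshold filter can be approximated with awk. A sketch, assuming the duration_ms field layout shown above (file path and values are illustrative):

```shell
# Sample JSON access-log lines (illustrative field names and values).
cat > /tmp/access.json <<'EOF'
{"url":"/wp-admin/edit.php","duration_ms":8200,"request_id":"r_8b2f"}
{"url":"/","duration_ms":140,"request_id":"r_9c01"}
EOF

# Split each line on the duration_ms key, take the number before the
# next comma, and print any line over the 3000ms threshold.
awk -F'"duration_ms":' 'NF > 1 { split($2, a, ","); if (a[1] + 0 > 3000) print }' /tmp/access.json
```

This is a heuristic, not a JSON parser: it assumes duration_ms appears once per line and is immediately followed by a comma or closing brace.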
Step 2: find the slow hook callbacks for that request.
WordPress doesn't write hook timing to disk by default. You need either Query Monitor (development), the New Relic PHP agent's WordPress hooks instrumentation, or — if you're running the Logystera WordPress plugin — the per-hook timing emitter, which logs every callback that exceeds the threshold:
grep "perf.hook_timing" /var/log/wordpress/logystera.log | grep "$REQUEST_ID"
A real entry looks like:
{"event":"perf.hook_timing","request_id":"r_8b2f","hook":"pre_get_posts",
"callback":"Yoast\\WP\\SEO\\Integrations\\Front_End\\Indexable_Search_Result_Front_End::filter_query",
"duration_ms":3840,"db_queries":1,"db_time_ms":3812}
This is the perf.hook_timing signal. It tells you the hook (pre_get_posts), the exact callback (Indexable_Search_Result_Front_End::filter_query), and that 3812ms of the 3840ms was spent inside MySQL — meaning this is a DB-bound callback, not a PHP-bound one.
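That DB-bound judgment can be made mechanical. A sketch that extracts the two fields with sed and prints the DB share; the field names match the example entry, the line itself is illustrative:

```shell
# One perf.hook_timing entry (illustrative; same shape as the example above).
line='{"event":"perf.hook_timing","hook":"pre_get_posts","duration_ms":3840,"db_time_ms":3812}'

dur=$(printf '%s' "$line" | sed -E 's/.*"duration_ms":([0-9]+).*/\1/')
db=$(printf '%s' "$line" | sed -E 's/.*"db_time_ms":([0-9]+).*/\1/')

# A share near 100% means the callback is DB-bound: fix the SQL, not the PHP.
echo "db share: $(( db * 100 / dur ))%"
# prints: db share: 99%
```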
Step 3: pull the SQL the callback ran.
If MySQL slow log is enabled (long_query_time = 1 in my.cnf):
sudo grep -B 2 -A 6 "Query_time" /var/log/mysql/mysql-slow.log | less
Or query MySQL directly for currently-slow patterns:
mysql -e "SELECT digest_text, count_star, avg_timer_wait/1e9 AS avg_ms
FROM performance_schema.events_statements_summary_by_digest
WHERE schema_name='wordpress'
ORDER BY avg_timer_wait DESC LIMIT 20;"
Each row corresponds to a db.slow_query signal. Match by SQL fingerprint — usually the wp_postmeta join shape or the wp_options WHERE autoload='yes' pattern is recognizable on sight.
Step 4: confirm timeout pressure.
If the request was long enough to threaten max_execution_time:
grep -E "Maximum execution time|MySQL server has gone away|memory size of" \
/var/log/php-fpm/error.log
Each match is a php.warning signal. Their presence means you're already losing requests, not just feeling slowness.
Step 5: map callback to plugin.
Once perf.hook_timing names the callback class, find the plugin:
grep -rln "class Indexable_Search_Result_Front_End" wp-content/plugins/
Or for a closure callback emitted as closure@/wp-content/plugins/foo/inc/filters.php:142, the path is in the signal payload itself.
You now have: URL → request_id → hook → callback class → plugin folder → SQL. That's the diagnostic chain end-to-end.
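For closure callbacks, the mapping is one sed expression. A sketch that reduces the payload path to a plugin folder name, using the hypothetical path format from Step 5:

```shell
# Hypothetical closure identifier as it appears in the signal payload.
cb='closure@/wp-content/plugins/foo/inc/filters.php:142'

# Everything between /wp-content/plugins/ and the next slash is the plugin folder.
echo "$cb" | sed -E 's|.*/wp-content/plugins/([^/]+)/.*|\1|'
# prints: foo
```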
5.2 Root Causes
Each root cause maps one-to-one to a fix, so both are covered together in 5.3 below.
5.3 Fix
Slow queries on WordPress reduce to a small number of recurring patterns. Match the symptom to the pattern, not the other way around.
1. Unindexed meta_query on wp_postmeta (most common). A plugin filters pre_get_posts to add meta_query on a custom field. wp_postmeta has no index on meta_value for non-trivial values, so MySQL full-scans the table. Surfaces as perf.hook_timing on pre_get_posts with db_time_ms ≈ duration_ms, and a db.slow_query showing a wp_postmeta join. Fix: add a covering index, or restructure the field as a taxonomy term, or have the plugin precompute the lookup into a custom table.
2. Bloated wp_options autoload. A plugin writes large transients or serialized arrays with autoload='yes'. Every request loads the entire blob. Surfaces as perf.hook_timing on the very first hook (muplugins_loaded or plugins_loaded) with high duration_ms even on cache-cold requests, plus db.slow_query on SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'. Fix: SELECT option_name, LENGTH(option_value) FROM wp_options WHERE autoload='yes' ORDER BY 2 DESC LIMIT 20; — flip the largest offenders to autoload='no'.
3. Slow save_post or transition_post_status callback. A plugin synchronously calls an external API or rebuilds a sitemap on every save. Editors experience long admin saves. Surfaces as perf.hook_timing on save_post with db_time_ms low but duration_ms high — the callback is CPU- or HTTP-bound, not DB-bound. Often correlated with php.warning Maximum execution time exceeded. Fix: defer the work to WP-Cron or a background queue.
4. N+1 query inside the Loop or a widget. A theme or plugin runs one query per post in a list. Surfaces as perf.hook_timing on wp_footer or loop_end with an extremely high db_queries count — 200+ queries on a single archive page. Fix: prime the cache with update_postmeta_cache() before the loop, or replace the inner query with a single WHERE post_id IN (...).
5. Missing object cache. If wp_using_ext_object_cache() returns false in production, every get_option, get_post_meta, and WP_Query re-hits MySQL. Surfaces as a flat increase in db.slow_query rate across all hooks, not localized to one callback. Fix: deploy Redis or Memcached and a drop-in object cache.
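For fix 1, one plausible index shape as a sketch: meta_value is LONGTEXT, so it needs a prefix length, and whether this index actually helps depends on the meta_query the plugin builds. Verify with EXPLAIN before and after:

```sql
-- Hypothetical index for meta_query lookups that filter on key and value together.
-- The 191-character prefixes keep the key inside InnoDB's index size limit on utf8mb4.
ALTER TABLE wp_postmeta
  ADD INDEX idx_meta_key_value (meta_key(191), meta_value(191));
```

Measure before committing: an index on wp_postmeta speeds reads but taxes every save_post write.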
Roll back the offending plugin only as a last resort, and only after the signals point at it. "Disable everything" is not a fix; it's a confession.
5.4 Verify
Watch for the absence of perf.hook_timing on the offending hook+callback combination. Specifically:
grep "perf.hook_timing" /var/log/wordpress/logystera.log \
| grep "Indexable_Search_Result_Front_End::filter_query" \
| tail -100
A healthy state is: zero entries above your threshold for 30 minutes of representative traffic (including a cache-cold window — invalidate the page cache before measuring, otherwise you're testing Varnish, not WordPress).
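That check can be scripted so that "zero entries" is a number rather than an eyeball. A sketch over an illustrative log excerpt; the callback name, file path, and threshold are placeholders:

```shell
THRESHOLD_MS=500

# Illustrative post-fix log excerpt: the callback still fires, but fast.
cat > /tmp/hook_timing.log <<'EOF'
{"event":"perf.hook_timing","callback":"Acme_Plugin::filter_query","duration_ms":120}
{"event":"perf.hook_timing","callback":"Acme_Plugin::filter_query","duration_ms":95}
EOF

# Count entries for the fixed callback that are still over the threshold.
over=$(grep '"callback":"Acme_Plugin::filter_query"' /tmp/hook_timing.log \
  | grep -oE '"duration_ms":[0-9]+' | cut -d: -f2 \
  | awk -v t="$THRESHOLD_MS" '$1 + 0 > t' | wc -l)
echo "entries over ${THRESHOLD_MS}ms: $over"
# prints: entries over 500ms: 0
```

A cron job running this every few minutes during the verification window turns "watch for absence" into an alert if the count ever goes nonzero.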
Cross-check the supporting signals:
- db.slow_query rate for the SQL fingerprint should drop to zero. Run mysql -e "SELECT count_star FROM performance_schema.events_statements_summary_by_digest WHERE digest_text LIKE '%wp_postmeta%meta_value%';" before and after.
- http.request p95 duration on the affected URL should return to baseline. If your baseline was 400ms and you were seeing 4000ms, healthy is back under 500ms.
- php.warning entries for Maximum execution time should stop appearing — even once a day is not acceptable, since it implies real users are hitting timeouts that the slow-query reduction did not eliminate.
Give it a full traffic cycle. WordPress object caches and OPcache mean the first ten minutes after a deploy are unrepresentative. Twelve to twenty-four hours of clean signals is the bar.
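For the p95 cross-check, a rough nearest-rank percentile over extracted durations is enough. A sketch with illustrative numbers; feed it the duration_ms values for the affected URL:

```shell
# Illustrative duration_ms samples for one URL.
cat > /tmp/durations.txt <<'EOF'
400
420
390
4100
410
EOF

# Nearest-rank p95: sort, then take the value at ceil(0.95 * N).
sort -n /tmp/durations.txt | awk '
  { v[NR] = $1 }
  END {
    idx = int(NR * 0.95);
    if (idx * 1.0 < NR * 0.95) idx++;    # ceil
    print "p95:", v[idx];
  }'
# prints: p95: 4100
```

Note how a single 4100ms outlier dominates p95 at this sample size, which is exactly why p95 (not the mean) is the right verification metric here.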
6. How to Catch This Early
Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.
This issue surfaces as perf.hook_timing.
A slow query that costs you a checkout conversion isn't just a bug — it's a bug you didn't know about. WordPress will not warn you. The MySQL slow log will not tell you which plugin caused it. Site Health will say everything is fine. By the time the support ticket arrives, the regression has been live for a week and the offending plugin update was three releases ago.
This is what perf.hook_timing is for. It surfaces the exact hook and callback the moment a callback crosses your threshold, with the request URL and the SQL it ran already attached. Logystera continuously evaluates perf.hook_timing against per-entity baselines and alerts when a callback's p95 duration regresses — meaning you find out a plugin update made pre_get_posts slow before your editors do, and certainly before Google's crawler does.
The same pipeline correlates perf.hook_timing with db.slow_query, slow http.request, and php.warning against max_execution_time. You don't read four separate logs. You read one signal stream that already knows which slow query belongs to which slow request belongs to which slow callback.
The cost of not having this isn't paying for monitoring. It's bisecting twelve plugins on staging at 11pm while your editor team waits.
7. Related Silent Failures
- wp.cron / missed_schedule — Slow save_post callbacks block WP-Cron events queued behind them, leading to scheduled posts that never publish and email digests that never send.
- php.fatal / memory_exhausted — N+1 query patterns combined with large wp_options autoload routinely push past WP_MEMORY_LIMIT, crashing requests with Allowed memory size exhausted rather than just slowing them.
- http.request 502/504 — When PHP-FPM workers stay tied up in slow queries, the worker pool fills and new requests queue at the upstream. Surfaces as nginx 502s without any PHP error, which is why the symptom looks like an infrastructure problem.
- db.connection_lost — Long-running queries that exceed wait_timeout trigger MySQL server has gone away, often mid-checkout, leaving orphaned cart sessions.
- perf.autoload_size — A bloated wp_options autoload doesn't always surface as a slow query; sometimes it's a flat 200ms tax on every request that erodes p50 latency invisibly until a plugin update tips it over.
See what's actually happening in your WordPress system
Connect your site. Logystera starts monitoring within minutes.