Guide
WordPress database deadlocks — causes and detection
1. Problem
You open your WordPress error log after a customer complaint and see this:
WordPress database error Deadlock found when trying to get lock; try restarting transaction
for query UPDATE wp_options SET option_value = ... WHERE option_name = '_transient_woocommerce_cache_excluded_uris'
made by do_action('woocommerce_checkout_order_processed')
Or this, which is worse:
WordPress database error Lock wait timeout exceeded; try restarting transaction
The site looks alive. The homepage loads. But specific actions — checkout, bulk post updates, plugin background tasks — return 500s, white screens, or half-saved orders. Refreshing sometimes works, sometimes doesn't. This is the classic shape of a WordPress MySQL deadlock: intermittent, transactional, invisible to uptime monitors, and absolutely capable of dropping orders on the floor.
This guide is about the db.error signal Logystera emits when MySQL/MariaDB rejects a transaction with 1213 Deadlock found or 1205 Lock wait timeout exceeded, why WordPress doesn't surface it anywhere a human will see it, and how to find the actual transaction pair that's colliding.
2. Impact
A deadlock is not a slow query. It's a transaction that did not happen. MySQL picks one of the two conflicting transactions, rolls it back, and returns error 1213 to the client.
WordPress's wpdb layer catches the error, writes it to debug.log (if WP_DEBUG_LOG is on), and returns false from the failing $wpdb->query() call. Most plugins do not check that return value (a defensive retry pattern is sketched after the list below). This means:
- A WooCommerce order can be created with wp_posts rows but no wp_postmeta line items.
- A scheduled wp-cron job can mark itself "complete" while its underlying state update was rolled back.
- An ACF field group save can leave half its keys persisted and half missing.
- A bulk product import can silently skip 12 rows out of 800 with no error shown to the admin.
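Plugin code you control can close that gap cheaply. A minimal sketch (the function name is hypothetical): retry only on lock conflicts, and log instead of silently returning false.

function my_plugin_query_with_retry( $sql, $max_attempts = 3 ) {
    global $wpdb;
    for ( $attempt = 1; $attempt <= $max_attempts; $attempt++ ) {
        $result = $wpdb->query( $sql );
        if ( false !== $result ) {
            return $result; // success (possibly 0 affected rows)
        }
        // $wpdb->last_error carries the MySQL message for 1213/1205.
        $is_lock_error = false !== stripos( $wpdb->last_error, 'deadlock' )
            || false !== stripos( $wpdb->last_error, 'lock wait timeout' );
        if ( ! $is_lock_error ) {
            break; // a different DB error; retrying won't help
        }
        usleep( 50000 * $attempt ); // back off: 50ms, 100ms, 150ms
    }
    error_log( 'DB write failed after retries: ' . $wpdb->last_error );
    return false;
}

Retrying is safe for the autocommit single-statement writes most plugins issue: the rolled-back attempt persisted nothing.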
Revenue-side: dropped orders, double-charged customers when a retry succeeds but the original update partially landed, support tickets you can't reproduce. Operations-side: data integrity drift you'll discover months later during reconciliation. Security-side: deadlocks during user role updates can leave permissions in an inconsistent state — a user mid-promotion who ends up with neither old nor new caps.
The site is not down. It is lying about success.
3. Why It’s Hard to Spot
WordPress was designed when shared hosting and mysql_query() were normal. Its database error handling reflects that era:
- wpdb swallows the error. $wpdb->query() returns false. There is no exception thrown by default. The PHP execution continues.
- debug.log is off in production. Most WordPress sites have WP_DEBUG = false, which means the deadlock string is never written to disk at all. It exists only in MySQL's internal counters.
- No admin notice. WordPress core does not surface DB errors in wp-admin. There is no dashboard widget that says "12 deadlocks in the last hour."
- Uptime checks pass. Pingdom, UptimeRobot, and StatusCake hit / or /wp-login.php. Those pages don't run conflicting transactions. A deadlock storm on the WooCommerce REST API will not register.
- Errors are intermittent by definition. Deadlocks require concurrency. They appear under load, vanish in staging, and refuse to reproduce when you're watching.
- The 5xx is generic. If the deadlock cascades into a fatal, the user sees a generic 500. The Apache/nginx access log shows POST /checkout/ 500 with no clue this was a DB conflict.
The result: silent revenue loss. The signal exists in MySQL and in wpdb, but nothing routes it anywhere a human will see it before damage compounds.
4. Cause
A db.error signal in Logystera is emitted when the WordPress plugin captures a MySQL error returned to wpdb. The two error codes that matter for this guide:
- MySQL error 1213 — Deadlock found when trying to get lock; try restarting transaction
- MySQL error 1205 — Lock wait timeout exceeded; try restarting transaction
Both mean InnoDB's lock manager could not grant a row, gap, or next-key lock: either granting it would have created a wait cycle (1213), or the wait exceeded innodb_lock_wait_timeout, which defaults to 50 seconds (1205).
Inside the database, the mechanism is concrete: two transactions hold locks the other one needs.
T1: UPDATE wp_options SET option_value=... WHERE option_name='cron' -- holds row lock A
T2: UPDATE wp_options SET option_value=... WHERE option_name='active_plugins' -- holds row lock B
T1: tries to UPDATE option_name='active_plugins' -- waits on B
T2: tries to UPDATE option_name='cron' -- waits on A
^ cycle detected → 1213, T2 rolled back
The signal payload includes the failing SQL statement, the originating PHP function (when stack capture is enabled), the request URI, and the elapsed transaction time. That's the diagnostic skeleton you need — without it, you are guessing blind among WordPress's many silent failure modes.
When the surrounding PHP can't recover (uncaught wpdb exception, downstream code that dereferences a false return as an object), the same request also produces a php.fatal. When the transaction was running inside an action hook, a perf.hook_timing signal will show the hook execution time spike. When the deadlock happens during an HTTP request, an http.request 5xx is correlated by request ID.
5. Solution
5.1 Diagnose (logs first)
You need three log sources in this order: MySQL's own error log, the PHP/wpdb error log, and InnoDB's lock state at the moment of the deadlock.
1. Find the db.error signal in WordPress logs
If WP_DEBUG_LOG is on, every deadlock is written verbatim:
grep -E "Deadlock found|Lock wait timeout" /var/www/html/wp-content/debug.log
Each match is one db.error signal. Count them per hour:
grep "Deadlock found" /var/www/html/wp-content/debug.log \
| awk '{print $1, $2}' | cut -c1-13 | sort | uniq -c
If you see clusters (e.g. 40 deadlocks between 14:00 and 14:05, then quiet), correlate against your traffic — that's almost always a cron burst, a checkout spike, or a plugin's scheduled task.
2. Cross-reference with PHP-FPM error log
The same signal appears in PHP-FPM when wpdb writes to error_log:
grep -E "WordPress database error (Deadlock|Lock wait)" /var/log/php-fpm/www-error.log
This file usually has the calling stack appended (the made by line), which tells you which action hook or plugin triggered the failing transaction. That's your suspect list.
3. Pull the actual deadlock pair from InnoDB
This is the load-bearing step. MySQL records the last deadlock in detail:
mysql -e "SHOW ENGINE INNODB STATUS\G" | sed -n '/LATEST DETECTED DEADLOCK/,/WE ROLL BACK/p'
The output names both transactions, the exact SQL each was running, the lock each was holding, and the lock each was waiting on. Without this, you are guessing which two queries collided.
For continuous capture, enable innodb_print_all_deadlocks (SET GLOBAL innodb_print_all_deadlocks = ON; at runtime, plus the same setting under [mysqld] in my.cnf to survive restarts) so every deadlock — not just the latest — is written to MySQL's error log:
grep -A 50 "TRANSACTION" /var/log/mysql/error.log | grep -B 2 -A 40 "Deadlock"
4. Correlate with hook timing
WordPress hooks that run long DB-bound work are deadlock magnets. The perf.hook_timing signal shows execution time per hook. Search for the offenders:
grep "perf.hook_timing" /var/www/html/wp-content/debug.log \
| awk '$NF > 5000' | sort -k NF -n | tail -20
Hooks like woocommerce_checkout_order_processed, save_post, wp_insert_user, and any plugin's *_cron_hook consistently appear at the top during deadlock events.
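If you don't have a hook-timing signal, you can approximate one for a single suspect hook. A minimal DIY sketch (the hook choice and log format are illustrative; callbacks at priorities PHP_INT_MIN and PHP_INT_MAX run before and after everything else on that hook):

add_action( 'woocommerce_checkout_order_processed', function () {
    $GLOBALS['my_hook_t0'] = microtime( true );
}, PHP_INT_MIN );

add_action( 'woocommerce_checkout_order_processed', function () {
    if ( ! isset( $GLOBALS['my_hook_t0'] ) ) {
        return;
    }
    $ms = ( microtime( true ) - $GLOBALS['my_hook_t0'] ) * 1000;
    // Duration in ms as the last field, so the grep/awk filter above matches.
    error_log( sprintf( 'perf.hook_timing woocommerce_checkout_order_processed %.0f', $ms ) );
}, PHP_INT_MAX );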
5. Tie it to the HTTP request
Each db.error shares a request ID with the surrounding http.request signal. If you're not using request IDs, generate one per request in a must-use plugin (wp-content/mu-plugins/request-id.php) and log it. Note that wp_unique_id() alone is not enough: its counter restarts on every request, so every request would log the same ID; mix in randomness, as in the sketch below. Then:
grep "req_id=abc123" /var/log/php-fpm/www-error.log
You'll see the deadlock, the PHP fatal it triggered (if any), and the resulting 500 in one timeline.
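A minimal sketch of that mu-plugin (the constant name, header name, and log format are assumptions for illustration, not a Logystera convention):

<?php
// wp-content/mu-plugins/request-id.php
// One ID per request; random bytes, because wp_unique_id() restarts at 1
// on every request and would collide across requests.
define( 'MY_REQUEST_ID', 'req_' . bin2hex( random_bytes( 6 ) ) );

// Expose the ID as a response header so access-log 5xxs can be matched to it.
add_action( 'send_headers', function () {
    header( 'X-Request-Id: ' . MY_REQUEST_ID );
} );

// At shutdown, log the last DB error (if any) tagged with the request ID.
// $wpdb->last_error keeps only the most recent error: enough to correlate,
// not to count.
add_action( 'shutdown', function () {
    global $wpdb;
    if ( ! empty( $wpdb->last_error ) ) {
        error_log( sprintf( 'req_id=%s db.error %s', MY_REQUEST_ID, $wpdb->last_error ) );
    }
} );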
The grep-to-signal map for this section:
grep "Deadlock found" debug.log→db.error(MySQL 1213)grep "Lock wait timeout" debug.log→db.error(MySQL 1205)grep "Fatal error" php-fpm error logcorrelated by request ID →php.fataldownstream ofdb.errorgrep "perf.hook_timing"filtered by duration → long-running hooks producing the deadlock-prone transactionstailof nginx/apache access log filtered to 5xx →http.request5xx correlated by time
5.2 Root Causes
Root causes are covered inline in 5.3 Fix below, each paired with its signal shape and remedy.
5.3 Fix
Rank the fixes by what SHOW ENGINE INNODB STATUS actually shows.
Cause 1: Two writers updating the same row in different orders
Most common. Two cron jobs, two REST endpoints, or one cron plus one webhook update the same wp_options rows (often cron, active_plugins, or a _transient_* row) in opposite orders.
- Signal shape: db.error 1213, both transactions touching wp_options, both originating from do_action('init') or a plugin cron hook.
- Fix: Move heavy update_option() calls out of init. Use the object cache for transients (wp_cache_set), not the options table. Stop storing rotating data in wp_options — _transient_* for high-churn keys is an antipattern; use a custom table or Redis. A sketch of the swap follows below.
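A minimal sketch of that swap, assuming a persistent object cache drop-in (Redis or Memcached) is installed; the key and group names are illustrative. Worth knowing: once such a drop-in is active, WordPress routes transients to it automatically, which by itself removes the wp_options churn.

// Before: every write churns a wp_options row ('_transient_' . $key),
// a prime target for the row-lock collisions described above.
set_transient( 'my_plugin_excluded_uris', $uris, HOUR_IN_SECONDS );

// After: reads and writes hit the object cache and never touch wp_options.
wp_cache_set( 'excluded_uris', $uris, 'my-plugin', HOUR_IN_SECONDS );
$uris = wp_cache_get( 'excluded_uris', 'my-plugin' );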
Cause 2: Long-running transaction blocking short ones
A plugin opens a transaction, runs a long PHP loop, then commits. Other writes pile up behind it. They time out with 1205.
- Signal shape: db.error 1205 ("Lock wait timeout"), often paired with a perf.hook_timing signal where one hook is >30s. php.fatal may follow if PHP itself times out.
- Fix: Break the long transaction into batches. Replace START TRANSACTION; ...thousand updates...; COMMIT; with chunked commits every 100–500 rows. For imports, use WP-CLI with --batch-size, not a one-shot admin AJAX call. A chunked-commit sketch follows below.
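What chunked commits look like in plugin code. A minimal sketch: $rows, the batch size, and the column names are placeholders; each batch holds its row locks for milliseconds instead of the full import.

global $wpdb;
foreach ( array_chunk( $rows, 200 ) as $batch ) {
    $wpdb->query( 'START TRANSACTION' );
    foreach ( $batch as $row ) {
        $wpdb->update(
            $wpdb->postmeta,
            array( 'meta_value' => $row['value'] ), // SET
            array( 'meta_id'    => $row['id'] )     // WHERE
        );
    }
    $wpdb->query( 'COMMIT' ); // locks released here, every 200 rows
}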
Cause 3: Bulk operations during checkout
WooCommerce checkout, ACF saves, or membership-plugin role grants run multi-row updates inside an implicit transaction. A bulk product update from the admin running concurrently will collide with checkout writes on wp_postmeta.
- Signal shape: db.error 1213 with both transactions touching wp_postmeta or wp_woocommerce_order_itemmeta. Frequently correlates with http.request 5xx on /checkout/ or /wc-api/*.
- Fix: Schedule bulk admin operations off-hours. Use Action Scheduler (which serializes) instead of synchronous bulk processing. For high-traffic stores, enable WooCommerce's HPOS (High-Performance Order Storage) — it uses dedicated tables with better lock granularity. A queueing sketch follows below.
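Handing bulk work to Action Scheduler is a small change. A minimal sketch, assuming Action Scheduler is loaded (it ships with WooCommerce); the hook name and callback are illustrative:

// Each action later runs in its own short transaction, serialized through
// Action Scheduler's queue, so it no longer races checkout on wp_postmeta.
add_action( 'my_plugin_reprice_product', function ( $product_id ) {
    // single-product update goes here
} );

// Instead of looping over thousands of products synchronously:
foreach ( $product_ids as $product_id ) {
    as_enqueue_async_action( 'my_plugin_reprice_product', array( $product_id ), 'my-plugin' );
}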
Cause 4: Missing or wrong index causing range locks
Without an index, InnoDB locks ranges much wider than the rows you're touching. Two unrelated updates can collide on gap locks.
- Signal shape: db.error 1213 where the SQL has a WHERE clause without a usable index. EXPLAIN shows type: ALL or Using where.
- Fix: Run EXPLAIN on the failing SQL. Add the missing index. For wp_postmeta, the standard meta_key index is usually enough; if a plugin queries by meta_value, add a composite index on (meta_key, meta_value(191)). A migration sketch follows below.
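A one-time migration for that composite index might look like the sketch below. The index name and 191-character prefixes are assumptions (meta_value is LONGTEXT, so a prefix is mandatory, and 191 stays inside utf8mb4 key-length limits); verify the query plan with EXPLAIN before and after.

global $wpdb;
// Run once, e.g. from a WP-CLI command or an activation routine,
// never on every page load: ALTER TABLE on a large wp_postmeta
// takes real time and I/O.
$wpdb->query(
    "ALTER TABLE {$wpdb->postmeta}
       ADD INDEX meta_key_value (meta_key(191), meta_value(191))"
);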
Cause 5: innodb_lock_wait_timeout too aggressive
Default is 50s. On busy sites with legitimate long writes, this is too short.
- Signal shape: db.error 1205 in steady state, not 1213. Lock wait timeouts dominate; deadlocks are rare.
- Fix: Raising the timeout is a band-aid, not a fix. First confirm via SHOW ENGINE INNODB STATUS that the blocked transactions are legitimate. Only then consider raising to 120s. If you're tempted to raise above that, you have a long-transaction problem (see Cause 2), not a timeout problem. A per-session sketch follows below.
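If one long-running job legitimately needs more patience, scope the raise to that job's database session instead of the whole server. A minimal sketch (the hook name is illustrative):

add_action( 'my_plugin_nightly_import', function () {
    global $wpdb;
    // Applies only to this PHP process's MySQL session, not the global default.
    $wpdb->query( 'SET SESSION innodb_lock_wait_timeout = 120' );
    // ... long import work ...
} );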
5.4 Verify
You verify against the absence of db.error 1213/1205 signals under representative load.
- Reproduce the original load pattern. If the deadlocks happened during checkout bursts, run a synthetic burst (e.g. siege against a checkout endpoint with valid sessions) or wait through the same time-of-day window.
- Watch the signal disappear in real time:
tail -f /var/www/html/wp-content/debug.log | grep -E "Deadlock found|Lock wait timeout"
Healthy: zero matches across a 30-minute window covering a peak traffic period.
- Confirm at the database layer:
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';"
Innodb_row_lock_current_waits should be 0 most of the time. Innodb_row_lock_time_avg should be under a few milliseconds.
- Check the deadlock counter is no longer climbing:
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 1 "DEADLOCK"
The LATEST DETECTED DEADLOCK block should be older than your fix deployment.
- Confirm correlated signals are quiet: no new php.fatal with the request IDs of completed transactions, no http.request 5xx on the previously failing endpoints, and perf.hook_timing for the suspect hook back to baseline.
If all five conditions hold for 30 minutes under normal-to-peak traffic, the deadlock pair is resolved. If db.error 1213 reappears at lower frequency, you have a second deadlock pair — re-run section 5 against the new SHOW ENGINE INNODB STATUS output.
6. How to Catch This Early
Fixing a deadlock is straightforward once you know the cause. The hard part is knowing it happened at all.
This issue surfaces as db.error.
Without instrumentation, the symptom you'll notice is not the deadlock — it's the downstream consequence. A customer emails you. An order reconciliation fails three weeks later. An admin says "I saved that, why is it blank?" By then the SHOW ENGINE INNODB STATUS buffer has rolled over and the evidence is gone.
WordPress does not alert on db.error. Neither does your hosting panel. Neither does your uptime monitor. The signal exists in two places: a debug log most production sites don't write, and a MySQL counter nobody scrapes.
Logystera detects and alerts on db.error (MySQL 1213/1205) at the moment it happens, with the failing SQL, the originating hook, the request ID, and the correlated php.fatal / http.request 5xx in the same timeline. You see deadlock storms within seconds of their first occurrence, with the transaction pair already extracted — not three weeks later, in a customer complaint.
That is the difference between treating deadlocks as a recurring mystery and treating them as a tractable engineering problem.
7. Related Silent Failures
- php.fatal after db.error — when wpdb returns false and downstream PHP dereferences it as an object. Same root cause, different surface signal.
- http.request 5xx with no PHP error — the deadlock was caught and logged but the failing endpoint returned a 500 with an empty body. Correlated by request ID.
- perf.hook_timing spikes on save_post, woocommerce_checkout_order_processed, wp_insert_user — long DB-bound hooks are deadlock vectors. Watch them as a leading indicator.
- wp.cron missed_schedule — when a cron job's transaction is rolled back, the schedule may not advance, producing repeated retries of the same failing job.
- wp.state_change integrity drift — partial writes from rolled-back transactions show up as inconsistent option values, half-updated user roles, or orphaned post meta. Often the first non-log evidence that deadlocks have been silently dropping writes for weeks.
See what's actually happening in your WordPress system
Connect your site. Logystera starts monitoring within minutes.