Guide
WordPress database disk full — detecting the silent failure before queries start failing
1. Problem
Your WordPress site is half-broken. The homepage still loads, but WP Admin throws a white error box, comments fail to save, WooCommerce checkouts stall at "Processing order…", and contact form entries vanish. The PHP error log is full of lines like:
WordPress database error: Got error 28 from storage engine for query INSERT INTO wp_options ...
WordPress database error: The table 'wp_options' is full for query INSERT INTO ...
MySQL error 1021: Disk full (/var/lib/mysql/...); waiting for someone to free some space...
The user-facing symptom is "wordpress database disk full error" — and that's exactly what users are Googling at 3 a.m. while the order queue silently rots. Some pages load from the page cache. Some 500 because they hit paths that write transients. Authenticated requests fail differently depending on whether the query is a SELECT (still works) or an INSERT/UPDATE (rejected).
This usually surfaces as a db_disk_full signal in your PHP error log. The Logystera WordPress plugin's database event subscriber explicitly catches MySQL error 28, 1021, and 1041 — exactly the disk-full and tablespace-full conditions — and fires db_disk_full independently of wpdb->last_error. But unless you're looking, you'll only see it after the order queue has been dead for an hour.
2. Impact
A WordPress database that can't INSERT is a silent data shredder. Reads still work, so the homepage and CDN-cached output look healthy to an uptime monitor. But every write path is dead: new orders, new comments, form submissions, user registrations, password resets, transient cache writes, and update_option() calls all fail.
For a WooCommerce store doing $5k/day, a four-hour outage during peak traffic is roughly $800–$1,200 in failed checkouts — and the worse number is the one you find a week later: customers whose cards were charged via Stripe but whose wc_order row never persisted, because Stripe's webhook tried to INSERT into a full disk. That's chargeback territory.
For a publisher running a membership site, every paywall token issued during the outage is invalid by the time the disk clears, because the wp_options row tying the token to the user was never written. Subscribers see "your link has expired" and bounce.
The quieter cost: WordPress core suppresses most database write errors in production. wpdb->insert() returns false, but if the calling plugin doesn't check the return value (most don't), the request 200s with no visible error. wp_db_errors_total climbs while http.request stays clean — exactly the failure pattern silent-failure monitoring exists for.
3. Why It’s Hard to Spot
WordPress's failure mode here is uniquely deceptive. The disk doesn't fill up all at once — it creeps. wp_options accumulates orphaned transients, wp_postmeta collects unused meta from deleted posts, wp_comments fills with spam, and one day a single INSERT exceeds available bytes in the InnoDB tablespace.
Three things compound to make this invisible:
- Reads keep working. MySQL error 28 fires only on writes. The homepage cache stays warm, archive pages render from existing rows, an HTTP-200 uptime check is happy. Your status page is green.
- WordPress swallows write failures. wpdb->insert() returns false instead of throwing. Plugins that don't check the return value continue silently. The error appears in the PHP error log if WP_DEBUG_LOG is on — most production sites have it off.
- Hosting dashboards measure the wrong disk. cPanel, Plesk, and managed-WP dashboards usually report total filesystem usage. They miss the case where MySQL's data partition is full but the document root has 50 GB free, or where the InnoDB tablespace hit innodb_data_file_path autoextend limits before the OS disk filled.
The result: visitors load cached pages, monitors see 200s, but every form submission and checkout fails in a way the user can't diagnose. The truth lives in PHP's error log and MySQL's error log, neither of which a typical hosting dashboard surfaces.
4. Cause
WordPress's wpdb class wraps every query and inspects the MySQL driver's error code on completion. When MySQL refuses a write because the storage layer is full, it returns one of three error classes:
- Error 28 — No space left on device (OS-level: the partition holding /var/lib/mysql or the binlog directory has zero free bytes).
- Error 1021 — Disk full; waiting for someone to free some space... (InnoDB-level: the tablespace can't extend, often because innodb_data_file_path has a fixed cap).
- Error 1041 — Out of memory; restart server and try again (correlated: the disk-full state often forces InnoDB to thrash).
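As a quick reference, the three classes above map to distinct fix paths. A small helper can make the mapping explicit (illustrative only — not part of WordPress or the Logystera plugin):

```shell
# Map a MySQL error number to the fix path described in the list above.
classify_disk_full() {
  case "$1" in
    28)   echo "os-level: free space on the partition holding the datadir or binlogs" ;;
    1021) echo "tablespace: raise the innodb_data_file_path cap or reclaim table space" ;;
    1041) echo "correlated: usually accompanies 28/1021 while InnoDB thrashes" ;;
    *)    echo "not a disk-full class" ;;
  esac
}

classify_disk_full 28    # OS-level fix path
classify_disk_full 1021  # tablespace fix path
```

The split matters because the remedies don't overlap: error 28 is solved at the filesystem level, error 1021 in MySQL's configuration.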
The Logystera WordPress plugin registers a database error subscriber. When the captured last_error matches /error 28|disk full|table.*is full|no space left/i, it emits a db_disk_full signal directly to the gateway — independent of WordPress's own logging — with the MySQL error code, the offending table name (extracted from the query), and the operation type (INSERT/UPDATE/ALTER).
This signal bypasses every layer that normally hides the failure: WordPress's silent false return, the hosting dashboard's coarse health check, and the page cache still serving 200s. db_disk_full exists precisely to surface a failure mode that has no other visible symptom until the support tickets arrive.
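The matching described above can be sketched in shell. The regex is the one quoted in this section; the operation and table-name extraction is a simplified stand-in for whatever parsing the plugin actually does, and the sample last_error string is made up for illustration:

```shell
# Hypothetical captured last_error (sample string, for illustration only)
last_error='WordPress database error: Got error 28 from storage engine for query INSERT INTO wp_options (option_name) VALUES (...)'

# The disk-full pattern quoted in this section
if printf '%s\n' "$last_error" | grep -qiE 'error 28|disk full|table.*is full|no space left'; then
  # Simplified extraction of the operation and the offending table
  op=$(printf '%s\n' "$last_error" | grep -oE 'INSERT|UPDATE|ALTER' | head -n 1)
  table=$(printf '%s\n' "$last_error" | sed -nE 's/.*(INTO|UPDATE|TABLE) `?([A-Za-z0-9_]+).*/\2/p')
  echo "db_disk_full op=$op table=$table"
fi
```

Run against the sample string, this prints db_disk_full op=INSERT table=wp_options — the same three fields the section says the signal carries.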
5. Solution
5.1 Diagnose (logs first)
Confirm a write is actually failing, identify which disk-full class (OS vs tablespace), and pinpoint the table that ate the space.
1. PHP error log — confirm WordPress saw the disk-full error.
tail -n 1000 /var/log/php-fpm/error.log /var/www/wp-content/debug.log 2>/dev/null \
| grep -iE "error 28|disk full|table.*is full|no space left|errno: 1021|errno: 1041"
The line you want is the literal string WordPress writes when wpdb catches the error — what produces db_disk_full in the Logystera plugin:
WordPress database error Got error 28 from storage engine for query
INSERT INTO `wp_options` (`option_name`, `option_value`, `autoload`) VALUES (...)
Error 28 means OS-level disk full. Error 1021 or "The table is full" means the InnoDB tablespace is the limit. The fix differs, so note which class you're seeing.
2. MySQL error log — confirm the daemon's view.
grep -iE "disk full|innodb.*full|out of memory|os error 28" \
/var/log/mysql/error.log /var/log/mysqld.log 2>/dev/null | tail -n 20
If Logystera shows wp_db_errors_total spiking but MySQL's log is empty, check SHOW VARIABLES LIKE 'log_error'; — you're tailing the wrong file.
3. Check both disk layers.
df -h /var/lib/mysql
mysql -e "SELECT table_schema AS db,
ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables GROUP BY table_schema
ORDER BY size_mb DESC LIMIT 10;"
If df shows 100%, it's an OS-level disk-full (error 28). If the partition has free space but one table looks suspiciously large — usually wp_options, wp_postmeta, wp_comments, or wp_actionscheduler_logs — it's tablespace bloat producing error 1021.
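That df verdict reduces to a one-line threshold. A sketch with a hard-coded percentage (in production, derive it from df itself, e.g. df --output=pcent /var/lib/mysql):

```shell
# Turn the datadir's disk-use percentage into the OS-vs-tablespace verdict.
# The percentage is hard-coded here for illustration.
disk_full_layer() {
  if [ "$1" -ge 100 ]; then
    echo "os-level disk full (expect error 28)"
  else
    echo "partition has space: suspect tablespace bloat (expect error 1021)"
  fi
}

disk_full_layer 100
disk_full_layer 72
```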
4. Find the bloated table — the smoking gun.
The 90% case is wp_options autoload bloat. One misbehaving plugin writes thousands of _transient_* rows with autoload = yes, and every page load reads all of them.
# Top 20 largest options rows, sorted by size
mysql wordpress_db -e "
SELECT option_name, LENGTH(option_value) AS bytes, autoload
FROM wp_options ORDER BY bytes DESC LIMIT 20;"
# Total autoloaded data — should be under 1 MB on a healthy site
mysql wordpress_db -e "
SELECT ROUND(SUM(LENGTH(option_value)) / 1024 / 1024, 2) AS autoload_mb
FROM wp_options WHERE autoload = 'yes';"
A healthy site has under 1 MB of autoloaded wp_options data. If you see 50 MB+ of autoload weight or 100,000+ transient rows, you've found the cause.
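The 1 MB rule of thumb can be expressed as a small check on the autoload_mb figure the query returns. The sample values are made up:

```shell
# Apply the <1 MB rule of thumb to the autoload_mb figure from the query above.
autoload_verdict() {
  awk -v mb="$1" 'BEGIN { if (mb + 0 > 1) print "autoload bloat: audit wp_options"; else print "autoload weight healthy" }'
}

autoload_verdict 57.3   # bloated example
autoload_verdict 0.4    # healthy example
```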
5. Time-correlate with what changed.
db_disk_full rarely appears out of nowhere. It clusters around: a recent plugin install, a stuck wp-cron that stopped running garbage collection, a backup plugin that failed to clean up temp tables, or an importer that added millions of postmeta rows.
# What plugins were recently activated or updated?
ls -lt /var/www/wp/wp-content/plugins/ | head -20
# When did wp-cron last run? Stale cron = stale transients = bloat
wp cron event list --path=/var/www/wp | head -20
If wp_actionscheduler_logs has tens of millions of rows, the Action Scheduler garbage collector hasn't run — usually because wp-cron.php is broken or disabled. That correlation turns "the disk filled up" into "the disk filled up because Action Scheduler stopped pruning at 02:14, immediately after WP_CRON was disabled in wp-config.php."
5.2 Root Causes
Each cause maps to a specific signal pattern and a specific fix. Prioritized by frequency on real WordPress sites.
- wp_options autoload bloat — A plugin writes large rows with autoload = yes. Every page load reads them, and the table grows without bound. Produces db_disk_full with INSERT INTO wp_options in the query payload, and wp_db_query_count climbs because every request scans more autoload rows. MySQL error log shows error 1021 first (tablespace), error 28 second (OS).
- Action Scheduler / WooCommerce log bloat — wp_actionscheduler_logs and wp_actionscheduler_actions accumulate millions of rows when the scheduler runs but the GC doesn't. Produces db_disk_full with a wp_actionscheduler_* table name in the offending query.
- wp_postmeta orphans from deleted posts — Postmeta rows aren't cascade-deleted when posts are removed via direct SQL. Common after a content migration. Produces db_disk_full on INSERT INTO wp_postmeta, with a row count wildly out of proportion to wp_posts.
- Spam comments and wp_commentmeta — Akismet catches them but doesn't always purge them. Produces db_disk_full on INSERT INTO wp_comments with a tablespace 1021 error.
- Binlog accumulation — MySQL log_bin files were never rotated. The binlog directory fills the partition before table data does. Produces db_disk_full with error 28 (OS-level), and df shows /var/log/mysql/ near 100%.
- InnoDB tablespace fixed-size cap — innodb_data_file_path = ibdata1:10M:autoextend:max:10G hit its 10G cap. Produces db_disk_full with error 1021 while the OS disk has plenty of space — the most confusing case.
5.3 Fix
Match the fix to what db_disk_full told you. Don't guess.
Cause A — wp_options autoload bloat: demote bloated options from autoload, purge stale transients.
-- Demote bloated option (keeps the data)
UPDATE wp_options SET autoload = 'no' WHERE option_name = 'plugin_xyz_huge_cache';
-- Purge stale transients (safe — they regenerate)
DELETE FROM wp_options WHERE option_name LIKE '\_transient\_%';
DELETE FROM wp_options WHERE option_name LIKE '\_site\_transient\_%';
OPTIMIZE TABLE wp_options;
Without OPTIMIZE, the rows are gone but the tablespace file stays the same size.
Cause B — Action Scheduler bloat: prune the log table directly. Don't run this mid-business-hours on a WooCommerce store; it locks the table.
DELETE FROM wp_actionscheduler_logs
WHERE log_date_gmt < DATE_SUB(NOW(), INTERVAL 14 DAY);
DELETE FROM wp_actionscheduler_actions
WHERE status IN ('complete','failed','canceled')
AND last_attempt_gmt < DATE_SUB(NOW(), INTERVAL 14 DAY);
OPTIMIZE TABLE wp_actionscheduler_logs;
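One way to soften the lock warning above on very large tables is to delete in bounded batches and repeat until a batch affects no rows, so each statement holds locks only briefly. A minimal sketch that just generates the statements (pipe them to mysql wordpress_db in production, re-running until a batch deletes 0 rows):

```shell
# Emit N batched DELETEs (LIMIT per statement) instead of one giant DELETE.
batch_prune_sql() {
  for _ in $(seq "$1"); do
    printf 'DELETE FROM wp_actionscheduler_logs WHERE log_date_gmt < DATE_SUB(NOW(), INTERVAL 14 DAY) LIMIT 10000;\n'
  done
}

batch_prune_sql 3
```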
Cause C — Post revisions and orphan postmeta: prune revisions and cascade-delete orphans.
DELETE FROM wp_posts WHERE post_type = 'revision'
AND post_modified < DATE_SUB(NOW(), INTERVAL 90 DAY);
DELETE pm FROM wp_postmeta pm
LEFT JOIN wp_posts p ON pm.post_id = p.ID WHERE p.ID IS NULL;
Add define('WP_POST_REVISIONS', 5); to wp-config.php to cap future revision count.
Cause D — Spam comments:
DELETE FROM wp_comments WHERE comment_approved IN ('spam','trash');
DELETE cm FROM wp_commentmeta cm
LEFT JOIN wp_comments c ON cm.comment_id = c.comment_ID WHERE c.comment_ID IS NULL;
Cause E — Binlog accumulation:
PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);
-- Or set binlog_expire_logs_seconds = 604800 (7 days) in my.cnf for permanent rotation
-- (MySQL 8.0+; older 5.7 servers use the deprecated expire_logs_days = 7)
Cause F — InnoDB tablespace cap hit: raise the ceiling in my.cnf (innodb_data_file_path = ibdata1:10M:autoextend:max:50G). Better long-term: switch to innodb_file_per_table = ON so each table grows in its own .ibd file — note this only affects tables created or rebuilt (e.g. via OPTIMIZE TABLE) after the change, and the shared ibdata1 file never shrinks on its own. Requires a MySQL restart.
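Put together, a my.cnf fragment covering Causes E and F might look like the following. Values are illustrative, and binlog_expire_logs_seconds applies to MySQL 8.0+ (5.7 uses expire_logs_days):

```ini
[mysqld]
# Cause E: rotate binary logs automatically (604800 s = 7 days)
binlog_expire_logs_seconds = 604800
# Cause F: raise the shared-tablespace ceiling
innodb_data_file_path = ibdata1:10M:autoextend:max:50G
# Long-term: per-table files (applies only to tables created or rebuilt afterwards)
innodb_file_per_table = ON
```

Both settings take effect after a MySQL restart.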
5.4 Verify
You're looking for two things to hold simultaneously: db_disk_full events stop appearing, and wp_db_errors_total returns to baseline.
# Should show no new matches for at least 30 minutes under normal traffic:
grep -iE "error 28|disk full|errno: 1021" /var/log/php-fpm/error.log | tail -n 5
# A healthy site has minimal autoload weight (under 1 MB):
mysql wordpress_db -e "
SELECT ROUND(SUM(LENGTH(option_value))/1024/1024, 2) AS autoload_mb
FROM wp_options WHERE autoload = 'yes';"
# Test a real write path end-to-end:
wp post create --post_title="diagnostic" --post_status=draft --path=/var/www/wp
In Logystera's entity view, healthy state for a WordPress site looks like: zero db_disk_full events for 60 minutes, wp_db_errors_total at its normal 0–1/hour baseline (occasional duplicate-key warnings on plugin installs are expected), and wp_request_peak_memory_mb not climbing — which would suggest queries are getting heavier as the autoload payload grows back.
The baseline matters: a healthy production WordPress site emits 0 db_disk_full per day. Unlike PHP deprecation warnings, this signal has no expected baseline noise. Any non-zero rate over a 5-minute window is anomalous. If db_disk_full stays silent for an hour under your normal traffic peak — including the next wp-cron window — the issue is resolved.
If db_disk_full reappears within a day, you addressed the symptom (deleted rows) but not the cause (whatever plugin keeps writing them). Go back to step 4 of §5.1 and find the offender by option_name prefix.
6. How to Catch This Early
Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.
This issue surfaces as db_disk_full.
Everything you just did manually — grep for "error 28," cross-reference df and information_schema.tables, find the bloated wp_options row, time-correlate with the last plugin install — Logystera does automatically. The WordPress plugin's database error subscriber emits db_disk_full to the gateway out-of-band the instant a write fails with error 28, 1021, or 1041 — independent of WP_DEBUG_LOG, which most production sites have disabled.
[Screenshot: Logystera dashboard — db_disk_full rate, last 24h. First event at 23:47 UTC, immediately after Action Scheduler stopped pruning the wp_actionscheduler_logs table.]
Right now db_disk_full is a raw signal — it's emitted by the plugin but no metric or rule wraps it yet. That means most users never see it on a dashboard, even though their site has been firing it. Adding a metric definition that maps event_type=db_disk_full to wp_db_disk_full_total, plus a rule with threshold: 1 event in 60 seconds and severity: critical, takes the signal from "hidden in the agent" to "alerts in 60 seconds" — the canonical fix for a detection gap that costs WooCommerce stores real revenue.
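Assuming a YAML-style configuration, the metric-plus-rule pair described above might be sketched like this. The field names are illustrative, built only from the values named in this section — they are not Logystera's documented schema:

```yaml
# Hypothetical sketch — not Logystera's documented schema.
metrics:
  - name: wp_db_disk_full_total
    source:
      event_type: db_disk_full   # the raw signal emitted by the WordPress plugin

rules:
  - name: wordpress-db-disk-full
    metric: wp_db_disk_full_total
    threshold: 1      # any event is anomalous; this signal has no baseline noise
    window: 60s
    severity: critical
```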
[Screenshot: Logystera alert — WordPress database disk full. Critical alert fires within 60s of the first db_disk_full event, including the MySQL error code (28 vs 1021) and the offending table.]
The alert payload includes the timestamp, the MySQL error number (so you know "OS-level" vs "tablespace-level" before you open a terminal), the affected entity name, the offending table extracted from the query, and the operation (INSERT/UPDATE/ALTER). That's enough to pick the right root cause from §5.2, from the alert body alone.
Logystera turns this from a Monday-morning "why didn't my orders go through?" support ticket into a 60-second notification with the MySQL error code that proves it.
7. Related Silent Failures
- db.deadlock — a different DB failure mode that also surfaces only in writes. Correlated with db_disk_full because both spike during heavy-write windows (imports, batch jobs, traffic peaks).
- db.connection_failed — DB unreachable rather than full. Same blank-page outcome, completely different fix path (network/auth vs storage).
- wp.cron.missed_schedule — upstream cause of half of all db_disk_full incidents: when wp-cron stops, garbage collectors stop, and tables grow without bound.
- wp_request_peak_memory_mb climbing — a leading indicator. As wp_options autoload bloats, every request loads more data into PHP memory; peak memory creeps up days before the disk fills.
- wpdb.last_error silently swallowed — the broader class of WordPress-specific silent DB failures. db_disk_full is the loudest member; duplicate-key INSERTs, foreign-key violations, and lock-wait timeouts hide just as well behind wpdb->insert() returning false.
See what's actually happening in your WordPress system
Connect your site. Logystera starts monitoring within minutes.