Guide
WordPress memory_limit exhausted — how to detect it before it crashes the site
1. Problem
Your WordPress admin shows a half-rendered page. The frontend either prints a blank white screen or this exact line:
Fatal error: Allowed memory size of 268435456 bytes exhausted
(tried to allocate 20480 bytes) in /var/www/html/wp-includes/...
Sometimes it is even more confusing: the homepage loads fine, but /wp-admin/plugins.php dies. Or the REST API returns 500 only on POST. Or the editor saves a draft and then a "There has been a critical error" banner appears, with a generic "check your site health" link that explains nothing.
You searched "wordpress memory exhausted error", "wordpress allowed memory size", or "wordpress memory_limit increase". You are in the right place. This guide will show you how to spot the problem before it becomes a fatal — by treating PHP memory pressure as a signal, not an accident.
The thing to know up front: by the time PHP prints "Allowed memory size exhausted", the request is already dead. The crash is the late symptom. Real detection happens earlier, in the request that allocated 92% of memory_limit and survived. Logs see that. WordPress does not.
2. Impact
A single memory exhaustion can take the entire site down for the duration of a request, and a recurring one will brick a workflow:
- Checkout dies on cart totals over a certain size. WooCommerce hits the limit recalculating coupons or shipping, and orders silently fail.
- Backups stop completing. UpdraftPlus, BackWPup, and Duplicator routinely consume 200–400 MB; a 256 MB limit means the last full backup is older than you think.
- Imports and migrations corrupt. WP All Import, WPML translation sync, or a media regeneration job will exhaust mid-batch and leave the database half-updated.
- wp-cron stalls cascade. A scheduled task crashes, WordPress retries it on the next page load, the next page load also crashes, and email, post publishing, and license checks all back up behind it.
- Editors lose work. The Gutenberg autosave POST hits memory_limit and returns 500. The user sees "Updating failed" and assumes the network glitched.
The financial story is simple: every memory-exhausted request is a 500. Every 500 on a logged-in admin is a productivity hit. Every 500 on /checkout/ is lost revenue. And because PHP-FPM recycles the worker after a fatal, the next request on that worker is fine — which is exactly why it feels random.
3. Why It’s Hard to Spot
WordPress hides this failure mode for three reasons.
First, the PHP-FPM worker dies and is replaced. The fatal lives in php-fpm.log for one line, sometimes truncated. The next request on the same pool gets a fresh worker with full memory headroom and runs fine. There is no banner, no admin notice, no widget. Site Health shows green.
Second, uptime monitors cannot see it. Pingdom, UptimeRobot, and StatusCake hit / over HTTPS. The homepage is cached, lightweight, and almost never the URI that exhausts memory. The crashing URI is /wp-admin/post.php?action=edit, /?wc-ajax=update_order_review, or /wp-cron.php. From outside, the site is "up" 100% of the time while the editor is unusable.
Third, WordPress itself rewrites the symptom. A fatal during admin renders the "There has been a critical error on this website" page introduced in WP 5.2. The actual error message is hidden behind the recovery email link, which is often misconfigured or sent to a mailbox no one reads. The site owner sees "critical error", refreshes, gets a working page, and assumes it self-healed.
The result: memory exhaustion is one of the most common WordPress crashes, and it is also one of the least visible. The only place it leaves a clean trail is the logs.
4. Cause
PHP allocates memory inside a single request up to ini_get('memory_limit'). WordPress layers its own concept on top with WP_MEMORY_LIMIT (frontend) and WP_MAX_MEMORY_LIMIT (admin), which can raise — but never lower — the PHP value at runtime via ini_set. When usage approaches the ceiling, PHP does not warn you. It keeps allocating until the next emalloc() call cannot satisfy the request, then raises an E_ERROR and the process dies.
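PHP parses that ini value from shorthand notation ("256M", "1G", or a bare byte count), and WordPress normalizes it the same way internally in wp_convert_hr_to_bytes(). A minimal shell sketch of the conversion to megabytes, useful when comparing values pulled from different configs:

```shell
# Convert a memory_limit shorthand value ("256M", "1G", bare bytes) to MB.
# A rough sketch of how PHP interprets the ini value; "K" suffixes and
# "-1" (unlimited) are handled, fractional values are not.
limit_to_mb() {
  local v=$1
  case $v in
    -1)    echo "unlimited" ;;
    *G|*g) echo $(( ${v%?} * 1024 )) ;;
    *M|*m) echo "${v%?}" ;;
    *K|*k) echo $(( ${v%?} / 1024 )) ;;
    *)     echo $(( v / 1024 / 1024 )) ;;   # bare byte count
  esac
}
limit_to_mb 268435456   # -> 256
limit_to_mb 512M        # -> 512
```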
Logystera treats the approach to that ceiling as its own event. The Logystera WordPress plugin samples memory_get_peak_usage(true) per request and emits a memory_near_limit signal whenever peak usage crosses a threshold (default 80% of memory_limit). The signal carries:
- peak_mb — the peak resident usage for this request
- limit_mb — the effective memory_limit the request ran under
- ratio — peak_mb / limit_mb
- request_uri, request_method, is_admin, is_cron
- active_plugins_hash — so you can correlate spikes with plugin changes
memory_near_limit is the leading indicator. php.fatal with the message Allowed memory size of N bytes exhausted is the trailing indicator — the moment a request actually crossed the line.
There is also wp.environment, an hourly snapshot signal that captures both memory_limit_mb (what PHP sees) and wp_memory_limit_mb (what WordPress thinks it has). The two should match within tolerance. When they don't — for instance, hosting cap is 256 MB but wp-config.php requests 512 MB — every request runs under the lower number, and memory_near_limit will start firing on workloads that worked last week.
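A quick way to spot that mismatch in the raw signal stream. The log path and payload field names here are assumptions based on the examples elsewhere in this guide; adjust both to your install:

```shell
# Flag wp.environment snapshots where the PHP cap and the WordPress value
# disagree. Log path and JSON shape are assumptions — adjust to your setup.
check='select(.event_type == "wp.environment"
              and .payload.memory_limit_mb != .payload.wp_memory_limit_mb)
       | "\(.payload.memory_limit_mb)MB (PHP) vs \(.payload.wp_memory_limit_mb)MB (wp-config)"'
# On a live host:
#   jq -r "$check" /var/log/logystera/wp-signals.log
# Demo against a sample snapshot line:
printf '%s\n' \
  '{"event_type":"wp.environment","payload":{"memory_limit_mb":256,"wp_memory_limit_mb":512}}' \
  | jq -r "$check"
# -> 256MB (PHP) vs 512MB (wp-config)
```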
This is the failure mechanism, exactly: memory_near_limit rising → memory_near_limit clustering on specific URIs → one of those URIs tips over → php.fatal "Allowed memory size exhausted" → 500 to the user.
5. Solution
5.1 Diagnose (logs first)
Three log surfaces matter. Read them in this order.
a) PHP-FPM error log. This is where the actual fatal lands.
Common paths:
/var/log/php-fpm/error.log
/var/log/php8.2-fpm.log
/var/log/php/error.log
Find every memory exhaustion in the last hour:
grep -E "Allowed memory size of [0-9]+ bytes exhausted" /var/log/php-fpm/error.log \
| tail -200
Each match is a confirmed php.fatal signal with subtype memory exhaustion. The line tells you the limit (268435456 bytes = 256 MB) and the file/line where the last allocation was attempted — usually deep inside a plugin or wp-includes/class-wp-hook.php.
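Converting the byte count in the fatal line back to the php.ini value is a one-liner; the sample line below follows the shape shown in section 1:

```shell
# Pull the limit out of a fatal line and convert bytes to MB.
line='PHP Fatal error: Allowed memory size of 268435456 bytes exhausted'
bytes=$(echo "$line" | grep -oE 'memory size of [0-9]+' | grep -oE '[0-9]+')
echo "$(( bytes / 1024 / 1024 )) MB"   # -> 256 MB
```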
To group by URI you need PHP-FPM's access log (access.log directive in the pool config). If you have it, correlate timestamps:
# php-fpm error-log timestamps usually look like "[01-Jan-2024 12:00:01]";
# strip the brackets before matching against the access log. Exact formats
# vary with your log configuration — adjust the field handling to match.
grep "Allowed memory size" /var/log/php-fpm/error.log \
  | awk '{gsub(/[\[\]]/, ""); print $1, $2}' \
  | while read -r d t; do
      grep -F "$d $t" /var/log/php-fpm/access.log
    done
b) Web server log. Memory fatals manifest as 500s. Find them:
# In nginx "combined" log format, field 9 is the status code and field 7
# the request URI; matching on the field avoids false hits on byte counts
# that happen to contain "500".
awk '$9 == 500 {print $7}' /var/log/nginx/access.log \
  | sort | uniq -c | sort -rn | head -20
The top entries are your candidate URIs. If /wp-admin/admin-ajax.php?action=heartbeat is on the list, your editor is dying. If /wp-cron.php is on it, your scheduled tasks are crashing. These are the URIs that produced php.fatal — and the same URIs are the ones that emit memory_near_limit upstream.
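To confirm that overlap directly, intersect the two URI sets. This is a sketch: the log paths, the combined-format field positions, and the signal JSON shape are all assumptions to adjust for your setup.

```shell
# List URIs that both returned 500 (nginx) and emitted memory_near_limit
# (signal stream). Field positions assume nginx "combined" log format.
crash_overlap() {   # usage: crash_overlap <nginx_access_log> <signals_log>
  comm -12 \
    <(awk '$9 == 500 {print $7}' "$1" | sort -u) \
    <(jq -r 'select(.event_type == "memory_near_limit")
             | .payload.request_uri' "$2" | sort -u)
}
# crash_overlap /var/log/nginx/access.log /var/log/logystera/wp-signals.log
```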
c) Application-side, before the fatal. Most memory crashes are not sudden. The same request uses 220 MB on Monday and 245 MB on Friday, creeping toward a 256 MB limit. Standard PHP logs cannot see that — the request succeeded, so nothing got logged.
The Logystera WordPress plugin closes that gap. With it installed, every request that crosses the threshold emits:
event_type=memory_near_limit
peak_mb=242.0 limit_mb=256 ratio=0.945
request_uri=/wp-admin/admin-ajax.php
action=woocommerce_update_order_review
active_plugins_hash=8f41c2…
That is the signal you actually want. It fires on the request before the one that crashes, and it carries the URI, the action, and the plugin set. To filter to just the worst offenders in Logystera (or grep on a raw stream):
grep '"event_type":"memory_near_limit"' /var/log/logystera/wp-signals.log \
| jq -r 'select(.payload.ratio > 0.9) | "\(.payload.peak_mb)MB \(.payload.request_uri)"' \
| sort | uniq -c | sort -rn | head
If the same URI shows up repeatedly with ratio > 0.9, that URI is going to start producing php.fatal "Allowed memory size exhausted" within hours or days.
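The same filter can be turned into a crude alert: count high-ratio events per URI and flag anything over a threshold. The function name, threshold, and paths here are assumptions for illustration, not part of the Logystera plugin.

```shell
# Print an ALERT line for each URI with at least <min_count>
# memory_near_limit events above ratio 0.9. A sketch, not plugin API.
near_limit_alerts() {   # usage: near_limit_alerts <signals_log> <min_count>
  jq -r 'select(.event_type == "memory_near_limit" and .payload.ratio > 0.9)
         | .payload.request_uri' "$1" \
    | sort | uniq -c \
    | awk -v min="$2" '$1 >= min {print "ALERT: " $2 " (" $1 " events)"}'
}
# near_limit_alerts /var/log/logystera/wp-signals.log 5
```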
Diagnosis-to-signal mapping for this section:
- grep "Allowed memory size" /var/log/php-fpm/error.log → produces php.fatal (memory exhaustion subtype)
- 500s in /var/log/nginx/access.log → URIs that crashed; the same URIs appear in upstream memory_near_limit
- Plugin signal stream filtered on event_type=memory_near_limit → produces memory_near_limit, the leading indicator
- wp.environment snapshot mismatch (memory_limit_mb != wp_memory_limit_mb) → explains why a previously-fine workload now spikes
5.2 Root Causes
(see root causes inline in 5.3 Fix)
5.3 Fix
Fixes prioritized by frequency in production. Each cause maps back to which signal it produces.
1. A specific plugin or workflow leaks memory on a specific URI. This is the most common cause, by a wide margin. A page builder rendering a complex layout, a security plugin scanning on every admin request, or a translation plugin walking a large term tree.
- Signal trail: memory_near_limit clustering on one request_uri, php.fatal eventually firing on the same URI, active_plugins_hash stable across the cluster.
- Fix: identify the URI from the signal, disable plugins one by one starting with the most recently updated, retest. Confirm the active count with wp plugin list --status=active --format=csv | wc -l before and after.
2. memory_limit is genuinely too low for the workload. WooCommerce stores, multilingual sites, and sites with heavy import/export plugins routinely need 384–512 MB.
- Signal trail: memory_near_limit distributed across many URIs at moderate ratio (0.8–0.9), occasional php.fatal on the heaviest endpoints, wp.environment shows memory_limit_mb=256 or lower.
- Fix: raise PHP memory_limit in php.ini or the FPM pool config (php_admin_value[memory_limit] = 512M), reload PHP-FPM, and confirm via wp eval 'echo ini_get("memory_limit");'. Then raise WP_MEMORY_LIMIT in wp-config.php to match.
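Assuming a Debian-style PHP 8.2 layout (paths and version vary by host), the two settings that must move together look roughly like this:

```ini
; /etc/php/8.2/fpm/pool.d/www.conf — FPM pool config, path varies by host.
; php_admin_value cannot be overridden later by ini_set() at runtime.
php_admin_value[memory_limit] = 512M

; And in wp-config.php (PHP, shown as a comment here for completeness):
;   define('WP_MEMORY_LIMIT', '512M');
;   define('WP_MAX_MEMORY_LIMIT', '512M');
```

Reload PHP-FPM after the pool change; ini edits do not apply to already-running workers.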
3. wp-config.php requests more than the host allows. define('WP_MEMORY_LIMIT', '512M'); is silently capped at the host's PHP memory_limit.
- Signal trail:
wp.environmentshowswp_memory_limit_mb=512butmemory_limit_mb=256.memory_near_limitfires under load even though wp-config "looks fine". - Fix: confirm with
php -i | grep memory_limitfrom the CLI on the actual web server. If you cannot raise the host PHP value, the wp-config setting is fiction.
4. A runaway query loads too many rows. WP_Query with posts_per_page => -1, an unbounded get_posts() inside a loop, or a custom report query.
- Signal trail:
memory_near_limitwithpeak_mb2–5x the typical request, isolated to specific report or export URIs. Often correlates with a recentwp.state_change(plugin update, custom code change). - Fix: paginate the query, add
'fields' => 'ids'where possible, or move the operation to WP-CLI / a Sidekiq-style background job.
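The batching idea in shell terms, with seq standing in for the real ID source (on a live site that would be something like wp post list --format=ids; the batch size of 500 is an arbitrary example):

```shell
# Process IDs in bounded batches instead of one unbounded query.
# "seq 1 2000" is a stand-in for the real ID source.
seq 1 2000 \
  | xargs -n 500 sh -c 'echo "batch of $# ids (first: $1)"' batch
# Prints one line per batch of (at most) 500 IDs — 4 batches here.
```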
5. PHP-FPM pm.max_children is too high for available RAM. Memory exhaustion on the server (not the request) presents as OOM-killed workers, not "Allowed memory size".
- Signal trail: no memory_near_limit, no php.fatal. Instead, dmesg | grep -i "killed process" shows OOM kills.
- Fix: lower pm.max_children so that max_children × memory_limit < available_RAM × 0.8.
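That inequality as a quick calculation, with example numbers (4 GB RAM, 256 MB limit) that are illustrations, not recommendations:

```shell
# Safe pm.max_children: leave ~20% of RAM for the OS, database, and cache.
ram_mb=4096     # total RAM on the host (example)
limit_mb=256    # PHP memory_limit in MB (example)
echo $(( (ram_mb * 8 / 10) / limit_mb ))   # -> 12 workers
```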
If you cannot tell which cause applies, work top-down: cluster memory_near_limit by request_uri. If it concentrates on one URI, you are in case 1 or 4. If it is broadly distributed, you are in case 2 or 3.
5.4 Verify
There are two signals that must change after the fix lands. Watch both.
memory_near_limit should drop below threshold. After the change, no request should emit memory_near_limit with ratio > 0.8 for at least 30 minutes of normal traffic, including admin and any cron windows. In a raw log:
tail -F /var/log/logystera/wp-signals.log \
| grep --line-buffered '"event_type":"memory_near_limit"'
If the stream is silent for 30 minutes during normal load, the fix is holding. If memory_near_limit is still firing but at lower ratio (e.g. dropped from 0.95 to 0.82), you reduced pressure but did not eliminate it — keep going.
php.fatal "Allowed memory size exhausted" should stop entirely.
# php-fpm's default timestamp ("[01-Jan-2024 12:00:01]") does not compare
# lexically, so filtering by date with awk is unreliable. Record the count
# now, then compare after an hour of normal traffic:
grep -c "Allowed memory size" /var/log/php-fpm/error.log
The count should not grow for at least one hour after the fix. Be careful: if the failing URI is rarely hit (e.g. a weekly cron, a monthly report), one hour is not enough — you need to wait until that workload runs again before declaring the fix verified.
wp.environment should report consistent values. After raising the limit, the next hourly wp.environment signal should show the new memory_limit_mb matching wp_memory_limit_mb. If they still disagree, the host did not actually accept the change.
Healthy looks like: zero memory_near_limit events at ratio > 0.8, zero php.fatal with memory exhaustion message, and wp.environment.memory_limit_mb == wp.environment.wp_memory_limit_mb. Three signals, all quiet, for at least one full traffic cycle.
6. How to Catch This Early
Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.
This issue surfaces as memory_near_limit.
The hard truth: nothing about default WordPress will tell you a request used 92% of its memory budget. The crash is the alarm. By then, users are seeing 500s.
This is the gap memory_near_limit fills. It is a signal that exists specifically because PHP fatal errors are too late to be useful. Logystera detects memory_near_limit per request, clusters by URI and plugin set, and fires an alert when the cluster becomes a trend — typically days before the first php.fatal. The same pipeline tracks wp.environment drift, so a host capping memory_limit below your wp-config.php value surfaces as a configuration alert rather than as Friday afternoon checkout failures.
You do not need to read logs by hand. You need a system that watches the leading indicator (memory_near_limit), the trailing indicator (php.fatal), and the configuration baseline (wp.environment), and tells you when any of them moves. That is what Logystera does. Once it is in place, "the site crashed and we don't know why" stops being the way you find out about memory problems.
7. Related Silent Failures
Memory exhaustion is part of a broader cluster of WordPress crashes that share the same detection gap — they happen inside a request, the worker recycles, and nothing surfaces in the dashboard.
- WordPress 500 error after plugin update — wp.state_change followed by a php.fatal cluster. Same diagnostic shape; different root cause.
- WSOD (white screen of death) — php.fatal with display_errors=Off. The user sees nothing; the log sees everything.
- wp-cron missed schedules — wp.cron type=missed_schedule and absence of cron.run. Often downstream of memory crashes that kill the cron worker.
- Plugin auto-update breakage — wp.state_change with action=plugin_updated followed by php.fatal. The fatal is a symptom; the update is the cause.
- REST API 500s under load — http.request status=500 clustering on /wp-json/* paths, frequently produced by the same memory_near_limit mechanism on heavy endpoints.
The pattern across all of them: WordPress hides the failure, logs reveal it, and a signal-driven alert tells you before the first user complaint.
See what's actually happening in your WordPress system
Connect your site. Logystera starts monitoring within minutes.