Drupal allowed memory size exhausted — finding the route or module responsible

1. Problem

Your Drupal site shows a half-rendered admin page, a blank node edit form, or this exact line in the page source:

Fatal error: Allowed memory size of 268435456 bytes exhausted
(tried to allocate 32768 bytes) in /var/www/drupal/web/core/lib/Drupal/...

Sometimes it is more confusing: homepage and most node pages render fine, but /admin/content, /admin/config/development/performance, or a specific Views listing 500s. JSON:API returns 500 only on POST. An editor saves a long paragraphs-based node, sees "The website encountered an unexpected error. Please try again later", and assumes the network glitched.

You searched "drupal allowed memory size exhausted which module", "drupal php memory limit increase", or "drupal out of memory specific page". You are in the right place. By the time PHP prints "Allowed memory size exhausted", the request is already dead — the crash is the late symptom. Real detection happens earlier, on the request that allocated 92% of memory_limit and survived. This surfaces first as a memory_near_limit signal with a path label naming the offending route. Logs see it. Drupal's status report does not.

2. Impact

A single memory exhaustion takes the request down. Recurring ones brick whole workflows.

  • Editor productivity collapses. A paragraphs-heavy node hits the limit during node_save. Editor sees "unexpected error", reloads, autosave is gone — a 30-minute edit lost. Multiply by every editor on the team.
  • Migrations corrupt mid-batch. drush migrate:import runs through 4,000 of 10,000 rows, exhausts memory inside a process plugin, leaves the destination half-populated.
  • Cron stalls cascade. The cron runner crashes inside search_api_cron, the lock releases, the next cron run hits the same code path and crashes again. Search indexes drift, scheduled publishing fails, queues pile up.
  • Commerce checkout fails on large carts. Drupal Commerce hits the limit recalculating promotions for carts with 20+ line items. Orders 500 silently.
  • Backups stop completing. Backup and Migrate or drush sql-dump consumes 300–500 MB on large sites. On a 256 MB pool the dump dies partway through, so your newest complete backup is older than you think.

Every memory-exhausted request is a 500. Every 500 on /checkout is lost revenue. Every 500 on an editor route is recurring lost time. PHP-FPM recycles the worker after a fatal, so the next request on that worker is fine — which is exactly why it feels random.

3. Why It’s Hard to Spot

Drupal hides this failure mode for three reasons.

First, the PHP-FPM worker dies and is replaced. The fatal lives in php-fpm.log for one line, often truncated, with no Drupal context — no route name, no user, no module. The next request on the same pool gets a fresh worker. There is no admin notice, no watchdog entry (Drupal cannot log what kills its own bootstrap), no Site Status warning.

Second, uptime monitors cannot see it. External pingers hit /. The homepage is cached by the Internal Page Cache or a CDN and almost never the route that exhausts memory. The crashing route is /node/4831/edit, /admin/content?type=article, /jsonapi/node/article?include=field_paragraphs.field_media, or /cron/. From outside, the site is "up" 100% of the time while editors cannot save and cron has not run for a day.

Third, Drupal rewrites the symptom. A fatal during render produces "The website encountered an unexpected error. Please try again later" — the generic exception handler since Drupal 8. The actual error is in PHP-FPM logs. Site owner sees "unexpected error", refreshes, gets a working page on a fresh worker, assumes it self-healed.

The result: memory exhaustion is one of the most common Drupal crashes, and one of the least visible. The only place it leaves a clean trail is the logs.

4. Cause

PHP allocates memory inside a single request up to ini_get('memory_limit'). Drupal layers configuration on top: settings.php can ini_set('memory_limit', '512M'), and the FPM pool can pin php_admin_value[memory_limit]. When usage approaches the ceiling, nothing warns you. PHP keeps allocating until the next emalloc() cannot satisfy the request, raises an E_ERROR, and the worker dies.
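The byte count in the fatal line maps directly to those shorthand settings. A minimal sketch of PHP's shorthand-to-bytes rule (K, M, and G each multiply by 1024); to_bytes is a hypothetical helper name, not a PHP or Drupal API:

```shell
# Convert a php.ini memory_limit shorthand to bytes, mirroring PHP's
# parsing: K, M, G multiply by 1024 once, twice, three times.
to_bytes() {
  case "$1" in
    -1) echo "unlimited" ;;                        # -1 disables the limit
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;                               # bare integers are bytes
  esac
}
to_bytes 256M   # 268435456 — the number in the fatal error line
```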

Logystera treats the approach to that ceiling as its own event. The Logystera Drupal module samples memory_get_peak_usage(true) on the kernel.terminate event and emits a memory_near_limit signal whenever peak usage crosses a threshold (default 80% of the effective memory_limit). The signal carries:

  • peak_mb, limit_mb, ratio — peak resident usage, effective limit, and their ratio
  • path — the route or URL path that ran (/admin/content, /node/4831/edit, /jsonapi/node/article)
  • route_name — symbolic route name (entity.node.canonical, view.products.page_1)
  • is_admin, is_ajax, is_cron, enabled_modules_hash

memory_near_limit is the leading indicator. The downstream drupal_php_error_total (type E_ERROR, message starting Allowed memory size) is the trailing indicator.

The configured rule (Definition::Rule id 433) fires when 5 memory_near_limit events on the same entity occur within a 5-minute window. That catches a building cluster on a single hot route before it tips over into a wave of 500s.

The mechanism, exactly: memory_near_limit rising on a specific path → cluster crosses 5/5min → rule fires → if not addressed, that path produces drupal_php_error_total "Allowed memory size exhausted" → 500 to the user.
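The clustering step can be approximated offline. A toy sketch, using fixed 300-second buckets in place of the rule engine's sliding window, over a simplified whitespace-delimited stand-in for the signal stream (the real stream is JSON):

```shell
# Simplified stand-in for the signal stream: "<epoch> <event_type> <path>".
cat > /tmp/signals.txt <<'EOF'
1700000000 memory_near_limit /admin/content
1700000010 memory_near_limit /admin/content
1700000020 memory_near_limit /admin/content
1700000030 memory_near_limit /admin/content
1700000040 memory_near_limit /admin/content
1700000300 memory_near_limit /node/4831/edit
EOF

# Bucket events into 300 s windows per path; flag any bucket with >= 5 events.
awk '$2 == "memory_near_limit" { print int($1 / 300), $3 }' /tmp/signals.txt \
  | sort | uniq -c | awk '$1 >= 5 { print "rule would fire:", $3 }'
```

Fixed buckets can split a cluster that straddles a boundary, which a true sliding window would catch — good enough for offline triage, not a replacement for the rule.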

5. Solution

5.1 Diagnose (logs first)

Three log surfaces matter. Read them in this order.

a) PHP-FPM error log. This is where the actual fatal lands. Common paths: /var/log/php-fpm/error.log, /var/log/php8.3-fpm.log.

Find every memory exhaustion since a deploy window:

# All memory fatals since 14:00 (e.g. immediately after a contrib module update at 14:00)
grep -E "Allowed memory size of [0-9]+ bytes exhausted" /var/log/php-fpm/error.log \
  | awk '$2 >= "14:00:00"'

Each match is a confirmed drupal_php_error_total increment with type=E_ERROR and message starting Allowed memory size. The line tells you the limit (268435456 bytes = 256 MB) and the file where the last allocation was attempted — usually deep inside core/lib/Drupal/Core/Entity/ or a contrib module.

b) Web server access log. Memory fatals manifest as 500s. Group by URL and correlate with the fatal timestamps:

# Top crashing URLs in the last hour
grep ' 500 ' /var/log/nginx/access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20

# Cross-reference a fatal timestamp to the request that died
grep "14:23:0[6-9]" /var/log/nginx/access.log

If /admin/content is on the list, your editor backend is dying. If /cron/ is on it, scheduled tasks are crashing. These URLs are exactly what the memory_near_limit signal carries in its path label upstream.
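Intersecting the two lists — URLs that 500'd and paths named by memory_near_limit — isolates the prime suspects. A toy sketch with inline sample lists standing in for the real extracts:

```shell
# Sample stand-ins; in practice these come from the nginx 500 grep above
# and the path labels in the signal stream.
printf '%s\n' /admin/content /checkout | sort > /tmp/urls_500.txt
printf '%s\n' /admin/content /node/9   | sort > /tmp/near_limit_paths.txt

# comm -12 keeps only lines present in both files (inputs must be sorted).
comm -12 /tmp/urls_500.txt /tmp/near_limit_paths.txt
```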

c) Application-side, before the fatal. Most memory crashes are not sudden. The same /admin/content request uses 220 MB on Monday and 270 MB on Friday after a contrib update. Standard PHP logs cannot see that — the request succeeded, nothing was logged.

The Logystera Drupal module closes that gap. With it installed, every request that crosses the threshold emits:

event_type=memory_near_limit
peak_mb=243.0 limit_mb=256 ratio=0.949
path=/admin/content
route_name=system.admin_content
is_admin=true
enabled_modules_hash=8f41c2…

That is the leading indicator. To filter to the worst offenders against a raw signal stream:

grep '"event_type":"memory_near_limit"' /var/log/logystera/drupal-signals.log \
  | jq -r 'select(.payload.ratio > 0.9) | "\(.payload.peak_mb)MB \(.payload.path) \(.payload.route_name)"' \
  | sort | uniq -c | sort -rn | head

If the same path shows up repeatedly with ratio > 0.9, that route is going to start producing 500s within hours. The path label is the diagnostic key — it is the difference between "memory is tight somewhere" and "the /admin/content?type=article&status=2 listing is going to crash next".

For deeper inspection, reproduce the path under Drush with explicit memory tracking:

drush ev '$req = \Symfony\Component\HttpFoundation\Request::create("/admin/content"); \Drupal::service("http_kernel")->handle($req); echo round(memory_get_peak_usage(true)/1024/1024, 1) . "MB\n";'

Or enable Devel's memory profiling for one request to see the call tree.

Diagnosis-to-signal mapping:

  • grep "Allowed memory size" /var/log/php-fpm/error.log → produces drupal_php_error_total with type=E_ERROR (memory exhaustion subtype)
  • grep ' 500 ' /var/log/nginx/access.log → URLs that crashed; same URLs appear as path in upstream memory_near_limit
  • Module signal stream filtered on event_type=memory_near_limit → produces memory_near_limit, the leading indicator with the route attribution
  • drupal_page_requests_total and drupal_ajax_requests_total baselined by path → reveal which routes are hot enough to matter

5.2 Root Causes

Root causes are paired inline with their fixes in 5.3 below — each fix names the cause, its signal trail, and the remedy.

5.3 Fix

Fixes prioritized by frequency in production. Each cause maps back to which signal it produces.

1. A specific module or Views listing leaks memory on a specific path. Most common cause. A view with too many fields, a paragraphs-heavy node form, a module that walks every taxonomy term on every admin request.

  • Signal trail: memory_near_limit clustering on one path (e.g. /admin/content), drupal_php_error_total eventually firing on the same path. enabled_modules_hash stable across the cluster.
  • Fix: identify the path from the signal label, disable the most recently updated contrib module that hooks the route (drush pmu module_name), retest. For a Views page, edit the view and remove unnecessary relationships, fields, and COUNT pagers.

2. memory_limit is genuinely too low for the workload. Commerce stores, multilingual sites, and Migrate-heavy workflows often need 512–1024 MB.

  • Signal trail: memory_near_limit distributed across many paths at moderate ratio (0.8–0.9), occasional fatals on the heaviest endpoints, configuration shows memory_limit=256M.
  • Fix: raise memory_limit in the FPM pool config (php_admin_value[memory_limit] = 512M), reload PHP-FPM, confirm via drush php:eval 'echo ini_get("memory_limit");'. If settings.php calls ini_set('memory_limit', ...), raise that too.
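The pool directive looks like this (file path assumed for a Debian-style PHP 8.3 install; adjust for your distribution):

```ini
; /etc/php/8.3/fpm/pool.d/www.conf
; php_admin_value cannot be overridden by ini_set() at runtime.
php_admin_value[memory_limit] = 512M
```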

3. settings.php requests more than the FPM pool allows. ini_set('memory_limit', '1G') is silently capped at the pool's php_admin_value[memory_limit] because php_admin_value is non-overridable.

  • Signal trail: memory_near_limit fires under load even though settings.php "looks fine". drush php:eval 'echo ini_get("memory_limit");' returns 256M while settings.php says 1G.
  • Fix: confirm the pool's effective ceiling with php-fpm -tt 2>&1 | grep memory_limit. If the pool caps at 256M, your settings.php line is fiction — raise the pool or remove the misleading ini_set.

4. A runaway EntityQuery or loadMultiple loads too many entities. A custom block iterating Node::loadMultiple() without bounds, a Views display with Items per page: 0 (unlimited), a Migrate process plugin that hydrates the full referenced entity tree.

  • Signal trail: memory_near_limit with peak_mb 2–5x the typical request, isolated to specific report, listing, or migration paths. Often correlates with a recent config import or module enable.
  • Fix: paginate the query with range(), switch to entity IDs first then load in chunks, or move the operation to drush php:script outside the web pool. For Views, set a hard Items per page.

5. PHP-FPM pm.max_children is too high for available RAM. Memory exhaustion at the server level (not per-request) presents as OOM-killed workers, not "Allowed memory size".

  • Signal trail: no memory_near_limit, no fatal in the PHP log. Instead, dmesg | grep -i "killed process" shows OOM kills and nginx returns 502 Bad Gateway clusters.
  • Fix: lower pm.max_children so that max_children * memory_limit < available_RAM * 0.7. Verify per-worker memory with ps -ylC php-fpm8.3 --sort=rss.
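The sizing arithmetic is simple enough to sanity-check in the shell; the RAM and limit figures below are illustrative, not recommendations:

```shell
# Keep pm.max_children * memory_limit under ~70% of available RAM,
# leaving headroom for MySQL, nginx, and the OS. Figures are examples.
ram_mb=8192      # total RAM on the box
limit_mb=512     # per-worker memory_limit
safe_children=$(( ram_mb * 70 / 100 / limit_mb ))
echo "pm.max_children <= $safe_children"   # pm.max_children <= 11
```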

If you cannot tell which cause applies, cluster memory_near_limit by path. Concentrates on one path → case 1 or 4. Broadly distributed → case 2 or 3. No memory_near_limit at all but workers dying → case 5.
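That concentration test is easy to approximate: compute the top path's share of near-limit events. A toy sketch over sample path labels standing in for the signal stream:

```shell
# If one path dominates (say > 60% of events), suspect cases 1 or 4;
# if the share is low and spread across many paths, suspect cases 2 or 3.
printf '%s\n' /admin/content /admin/content /admin/content /node/1 \
  | sort | uniq -c | sort -rn \
  | awk 'NR == 1 { top = $1 } { total += $1 }
         END { printf "top path share: %d%%\n", 100 * top / total }'
```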

For load-profiling a suspected hot path before deploying, run it through BlazeMeter or JMeter with realistic concurrency and watch peak_mb in the signal stream.

5.4 Verify

Two signals must change after the fix lands. Watch both.

memory_near_limit should drop below threshold for the fixed path. After the change, no request to that path should emit memory_near_limit with ratio > 0.8 for at least 30 minutes of normal traffic. Tail the raw signal stream:

tail -F /var/log/logystera/drupal-signals.log | grep --line-buffered '"event_type":"memory_near_limit"' | jq -r 'select(.payload.path == "/admin/content")'

If silent for that path for 30 minutes under normal load, the fix is holding. If memory_near_limit is still firing at lower ratio (e.g. 0.95 → 0.82), you reduced pressure but did not eliminate it — keep going.

drupal_php_error_total for memory-exhaustion fatals should stop entirely. Baseline matters: a healthy Drupal site sees zero memory-exhaustion fatals per hour, though 1–3 unrelated E_NOTICE/E_WARNING increments per hour are normal noise. Filter for the specific message:

# Memory fatals in the current clock hour
# (PHP-FPM timestamps look like [17-Feb-2026 14:23:06])
grep "Allowed memory size" /var/log/php-fpm/error.log \
  | grep -c "^\[$(date '+%d-%b-%Y %H:')"

This should be 0 for at least one hour after the fix. If the failing path is rarely hit (weekly cron, monthly export), one hour is not enough — wait for that workload to run before declaring the fix verified.

Healthy: zero memory_near_limit events at ratio > 0.8 on the previously-hot path, zero fatal entries with "Allowed memory size", rule 433 (memory_near_limit ≥ 5 in 5min) sitting at 0/5. Three signals quiet, for one full traffic cycle.

6. How to Catch This Early

Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.

This issue surfaces as memory_near_limit. Everything you just did manually — grep PHP-FPM logs, correlate timestamps with the nginx access log, attribute the failing URL back to a Drupal route — Logystera does automatically. The same memory_near_limit signal, with its path label naming the hot route, is detected, charted, and alerted in real time.

The configured rule (Definition::Rule id 433) fires when 5 memory_near_limit events on the same entity occur within 5 minutes. That catches a building cluster on a single route — the moment three or four editors hit the same paragraphs-heavy node form on the same morning — and alerts with path, route name, peak memory, and effective limit baked in. The trailing drupal_php_error_total increment confirms the cluster. Supporting drupal_page_requests_total and drupal_ajax_requests_total give the route's traffic context, so you can tell "crashes once a week, gets one hit a week" from "crashes once a week, gets ten thousand hits an hour".

You do not need to read logs by hand. You need a system that watches the leading indicator (memory_near_limit with path), the trailing indicator (drupal_php_error_total), and the route's traffic baseline, and tells you when any of them moves. That is what Logystera does.

7. Related Silent Failures

Memory exhaustion shares a detection gap with other Drupal failures that happen inside a request — the worker recycles, nothing surfaces in the admin UI.

  • Drupal slow queries on Views — db.slow_query and perf.hook_timing on hook_views_pre_execute. The same Views displays often produce both signals because they hydrate too many entities.
  • Drupal queue workers stuck — queue worker timeouts and drupal_php_error_total from cron. Memory crashes inside drush queue:run cause silent backlog.
  • Drupal config import failed — watchdog warnings during deployment, often paired with memory_near_limit when the import touches large config entities.
  • Drupal cache flush frequency — repeated cache.bin invalidations correlated with memory_near_limit spikes, because the next request rebuilds caches with no headroom.
  • Drupal module installed — module enable events immediately followed by memory_near_limit clustering on routes the new module hooks.

Each points at the same truth: Drupal hides per-request failures. The logs do not.

See what's actually happening in your Drupal system

Connect your site. Logystera starts monitoring within minutes.

Copyright © 2026 Logystera. All rights reserved.