Guide

WordPress memory pressure on specific routes — finding the page or hook that eats memory

1. Problem

Last week your WordPress site was dying with Fatal error: Allowed memory size of 268435456 bytes exhausted. You doubled memory_limit to 512M in php.ini, reloaded PHP-FPM, and the white screens stopped. The team moved on.

Today the same fatal shows up again — but now at 512M, on a single page. The homepage is fine. /wp-admin/edit.php is fine. But /shop/category/wholesale/ or /wp-json/wp/v2/posts?per_page=100 or the daily report export at /wp-admin/admin.php?page=acme-reports&action=export peaks at 487MB and either fatals or returns a 500. Your search history reads "wordpress increased memory_limit still fatal which page", "wordpress memory leak specific page", "wordpress one route eats memory others fine". Every result tells you to raise memory_limit again. You already did. The page still eats it.

This is the failure where one route, hook, or REST endpoint behaves like every other route in normal monitoring — same plugins loaded, same theme, same database — and yet allocates 4-8x more memory than its peers. The site-wide memory_near_limit signal goes quiet after you raise the limit, but wp_memory_near_limit_total{route="..."} keeps incrementing on a tiny number of paths. The leak is route-shaped, not site-shaped, and that is why raising memory_limit only buys you time.

2. Impact

A site-wide memory exhaustion is loud — every other request 500s and someone files a ticket within the hour. A route-shaped one is quiet, expensive, and slow to find:

  • The export job runs nightly and silently truncates. The accounting CSV at /wp-admin/admin.php?page=acme-export peaks at 480MB on a 512MB limit, fatals on row 47,000 of 90,000, and writes a partial file. Finance reconciles against incomplete data for three weeks before someone notices.
  • One product category page returns 500 to logged-out users only. /shop/category/wholesale/ loads 2,400 variations into memory via WP_Query with posts_per_page=-1. Logged-in users hit the object cache. Logged-out customers hit MySQL, build the full result set, and 500. Conversions on that category drop to zero and the dashboard shows "traffic is fine".
  • The REST endpoint the mobile app uses fatals once per minute. /wp-json/wp/v2/posts?per_page=100&_embed instantiates 100 post objects with full meta, author, and term joins. The mobile app retries silently. Users see "loading..." spinning forever.
  • A single plugin's settings page exhausts memory. A misconfigured plugin loads its entire 6MB option blob via get_option('plugin_settings') on every admin page render. Editors hit the limit when they open Tools > Plugin Name and lose their session.
  • Image regeneration locks one upload. wp media regenerate peaks at 450MB on a 12000x8000px PNG, fatals, and the next worker retries the same image. The whole queue stalls behind one file.

The financial story: each of these is a 500 on a specific route. The same route. Every time. Revenue, productivity, and data integrity bleed from a single URL while the rest of the site looks healthy.

3. Why It’s Hard to Spot

Site-wide memory monitoring averages the problem away. If 99.5% of your requests peak at 80MB and 0.5% peak at 480MB, your average peak is 82MB. Your p95 is 120MB. Your p99 is maybe 380MB — but who looks at p99 on memory? The dashboard says "memory is fine" because the bad route is too rare to move the average.

WordPress itself surfaces nothing per-route. memory_get_peak_usage() is a per-request value the framework discards at shutdown. Site Health shows the configured limit, not actual usage. WP_DEBUG_LOG only captures the fatal after the fact — by then the request is dead and the URL survives only in the nginx access log as a 500, stripped of any application context.

APM tools that bin by transaction will eventually surface the route, but only if you have one installed and tagged correctly. Most WordPress sites do not. Hosting dashboards graph aggregate PHP-FPM memory, which is the worker pool number — meaningless for a per-request leak that happens once every 200 requests.

The deepest confusion is the after-effect of raising memory_limit. Once you go from 256M to 512M, the fatal stops appearing in PHP error logs for most routes. The php.fatal "Allowed memory size exhausted" entries thin out. Site Health turns green. But the underlying leak is unchanged — you have only widened the bucket. The single bad route still allocates 480MB; it just no longer crashes. Until traffic doubles, or the dataset grows, or one more plugin adds 30MB to the request, and you are back at the same fatal — at a higher number.

4. Cause

The Logystera WordPress plugin samples memory_get_peak_usage(true) at shutdown on every request. When peak usage crosses 80% of the effective memory_limit, the plugin emits a memory_near_limit signal carrying peak_mb, limit_mb, ratio, the matched route (normalized — e.g. /shop/category/{slug}/, /wp-json/wp/v2/posts, /wp-admin/admin-ajax.php?action={action}), and the request_id.
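
A minimal sketch of what such a shutdown-time sampler can look like, dropped into a small mu-plugin; the 0.8 threshold, log path, and raw-path route handling are illustrative assumptions rather than the plugin's actual internals:

// Illustrative shutdown sampler, not the plugin's real code.
add_action( 'shutdown', function () {
    $limit_bytes = wp_convert_hr_to_bytes( ini_get( 'memory_limit' ) );
    if ( $limit_bytes <= 0 ) {
        return; // -1 means unlimited; nothing to compare against
    }
    $peak_bytes = memory_get_peak_usage( true );
    $ratio      = $peak_bytes / $limit_bytes;
    if ( $ratio < 0.8 ) {
        return; // only emit when the request ran near the ceiling
    }
    $event = [
        'event_type' => 'memory_near_limit',
        'timestamp'  => gmdate( 'c' ),
        'payload'    => [
            'peak_mb'  => round( $peak_bytes / 1048576, 1 ),
            'limit_mb' => round( $limit_bytes / 1048576 ),
            'ratio'    => round( $ratio, 2 ),
            // Real route normalization replaces dynamic segments (e.g. {slug});
            // this sketch just keeps the raw request path.
            'route'    => wp_parse_url( $_SERVER['REQUEST_URI'] ?? '/', PHP_URL_PATH ),
        ],
    ];
    error_log( wp_json_encode( $event ) . "\n", 3, '/var/log/logystera/wp-signals.log' );
}, PHP_INT_MAX );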

The processor matches that signal against the wp_memory_near_limit_total counter metric, which is bucketed by entity_id, route, and status. The metric is a Prometheus counter — every near-limit event increments it under its route label. The counter is the only place a route-shaped memory leak becomes visible: the absolute value across the fleet may be small (hundreds per day), but distribution across routes is wildly skewed. One route accounts for 90%+ of increments while every other route sits near zero.

wp_memory_peak_percent is a histogram of peak_mb / limit_mb per request, also labeled by route. It tells you not just "this route went near the limit" but "this route hits 92% on every render", which is the actual leak signature. wp_request_peak_memory_mb carries the absolute MB number per request, which is what you cite when you ask hosting to raise the limit. wp_hook_timing_ms is the per-callback timing breakdown — it does not measure memory, but the memory hog is almost always the slowest hook on the route, because allocating 400MB of objects takes time.

The mechanism, end to end: the bad route fires memory_near_limit → wp_memory_near_limit_total{route="/shop/category/{slug}"} increments → wp_memory_peak_percent shows the route's p95 ratio at 0.92+ while every other route sits at 0.30 → the slowest hook on that route in wp_hook_timing_ms is the allocator.

5. Solution

5.1 Diagnose — find the bad route in the logs

Start with the route distribution of near-limit events. The plugin writes signal envelopes to a local log; on a hosted setup, the same data lives in the entity's signal stream.

# Group memory_near_limit events by normalized route, last 24h
grep '"event_type":"memory_near_limit"' /var/log/logystera/wp-signals.log \
  | jq -r '.payload.route' \
  | sort | uniq -c | sort -rn | head -20

Each line above corresponds to increments of wp_memory_near_limit_total bucketed by route. A healthy site shows a flat-ish distribution across many routes (or no rows at all). A route-shaped leak shows one or two routes at 100x the rest:

   847 /shop/category/{slug}/
    12 /wp-json/wp/v2/posts
     4 /wp-admin/admin-ajax.php?action=heartbeat
     2 /

That 847 is your suspect. Now correlate it with a real-world event — when did the spike start?

# Count near-limit events per hour for the suspect route
grep '"event_type":"memory_near_limit"' /var/log/logystera/wp-signals.log \
  | jq -r 'select(.payload.route == "/shop/category/{slug}/") | .timestamp[0:13]' \
  | sort | uniq -c

If the count was zero before 2026-04-21T14:00 and 50/hour after, cross-check against wp.state_change:

# Did anything change at 14:00 on the 21st?
grep '"event_type":"wp.state_change"' /var/log/logystera/wp-signals.log \
  | jq 'select(.timestamp >= "2026-04-21T13:30" and .timestamp <= "2026-04-21T14:30")'

A plugin update, a theme deploy, or a WP_DEBUG flip that lined up with the spike is the trigger. If nothing changed in code, look for data growth: a category that crossed 1,000 products, a custom post type that crossed 50,000 rows, or a meta key that started storing serialized arrays.
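
If data growth is the suspect, a quick check run through WP-CLI (wp eval-file growth-check.php) puts numbers on it. The taxonomy, term slug, and file name below are placeholders for your own setup:

<?php
// Illustrative data-growth check; adjust taxonomy and slug to the suspect route.
$term = get_term_by( 'slug', 'wholesale', 'product_cat' );
if ( $term ) {
    printf( "Posts in term '%s': %d\n", $term->slug, $term->count );
}

// Largest autoloaded options are another common growth vector.
// wp_load_alloptions() returns the stored (serialized) values, so strlen()
// approximates the size of each blob as it sits in wp_options.
$sizes = [];
foreach ( wp_load_alloptions() as $name => $value ) {
    $sizes[ $name ] = strlen( (string) $value );
}
arsort( $sizes );
foreach ( array_slice( $sizes, 0, 5, true ) as $name => $bytes ) {
    printf( "%s\t%d bytes\n", $name, $bytes );
}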

Now confirm the route is hitting the ratio ceiling, not just appearing occasionally — pull the wp_memory_peak_percent histogram values for the suspect:

# Peak percent values for the suspect route
grep '"event_type":"memory_near_limit"' /var/log/logystera/wp-signals.log \
  | jq -r 'select(.payload.route == "/shop/category/{slug}/") | .payload.ratio' \
  | sort -n | tail -10

If every value is 0.88-0.95, the route always runs near the ceiling — the leak is structural, not random. That is the wp_memory_peak_percent signature of a route-level memory eater.

5.2 Root causes

Each cause below is mapped to the signals it produces and how it shows up in the logs.

1. WP_Query with posts_per_page=-1 or extremely large page sizes. A loop fetches every product, post, or term into memory at once. Common on category pages, sitemaps, and "all" exports.

  • Signal trail: wp_memory_near_limit_total clustered on one or two route labels (e.g. /shop/category/{slug}/, /sitemap.xml). wp_memory_peak_percent p95 > 0.85 on that route. wp_request_peak_memory_mb 4-8x site median. wp_hook_timing_ms shows pre_get_posts or template-rendering hooks dominating.

2. Image processing on oversized uploads. A media regeneration, on-the-fly thumbnail, or wp_get_image_editor call decodes a 30+ megapixel image into memory.

  • Signal trail: wp_memory_near_limit_total{route="/wp-admin/admin-ajax.php?action=image-editor"} or {route="/wp-cron.php"}. wp_request_peak_memory_mb 300-500MB. wp_hook_timing_ms shows wp_generate_attachment_metadata or image_resize callbacks consuming 80% of request time.

3. A plugin loading its entire option blob on the route. get_option('plugin_settings') returns a 6MB serialized array; unserialize allocates 30MB+ of PHP objects. Worse: the option is autoload=yes, so it loads on every request, but only specific hooks deserialize and walk the structure.

  • Signal trail: wp_memory_near_limit_total{route="/wp-admin/admin.php?page={plugin}"}. wp_memory_peak_percent ratio jumps from 0.4 site-wide to 0.9 on the plugin's admin pages. wp_hook_timing_ms shows admin_init or {plugin}_load_settings as the slowest callback.

4. REST endpoint returning all posts with _embed. /wp-json/wp/v2/posts?per_page=100&_embed instantiates 100 post objects, plus author, plus terms, plus featured media — easily 200-400MB.

  • Signal trail: wp_memory_near_limit_total{route="/wp-json/wp/v2/posts"} rising in lockstep with mobile app or headless frontend traffic. wp_request_peak_memory_mb p95 > 350MB. wp_hook_timing_ms dominated by rest_post_dispatch and wp_prepare_post_for_response.

5. Custom hook callback that builds an in-memory cache per request. A plugin or theme function loads every term, every menu item, or every user into a static array on init, intending to "speed up" subsequent calls in the request.

  • Signal trail: wp_memory_near_limit_total distributed across many routes, but ratio jumps the moment the callback's plugin is active. wp_memory_peak_percent shifts up site-wide after a deploy. wp_hook_timing_ms shows the callback at the top of init or wp_loaded.

5.3 Fix — prioritized

  1. For oversized WP_Query: replace posts_per_page=-1 with paged queries (posts_per_page=50 plus paged, with WP_Query::found_posts for the total). For category pages, set paged and add server-side pagination. For sitemaps, generate them offline via WP-CLI cron. (A paged-query sketch follows this list.)
  2. For image processing: enforce upload size limits in nginx (client_max_body_size) and PHP (upload_max_filesize). For existing oversized media, regenerate offline via wp media regenerate --only-missing, passing explicit attachment IDs to work through the backlog in small batches. Consider moving to an external image service (Cloudinary, ImgIX) for large originals.
  3. For autoloaded option blobs: wp option get plugin_settings | wc -c to confirm size; if > 1MB, set autoload=no (wp option update plugin_settings "$(wp option get plugin_settings)" --autoload=no) and load the option only on the routes that need it.
  4. For REST _embed: cap per_page server-side to 25 via the rest_endpoints filter, and require explicit _fields to limit the response shape. Long term: move bulk reads to a custom endpoint that streams. (A per_page cap sketch follows this list.)
  5. For static in-memory caches in callbacks: replace with wp_cache_get/wp_cache_set against a real persistent object cache (Redis, Memcached); see the sketch after this list. The plugin author's "performance optimization" is the leak — don't keep it.
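
For fix 1, a minimal sketch of the paged replacement for posts_per_page=-1. The post type, page size, and per-post processing step are placeholders to adapt:

// Illustrative: walk a large result set in fixed-size pages instead of
// loading every post at once.
$paged = 1;
do {
    $query = new WP_Query( [
        'post_type'      => 'product',
        'posts_per_page' => 50,
        'paged'          => $paged,
        'fields'         => 'ids',   // fetch IDs only; hydrate lazily inside the loop
    ] );
    foreach ( $query->posts as $post_id ) {
        // process one post at a time, e.g. get_post( $post_id )
    }
    $paged++;
} while ( $paged <= $query->max_num_pages );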
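
For fix 4, one way to enforce the lower per_page ceiling. This sketch uses the rest_post_collection_params filter (a per-post-type alternative to rewriting the route table via rest_endpoints) and assumes the default wp/v2/posts controller:

// Illustrative: lower the allowed per_page ceiling for /wp-json/wp/v2/posts.
add_filter( 'rest_post_collection_params', function ( $params ) {
    if ( isset( $params['per_page'] ) ) {
        $params['per_page']['maximum'] = 25;   // requests asking for more are rejected as invalid params
    }
    return $params;
} );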
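
For fix 5, a sketch of the wp_cache_get/wp_cache_set pattern that replaces a per-request static cache. The cache group, key, taxonomy, and TTL are placeholders:

// Illustrative: fetch on demand and lean on the persistent object cache
// (Redis/Memcached) instead of building a static array of every term on init.
function acme_get_nav_terms() {
    $terms = wp_cache_get( 'acme_nav_terms', 'acme' );
    if ( false === $terms ) {
        $terms = get_terms( [ 'taxonomy' => 'category', 'hide_empty' => false ] );
        wp_cache_set( 'acme_nav_terms', $terms, 'acme', 5 * MINUTE_IN_SECONDS );
    }
    return $terms;
}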

After each fix, redeploy and watch the route's wp_memory_near_limit_total count for the next 30 minutes.

5.4 Verify

After the fix, expect wp_memory_near_limit_total for the previously bad route to stop incrementing within 30 minutes under normal traffic. Concretely:

# Confirm increments have stopped on the previously bad route
tail -F /var/log/logystera/wp-signals.log \
  | grep '"event_type":"memory_near_limit"' \
  | jq 'select(.payload.route == "/shop/category/{slug}/")'

Healthy baseline: a typical WordPress site emits 0-10 memory_near_limit events per hour across all routes combined, distributed across many paths, with wp_memory_peak_percent p95 around 0.40-0.55. If you still see > 20/hour on the same route, or wp_memory_peak_percent for that route stuck above 0.80, the underlying cause isn't resolved — go back to §5.2.

Generic "the page loads now" verification is not enough. The page may load because the limit is high enough this hour; the leak may still be there waiting for the next traffic spike or dataset growth. The signal disappearance is the proof.

6. How to Catch This Early

Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.

This issue surfaces as increments of wp_memory_near_limit_total concentrated on a single route label.

Everything you just did manually — grep route distribution from signal logs, count increments per hour, correlate with a deploy, eyeball wp_memory_peak_percent ratios per route — Logystera does automatically. The same wp_memory_near_limit_total you just searched for is detected, charted by route label, and alerted in real time.

[Screenshot: Logystera dashboard — wp_memory_near_limit_total over time]

wp_memory_near_limit_total per route, last 24h: one route (/shop/category/{slug}/) accounts for 90% of increments after the 14:03 plugin update, every other route flat.

[Screenshot: Logystera alert — memory pressure concentrated on a single route]

Critical alert fires within 60s when one route's wp_memory_near_limit_total exceeds 10x the site median, with the offending route, peak MB, and the triggering deploy timestamp in the evidence section.

The fix is simple once you know the problem is route-shaped, not site-shaped. The hard part is telling the difference. A site-wide average says memory is fine; a per-route counter says one URL is on fire. Logystera turns this kind of failure from a quarterly "why did the export break again" investigation into a 60-second notification that names the route, the peak MB, the percent of limit, and the deploy that introduced it.

7. Related Silent Failures

  • WordPress memory_limit detect-before-crash — site-wide memory_near_limit before the route-level breakdown matters; raise this when the average is climbing, not just one route.
  • WordPress slow page load — high TTFB when queries look fine — wp_slow_requests_total and wp_hook_timing_ms overlap heavily with route-level memory leaks; the slow route is often also the heavy route.
  • WordPress 500 error after plugin update — php.fatal "Allowed memory size exhausted" is the trailing edge of wp_memory_near_limit_total on a single route; the route label tells you which plugin path is responsible.
  • WordPress scheduled posts not publishing — wp-cron.php is a frequent route in wp_memory_near_limit_total when a scheduled task allocates large datasets; missed_schedule is downstream of the same leak.
  • WordPress environment drift — wp.environment mismatches (memory_limit_mb vs wp_memory_limit_mb) explain why a previously-fine route suddenly trips wp_memory_near_limit_total after a host config change.

See what's actually happening in your WordPress system

Connect your site. Logystera starts monitoring within minutes.
