
Logystera WordPress plugin dropping events under load — what buffer drops mean and how to fix them

1. Problem

You opened the Logystera dashboard for one of your WordPress entities and noticed something off. The request count is suspiciously round. A spike that you remember happening — a Black Friday surge, a campaign push, a brute-force burst against /wp-login.php — does not show up at the resolution you expect. Or the dashboard is fine, but a banner near the entity status reads:

This entity dropped 2,418 events in the last 7 days.
Some monitoring data is incomplete.

You searched "logystera wordpress plugin missing events", "wordpress monitoring gaps high traffic", or "logystera buffer dropping events". You are in the right place. This is not a bug in the dashboard, and it is not a gateway outage. It is the WordPress plugin telling you, on purpose, that its local buffer could not ship every audit event it captured before the buffer filled up.

Across our production fleet right now, 5 out of 5 monitored WordPress entities have logged a combined 10,489 dropped events in the last 7 days. Dropping is a deliberate load-shedding mechanism, not a failure — but every dropped event is a hole in your audit trail.

2. Impact

A WordPress site can produce thousands of audit-relevant events an hour: every authenticated request, every REST hit, every option write, every login attempt, every cron tick. The Logystera plugin sees all of them. The gateway, sitting on the other side of an HTTPS POST, can only accept what the plugin manages to send. When the buffer fills faster than the plugin can drain it, the plugin chooses to drop the oldest pending events rather than fail the WordPress request the user is on.

That trade-off is correct. But the consequences land directly on the things you bought Logystera for:

  • Security gaps. A credential-stuffing burst can fire 800 auth.attempt events in 60 seconds. If half drop, your detection rule sees 400 and may not cross threshold. The attack proceeds.
  • Compliance gaps. SOC 2 and PCI-DSS auditors expect a complete record of admin actions. "We dropped some" is not a defensible answer when the question is whether a privileged user touched a specific row.
  • Misleading dashboards. Request rate, error rate, and 5xx percentage all look better than reality when the denominator is silently truncated. Trends you steer the business by are wrong.
  • Lost correlation. A php.fatal ships immediately but the surrounding http.request events get dropped — you see the crash without the traffic that caused it.
  • Plugin auto-update blind spots. A wp.state_change for a plugin activation drops in a high-traffic moment. Hours later the plugin misbehaves and you cannot tell when it went live.

Dropped events make every other Logystera signal less trustworthy. The platform surfaces the drop count rather than hiding it — but trust degrades the moment someone asks "did this really happen?" and the answer is "probably."

3. Why It’s Hard to Spot

WordPress does not surface this. There is no admin notice. There is no error_log entry by default. The plugin makes an explicit design choice not to write a log line for every drop, because doing so would itself become a performance problem under exactly the conditions where drops occur.

Uptime monitors do not catch it. The site is up. Pages render. Logins work. From every external probe's perspective, everything is fine — and that is true, because the WordPress request itself is unaffected. The dropped event is a monitoring-pipeline event, not a user-facing event.

Your existing dashboards do not catch it either, unless you specifically look. Request rate looks normal because it is sampled from the events that did ship; a 30% drop rate just means your dashboards are reading 70% of reality and reporting it as 100%. The graph is smooth. The graph is wrong.

This is the silent failure mode unique to Logystera: the platform that is supposed to detect silent failures has its own silent failure mode at the edge. The only way it surfaces is if the plugin reports its own drop count back through a separate, smaller, higher-priority channel — which it does, via wp_buffer_dropped_total. That signal is the only honest thing about an otherwise misleading picture, which is why we expose it prominently rather than hiding it.

4. Cause

The Logystera WordPress plugin does not ship events synchronously. Doing so would couple the latency of every WordPress request to the round-trip time to the gateway, which is unacceptable. Instead, the plugin maintains a local buffer — implemented as a bounded WordPress option (or, on hosts that support it, an APCu/Redis-backed ring) — that captures audit events as they happen and ships them in batches on a periodic flush.

There are four buffer signals you need to know:

  • wp_buffer_dropped_total — a monotonically increasing counter of events the plugin captured but could not buffer. Every increment is one event lost forever. This is the primary signal. It is the smoking gun.
  • wp_buffer_size_percent — gauge, 0–100, the buffer's fill ratio (bytes used / bytes capacity). Rises before drops happen.
  • wp_buffer_pending_count — gauge, the number of events waiting to be flushed. A pending count that climbs and never falls means the flush worker is stuck.
  • wp_buffer_event_percent — gauge, the fraction of buffer capacity consumed by events versus headroom reserved for in-flight envelopes. Useful when individual events are unusually large.

The mechanism is bounded by design. When a WordPress request triggers a hook the plugin listens on (login, REST, option update, cron), the plugin does roughly this: build the event payload, attempt to enqueue into the buffer, and if the enqueue fails because the buffer is full, increment wp_buffer_dropped_total and return immediately. The WordPress request does not block. The user does not see a slow page. The dashboard sees a dropped event.
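
Put as a sketch in PHP, that capture path looks roughly like the following. The function and option names (logystera_capture_event, logystera_buffer, logystera_dropped_total) are illustrative, not the plugin's actual internals; LOGYSTERA_BUFFER_CAPACITY is the real constant covered in 5.3.

// Illustrative sketch of the enqueue-or-drop path. Function and option names
// are hypothetical; only LOGYSTERA_BUFFER_CAPACITY is a real plugin constant.
function logystera_capture_event( array $event ) {
    $capacity = defined( 'LOGYSTERA_BUFFER_CAPACITY' ) ? LOGYSTERA_BUFFER_CAPACITY : 500;
    $buffer   = get_option( 'logystera_buffer', array() );

    if ( count( $buffer ) >= $capacity ) {
        // Buffer full: shed this event, count the loss, and return without
        // blocking the WordPress request. This is the increment behind
        // wp_buffer_dropped_total.
        $dropped = (int) get_option( 'logystera_dropped_total', 0 );
        update_option( 'logystera_dropped_total', $dropped + 1, false );
        return false;
    }

    // Enqueue; a periodic flush worker POSTs the batch to the gateway later.
    $buffer[] = $event;
    update_option( 'logystera_buffer', $buffer, false );
    return true;
}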

Drops happen for three structural reasons:

  1. Traffic outpacing flush cadence. The plugin flushes on a schedule (default every 30 seconds via wp-cron or a heartbeat). A traffic spike enqueues faster than the next flush.
  2. Gateway latency or unavailability. The flush worker holds the buffer open while it POSTs to the gateway. If the POST takes 10 seconds because of network or gateway pressure, the buffer keeps accepting events but cannot drain.
  3. Buffer capacity is too small for the entity's traffic profile. A 1 MB buffer holds maybe 800 events. A site with 500 req/s of authenticated traffic fills that in under two seconds.

wp_buffer_dropped_total rising in lockstep with wp_buffer_size_percent reaching 100 is the canonical pattern: the buffer is full, the plugin is shedding, the gateway is upstream of the symptom.
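
A back-of-envelope check makes the mismatch concrete. The numbers below are the illustrative ones used in this section (500 req/s, a roughly 800-event buffer, the 30-second default flush), not measurements from your entity:

// Back-of-envelope capacity check. Illustrative numbers, not plugin internals.
$req_per_second   = 500;  // peak authenticated traffic
$flush_interval_s = 30;   // default flush cadence
$buffer_capacity  = 800;  // roughly what a 1 MB option-backed buffer holds

$events_per_window = $req_per_second * $flush_interval_s;  // 15,000 events arrive per flush window
$seconds_to_fill   = $buffer_capacity / $req_per_second;   // 1.6 seconds until the buffer is full

printf( "Need room for %d events per flush window, have %d\n", $events_per_window, $buffer_capacity );
printf( "Buffer fills in %.1f s; everything after that is shed until the next flush\n", $seconds_to_fill );

Every fix in 5.3 works by changing one of those inputs: more capacity, a shorter flush interval, or fewer events per request.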

5. Solution

5.1 Diagnose (logs first)

Start from the entity dashboard. The buffer panel shows the four signals above. Confirm wp_buffer_dropped_total is non-zero and rising. Then narrow it down on the WordPress side.

Check the plugin's status endpoint. The plugin exposes its current buffer state at /wp-json/logystera/v1/status (admin-only, requires the entity API key):

curl -u 'apikey:SECRET' https://your-site.com/wp-json/logystera/v1/status | jq

You will get something like:

{
  "buffer_pending": 412,
  "buffer_capacity": 500,
  "buffer_size_bytes": 982016,
  "buffer_size_capacity": 1048576,
  "dropped_total": 2418,
  "last_flush_at": "2026-04-27T11:42:18Z",
  "last_flush_duration_ms": 8412,
  "last_flush_status": "ok"
}

A buffer_pending close to buffer_capacity combined with a last_flush_duration_ms over 5000 is the canonical "flush is too slow" pattern: it produces wp_buffer_size_percent near 100 and a rising wp_buffer_dropped_total.
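
If you want that verdict scripted rather than eyeballed, a short PHP sketch (run it however you normally run ad-hoc PHP against the site, for example wp eval-file) can apply the same thresholds. The URL and credentials are placeholders; the field names match the payload above:

// Sketch: fetch the status payload and apply the "flush is too slow" thresholds.
// Replace the URL and credentials with your own.
$response = wp_remote_get( 'https://your-site.com/wp-json/logystera/v1/status', array(
    'headers' => array( 'Authorization' => 'Basic ' . base64_encode( 'apikey:SECRET' ) ),
) );
if ( is_wp_error( $response ) ) {
    die( $response->get_error_message() . "\n" );
}
$s = json_decode( wp_remote_retrieve_body( $response ), true );

$fill_percent  = 100 * $s['buffer_size_bytes'] / $s['buffer_size_capacity'];
$pending_ratio = $s['buffer_pending'] / $s['buffer_capacity'];

if ( $fill_percent > 80 || $pending_ratio > 0.8 || $s['last_flush_duration_ms'] > 5000 ) {
    echo "flush is not keeping up: expect wp_buffer_dropped_total to keep rising\n";
} else {
    echo "buffer has headroom\n";
}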

Check the WordPress debug log. When WP_DEBUG_LOG is on, the plugin writes a single line per flush failure (not per drop) to /wp-content/debug.log:

grep -E "logystera|buffer" /wp-content/debug.log | tail -50

Patterns to look for and what each surfaces:

  • Logystera: flush timeout after 10000ms → gateway latency → produces a wp_buffer_size_percent climb followed by wp_buffer_dropped_total increments
  • Logystera: buffer full, dropping N events → produces wp_buffer_dropped_total += N
  • Logystera: gateway returned 429 → rate-limited at the edge → produces stalled flush, climbing wp_buffer_pending_count, then drops
  • Logystera: gateway connection refused → network or DNS issue → produces drops as buffer fills with no drain

Check PHP error log for cron stalls. If wp-cron is disabled or stuck, the flush never runs:

grep -E "wp-cron|DISABLE_WP_CRON" /var/log/php-fpm/error.log

A cron-stalled site shows wp_buffer_pending_count rising linearly, never falling, until it caps out and wp_buffer_dropped_total takes over.

Check gateway-side timing. Log into the Logystera dashboard, open the entity, and look at the gateway latency panel. If p95 ingest time is over 2 seconds, the WordPress flush worker is timing out at the edge before the gateway acks. That is a Logystera-side problem, not a WordPress-side problem, and it produces the same wp_buffer_dropped_total symptom.

Every diagnostic above ties back to the same chain: something prevents the flush from completing in time → wp_buffer_size_percent rises → buffer hits capacity → wp_buffer_dropped_total increments.

5.2 Root Causes

The root causes are covered inline with their fixes in 5.3 below; each fix names the cause it addresses.

5.3 Fix

Address the most likely causes first.

Cause 1: Buffer too small for traffic profile. The default buffer is sized for moderate traffic (a few hundred req/s). A WooCommerce store at peak or a membership site with heavy REST usage outgrows it. Raise the buffer in the plugin settings or via constant:

// wp-config.php
define( 'LOGYSTERA_BUFFER_SIZE', 4 * 1024 * 1024 ); // 4 MB
define( 'LOGYSTERA_BUFFER_CAPACITY', 2000 );        // events

This produces an immediate drop in wp_buffer_size_percent and stops wp_buffer_dropped_total from incrementing under the same load. Do not raise it past 8 MB without monitoring memory; the buffer lives in PHP memory during flush.
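
When picking new values, size the buffer for at least one full flush window at peak traffic, with headroom for a missed flush. The sketch below assumes roughly 1.3 KB per serialized event, which is an estimate rather than a plugin guarantee, and a peak request rate you measure yourself:

// Sketch: derive the constants from your measured peak traffic.
$peak_req_per_second = 50;    // measure this for your site
$flush_interval_s    = 30;    // flush interval in seconds (default 30)
$avg_event_bytes     = 1300;  // rough estimate, not a plugin guarantee
$headroom            = 2;     // survive one missed flush

$capacity = $peak_req_per_second * $flush_interval_s * $headroom;  // events
$size     = $capacity * $avg_event_bytes;                          // bytes

printf( "define( 'LOGYSTERA_BUFFER_CAPACITY', %d );\n", $capacity );
printf( "define( 'LOGYSTERA_BUFFER_SIZE', %d ); // ~%.1f MB\n", $size, $size / 1048576 );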

Cause 2: Flush cadence too slow. Default flush is every 30 seconds. For high-volume sites, halve it:

define( 'LOGYSTERA_FLUSH_INTERVAL', 10 ); // seconds

Faster flush cadence means smaller batches and earlier drain. Surfaces as wp_buffer_pending_count staying low and wp_buffer_dropped_total flat. Do not go below 5 seconds — you will saturate the gateway with small POSTs.

Cause 3: wp-cron disabled or unreliable. If DISABLE_WP_CRON is true and you have no real cron driving /wp-cron.php, the flush never fires on its own. Confirm and fix:

wp eval 'echo defined("DISABLE_WP_CRON") && DISABLE_WP_CRON ? "yes" : "no";'
crontab -l | grep wp-cron

A real cron entry every minute is the correct setup. Surfaces as a return to flat wp_buffer_pending_count and zero new drops.
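
To confirm the flush job itself is registered with wp-cron, and not just that cron is enabled, check for its scheduled event. The hook name logystera_flush below is an assumption; look up the real hook with wp cron event list before relying on it:

// Run via wp eval-file. The hook name is hypothetical; substitute the flush
// hook your plugin version actually registers.
$next = wp_next_scheduled( 'logystera_flush' );
echo $next
    ? 'next flush in ' . ( $next - time() ) . " seconds\n"
    : "flush event is not scheduled: nothing will drain the buffer\n";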

Cause 4: Signal cardinality blowout. A misconfigured plugin or theme is flooding one signal type — for example, a custom REST endpoint hit thousands of times per minute. The buffer fills with one repetitive event class while everything else gets dropped. Find the offender:

curl -u 'apikey:SECRET' 'https://your-site.com/wp-json/logystera/v1/status?breakdown=event_type' | jq

If 80% of pending events are one event_type, throttle that signal in the plugin's signal config. This produces a drop in wp_buffer_event_percent and frees capacity for everything else.
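
If your plugin version also exposes a capture filter, a small mu-plugin can shed the flooding event type before it ever reaches the buffer. The filter name logystera_should_capture, the event shape, and the path below are assumptions for illustration; confirm them against the plugin's documentation, and prefer the signal config above where it covers your case:

// Hypothetical filter name and event shape. Verify against the plugin docs before using.
add_filter( 'logystera_should_capture', function ( $capture, $event ) {
    // Drop the one noisy event class at the source so it cannot crowd out everything else.
    if ( isset( $event['event_type'], $event['path'] )
        && 'http.request' === $event['event_type']
        && 0 === strpos( $event['path'], '/wp-json/custom/v1/' ) ) {
        return false;
    }
    return $capture;
}, 10, 2 );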

Cause 5: Gateway latency. If last_flush_duration_ms is consistently over 5 seconds, the problem is upstream. Check the entity's region and network path to the gateway, and check the gateway latency panel in the Logystera dashboard. This is the only fix that requires our involvement; open a support ticket with the entity ID. Surfaces as wp_buffer_size_percent flat after fix.

Cause 6: Gateway 429s. Your entity is hitting an ingest rate limit. The gateway returns 429, the plugin backs off, the buffer fills. Confirm in debug.log (look for gateway returned 429). The fix is to raise the entity's ingest quota — a tenant-api setting on your billing plan — or reduce signal volume by disabling non-critical hooks.

5.4 Verify

The verification signal is wp_buffer_dropped_total. After applying a fix, watch the counter for 30 minutes under representative traffic. Healthy behavior:

  • wp_buffer_dropped_total flat (no new increments)
  • wp_buffer_size_percent oscillating between 10% and 60% — never sustained above 80%
  • wp_buffer_pending_count rising during request bursts and falling after each flush — sawtooth, not staircase
  • wp_buffer_event_percent stable, no single event type dominating

Quick check from the WordPress side:

# Snapshot now, wait 30 minutes, snapshot again — dropped_total should be unchanged
curl -u 'apikey:SECRET' https://your-site.com/wp-json/logystera/v1/status | jq '.dropped_total'

If the counter has not moved in 30 minutes under normal-to-peak traffic, the buffer is keeping up. If it has moved by even 10, the fix is partial and you need to address the next most likely cause from section 5.3.

In /wp-content/debug.log, the patterns buffer full and flush timeout should disappear:

grep -cE "buffer full|flush timeout" /wp-content/debug.log

A count of zero against a 24-hour-old log file is the correct end state.

6. How to Catch This Early

Fixing it is straightforward once you know the cause. The hard part is knowing it happened at all.

This issue surfaces as wp_buffer_dropped_total.

This is the unusual case where Logystera is both the thing being monitored and the thing reporting the failure. The plugin is intentionally honest: when monitoring data is incomplete, you see it on the dashboard, you see a drop count on the entity status, and you see the four buffer signals trending. There is no version of this where the platform pretends the events shipped when they did not.

That honesty matters more for prospects than for existing customers. Plenty of monitoring tools fail silently at the edge — agent OOMs, network drops, queue overflows — and report 100% delivery anyway, because the agent has no out-of-band channel to admit the loss. Logystera publishes wp_buffer_dropped_total precisely so the gap is visible. If your audit trail has a hole, you know about it.

Operationally, the prevention story is rule-based. A Logystera rule on wp_buffer_dropped_total (rate over 5-minute window, threshold > 0) fires the moment any WordPress entity starts shedding. A second rule on wp_buffer_size_percent (sustained over 80% for 2 minutes) fires before drops start, giving you a window to act. Both are part of the default WordPress rule set and require no custom configuration.

The failure mode is real, the detection is built in, and the resolution path is the six causes in section 5.3. Treat wp_buffer_dropped_total the same way you treat php.fatal: every increment is a question you need to answer.

7. Related Silent Failures

  • wp.cron type=missed_schedule — when wp-cron stalls, the plugin's flush job is one of the things that stops running, so a cron failure cascades into buffer drops within minutes.
  • http.request 5xx correlation — gateway pressure (5xx from the ingest path) and wp_buffer_dropped_total rise together; treat them as a single incident.
  • wp.state_change near drop windows — a plugin or theme update that lands during a drop window may itself be the missing event you cannot see; cross-reference with active_plugins_hash snapshots.
  • auth.attempt rate-limit underreporting — credential-stuffing detection thresholds assume complete event delivery; a sustained drop window can hide an attack in progress.
  • memory_near_limit on flush worker — large buffer + large batch + slow gateway can push the flush request itself into PHP memory pressure, producing a php.fatal on the cron worker that takes the next flush down with it.

See what's actually happening in your WordPress system

Connect your site. Logystera starts monitoring within minutes.
