The architecture behind log-derived failure detection
A log-first pipeline that turns the events your applications already emit into deterministic metrics, calibrated alerts, and dashboards engineers actually trust.
Logystera is not generic observability. It is a domain-specific intelligence layer for WordPress, Drupal, and HashiCorp Vault — built around one principle: the truth about your system is already in its logs. Read it correctly, derive metrics deterministically, and surface what matters.
The pipeline
From application event to actionable signal
Five components, one direction of flow. Each does one job and does it well. The Processor — the brain — is where rules and metrics live.
Plugin / Agent
Source-side capture
- Reads core hooks
- Buffers locally
- Async dispatch
- HMAC-signed
Gateway
Stateless edge
- HMAC verification
- Rate-limit per entity
- Normalize envelopes
- Hand off to queue
Processor
The brain
- Match metric definitions
- Evaluate detection rules
- Derive metrics
- Update entity snapshots
Storage
Time-series + state
- Metrics in time-series
- Snapshots in cache
- Audit data persisted
- Encrypted at rest
Dashboards & alerts
UI for humans
- Per-site dashboards
- Email / webhook alerts
- Drilldown to event
- Role-based access
Each component is independently scaled and replaceable. The Gateway is stateless — kill any task and traffic re-routes. The Processor runs as horizontally scaled workers. Storage is split: hot state in cache, derived metrics in a time-series store, durable audit data in PostgreSQL.
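To make the Plugin/Agent and Gateway handoff concrete, here is a minimal PHP sketch, assuming a WordPress environment. The endpoint URL, header names, and LOGYSTERA_* constants are invented for illustration; this is the general shape of the mechanism, not Logystera's documented wire format.

```php
<?php
// Hypothetical sketch: buffer events locally, flush asynchronously on shutdown,
// sign the payload with HMAC. Names, URL, and constants are illustrative only.

$GLOBALS['logystera_buffer'] = [];

// 'shutdown' runs after the response is generated; 'blocking' => false means
// PHP does not wait for the gateway's reply, so page latency is untouched.
add_action('shutdown', function (): void {
    if (empty($GLOBALS['logystera_buffer'])) {
        return;
    }
    $body = json_encode(['events' => $GLOBALS['logystera_buffer']]);
    wp_remote_post('https://ingest.example.com/v1/events', [
        'blocking' => false,
        'headers'  => [
            'Content-Type' => 'application/json',
            'X-Tenant-Id'  => LOGYSTERA_TENANT_ID,                          // hypothetical constant
            'X-Signature'  => hash_hmac('sha256', $body, LOGYSTERA_SECRET), // per-tenant secret
        ],
        'body'     => $body,
    ]);
});

// Gateway side: recompute the HMAC over the raw body and compare in constant
// time. Verification needs only the shared secret, so the gateway holds no state.
function verify_signature(string $raw_body, string $signature, string $secret): bool {
    return hash_equals(hash_hmac('sha256', $raw_body, $secret), $signature);
}
```

Because verification depends only on the request body and a shared secret, any Gateway task can die and another can pick up the traffic, which is what keeps the edge stateless.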
How detection actually works
Three layers of intelligence
A monitoring product is only as good as what it does with the events it receives. Here is what happens between a wp_mail() call on your site and an alert in your inbox.
Signal extraction
We read structured events your application already emits — every wp_mail() call, every cron tick, every authentication attempt, every PHP fatal error, every Drupal queue worker run. We don’t grep raw text logs.
Our plugins tap the WordPress and Drupal hook systems at well-defined points. That makes signals deterministic instead of pattern-matched. The same event always produces the same signal — no false positives from log format changes.
Examples of captured signals:
- http.request — every page load
- wp.email — every wp_mail() call
- cron.tick — every scheduled hook
- auth.attempt — every login attempt
- php.error — every error/notice/fatal
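As a concrete illustration of the hook-tap described above, here is a minimal, hypothetical PHP sketch. The logystera_emit() helper is an invented placeholder for the buffer write; the hooks themselves (wp_mail_failed, wp_login_failed) are real WordPress actions.

```php
<?php
// Hypothetical capture shim: typed WordPress hook callbacks become structured
// signals. logystera_emit() is an illustrative placeholder, not a real API.

function logystera_emit(string $signal, array $fields): void {
    $GLOBALS['logystera_buffer'][] = [
        'signal'      => $signal,
        'occurred_at' => microtime(true), // source time, stamped at the moment of the event
        'fields'      => $fields,
    ];
}

// wp.email: core fires wp_mail_failed with a WP_Error describing the failure.
add_action('wp_mail_failed', function (WP_Error $error): void {
    logystera_emit('wp.email', ['outcome' => 'failure', 'code' => $error->get_error_code()]);
});

// auth.attempt: core fires wp_login_failed on every failed login.
// The username is deliberately not recorded: outcomes, never credentials.
add_action('wp_login_failed', function (string $username): void {
    logystera_emit('auth.attempt', ['outcome' => 'failure']);
});
```

Because these are typed callbacks on well-defined hooks rather than regexes over log text, the same event always produces the same signal.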
Rule matching
A library of detection patterns derived from real production failures — silent cron, mail-queue stalls, brute-force ramps, REST-API spikes, slow admin pages, plugin-update flips. Patterns are deterministic: same inputs always yield the same alerts.
Each rule has a condition, a threshold, a time window, and a suppression policy. Thresholds are tuned on real customer data, not synthetic load, and calibrated to fire when they should, not merely to fire less often than other tools. A sketch of the counter-zero shape follows the list below.
Rule shapes we ship:
- Counter-zero (cron didn’t run)
- Rate-spike (login attempts surged)
- Ratio-shift (email failures up vs baseline)
- State-change (plugin deactivated)
- Latency-percentile (P95 admin response)
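Here is a minimal PHP sketch of the counter-zero shape, following the rule anatomy above (condition, window, suppression); the class and field names are illustrative, not Logystera's shipped schema.

```php
<?php
// Hypothetical counter-zero rule: fire when a counter stayed at zero for the
// whole evaluation window, subject to a suppression policy between repeats.

final class CounterZeroRule {
    public function __construct(
        private string $metric,           // series this rule watches, e.g. 'cron_execution_count'
        private int    $window_seconds,   // evaluation window
        private int    $suppress_seconds, // minimum gap between repeat alerts
        private ?float $last_fired = null,
    ) {}

    /** @param array<int,float> $points metric points keyed by source timestamp */
    public function evaluate(array $points, float $now): bool {
        $sum = 0.0;
        foreach ($points as $ts => $value) {
            if ($ts >= $now - $this->window_seconds) {
                $sum += $value;
            }
        }
        $suppressed = $this->last_fired !== null
            && ($now - $this->last_fired) < $this->suppress_seconds;
        if ($sum == 0.0 && !$suppressed) {
            $this->last_fired = $now;
            return true; // fire: the counter never moved inside the window
        }
        return false;
    }
}
```

The same inputs always walk the same branches, which is what "same inputs always yield the same alerts" means in practice.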
Derived metrics
Every signal becomes a time-series metric: cron_execution_count, wp_mail_failure_rate, login_attempts_per_endpoint, php_fatal_count, queue_lease_age_seconds.
Metrics drive both dashboards and alert thresholds. They are the same series — the alert that wakes you at 3 AM is reading the same numbers you see on your dashboard at 10 AM. No drift between the two.
Metric properties:
- Counter, gauge, or histogram
- Labelled by entity, environment, route
- Aggregation-friendly across sites
- Source-timestamped (not ingestion-time)
- Retention configurable per plan
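To show how a signal stream becomes one of these series, here is a hedged PHP sketch of deriving wp_mail_failure_rate; the array shapes and function name are assumptions, and only the metric name comes from the text above.

```php
<?php
// Hypothetical derivation: wp.email signals in, a gauge series out.
// Bucketing is on the source timestamp, never on ingestion time.

/** @param array<int,array{occurred_at: float, outcome: string}> $signals */
function derive_wp_mail_failure_rate(array $signals, int $bucket_seconds = 60): array {
    $buckets = [];
    foreach ($signals as $s) {
        $bucket = intdiv((int) $s['occurred_at'], $bucket_seconds) * $bucket_seconds;
        $buckets[$bucket] ??= ['failure' => 0, 'total' => 0];
        $buckets[$bucket]['total']++;
        if ($s['outcome'] === 'failure') {
            $buckets[$bucket]['failure']++;
        }
    }
    $series = [];
    foreach ($buckets as $ts => $c) {
        $series[$ts] = $c['failure'] / $c['total']; // gauge: failures / attempts per bucket
    }
    return $series;
}
```

The resulting series is the single source both a dashboard panel and a ratio-shift rule would read, which is how the 3 AM alert and the 10 AM dashboard stay in agreement.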
Engineering credibility
Built for controlled cardinality
Anyone who has run a production observability stack knows how those stacks die: a runaway label explodes the time-series count from thousands to millions overnight. We designed the metric model around constraints that make that failure mode impossible.
Pre-defined label sets
Labels are declared per metric: tenant_id, entity_id, signal_type, plus a small fixed set per signal. No user-injected high-cardinality fields. No request URLs as labels. No raw user IDs.
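A minimal sketch of the declared-label guard, assuming a write-time check; the ALLOWED_LABELS table and function name are illustrative.

```php
<?php
// Hypothetical label guard: every metric declares its label set up front,
// and anything outside the declared set is rejected at write time.

const ALLOWED_LABELS = [
    'wp_mail_failure_rate' => ['tenant_id', 'entity_id', 'signal_type'],
];

function validate_labels(string $metric, array $labels): void {
    $allowed = ALLOWED_LABELS[$metric] ?? [];
    foreach (array_keys($labels) as $key) {
        if (!in_array($key, $allowed, true)) {
            // A raw URL or user ID sneaking in as a label would explode cardinality,
            // so the write is refused rather than silently creating new series.
            throw new InvalidArgumentException("Undeclared label '$key' on $metric");
        }
    }
}
```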
Deterministic evaluation
Events are processed in order per entity. Rule firing is based on accumulated state, not parallel reads. Same input stream produces the same alerts and the same metric series — every time. Replays are reproducible.
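A hedged sketch of the per-entity ordered fold this describes; the grouping, sorting, and pure $apply callback are assumptions about shape, not the actual worker code.

```php
<?php
// Hypothetical per-entity fold: events are applied in source-time order to
// accumulated state, so replaying the same stream reproduces the same output.

/** @param array<int,array{entity_id: string, occurred_at: float}> $events */
function process_in_order(array $events, callable $apply): array {
    $streams = [];
    foreach ($events as $e) {
        $streams[$e['entity_id']][] = $e; // one ordered stream per entity
    }
    $states = [];
    foreach ($streams as $entity => $stream) {
        usort($stream, fn($a, $b) => $a['occurred_at'] <=> $b['occurred_at']);
        $state = [];
        foreach ($stream as $event) {
            $state = $apply($state, $event); // pure fold: no parallel reads, no races
        }
        $states[$entity] = $state;
    }
    return $states;
}
```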
Backfill-correct timestamps
Every event carries the source timestamp from when it actually happened — not when the gateway received it. Network delays, retry queues, and ingest lag don’t corrupt your timeline. Historical replay works.
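A small sketch of the envelope convention this implies: the agent's occurred_at is preserved, and the gateway records its own receive time under a separate key. Field names are illustrative.

```php
<?php
// Hypothetical gateway normalization: the source timestamp travels with the
// event; ingest time is kept separately and never overwrites occurred_at.

function normalize_envelope(array $event): array {
    return [
        'occurred_at' => $event['occurred_at'], // when it actually happened, set by the agent
        'received_at' => microtime(true),       // ingest time, kept for lag diagnostics only
    ] + $event;
}
```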
Performance & scale
Designed for production traffic
Concrete numbers. These are the operating envelope of the current platform: limits we designed for, validated under load, and keep improving.
< 5ms
Plugin-side overhead
Per page request, with async dispatch. Buffered locally and flushed in the background — your site response time is unaffected.
10K/s
Gateway events per tenant
Designed throughput per tenant ingest path. Horizontal scaling for higher rates. Rate-limit kicks in before backpressure hits.
< 1 min
End-to-end latency
From event emission to alert delivery in normal operation. Most events flow through in under 15 seconds. Built-in slack for queue spikes.
99.9%
Audit durability
For stored signals once written to durable storage. Plugin-side buffer absorbs short network outages — no data loss for typical hiccups.
We don’t claim billion-events-per-second. We are a young product. These are the real numbers — they will improve. We’d rather give you accurate operating bounds today than headline specs we can’t back up.
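The "rate-limit kicks in before backpressure hits" behavior above is the classic token-bucket shape; the sketch below is an assumption about mechanism, not Logystera's actual limiter, and the rates are illustrative.

```php
<?php
// Hypothetical per-tenant token bucket: shed load at the edge with a fast
// rejection instead of letting the queue absorb an unbounded burst.

final class TokenBucket {
    private float $tokens;
    private float $updated;

    public function __construct(private float $rate, private float $burst) {
        $this->tokens  = $burst;
        $this->updated = microtime(true);
    }

    public function allow(): bool {
        $now = microtime(true);
        // Refill proportionally to elapsed time, capped at the burst size.
        $this->tokens  = min($this->burst, $this->tokens + ($now - $this->updated) * $this->rate);
        $this->updated = $now;
        if ($this->tokens >= 1.0) {
            $this->tokens -= 1.0;
            return true;  // accept the event
        }
        return false;     // reject early (e.g. HTTP 429) instead of queueing
    }
}
```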
Data handling
What we capture, where it lives, who controls it
Monitoring tools that aggregate sensitive data become a target. We architected the data model to minimize what we capture and maximize what you control.
What is captured
- Structured event metadata: route, status code, timestamp, signal type
- Source plugin / module identifiers (which code emitted the signal)
- Numeric measurements: response time, memory, query count
- Authentication outcomes (success / failure, not credentials)
- Aggregated counts: cron runs, mail attempts, REST calls per route
What is NOT captured
- Form contents, comment bodies, post drafts, user-typed text
- Passwords, tokens, API keys — ever, in any form
- PII fields: email content, names, phone numbers, addresses
- Raw access logs or full request/response bodies
- Database query parameters with potentially sensitive values
Where it lives
EU primary region (Frankfurt). Encrypted at rest with AES-256. TLS 1.2+ in transit. Regional data-residency options on the roadmap for North America and APAC.
Retention
Configurable per plan: 7 days (Free), 30 days (Pro), 90+ days (Enterprise). After retention, signals are deleted — no shadow archive. Per-tenant TTL is enforced at the storage layer.
Data control
Customers own their data. We do not train AI models on customer signals. We do not resell, share, or anonymize-and-monetize. Account deletion purges your data on request.
Why this exists
Logystera was built to productize the workflow engineers already run by hand: read the logs, derive the metrics, watch them so the team doesn’t have to.
See the architecture in action on your stack
Connect a WordPress, Drupal, or Vault entity in under five minutes. Logystera derives metrics and alerts from your existing logs — no infrastructure changes, no agents to configure.