
Drupal module installed without your knowledge — building an audit trail

1. Problem

You open /admin/modules and a row catches your eye. A module you do not recognize is enabled. Maybe a generic name like Admin Tools or Field Permissions Pro. Maybe something blatantly wrong like php — the legacy PHP filter, which has no business being on a production site. The status is "Enabled."

You did not install it. The other admins on your team did not install it. There is no deployment ticket, no commit in your config repo, no row in your CI history that explains it. Drupal's UI shows you what is currently enabled — it does not tell you who enabled it, when, or from which session.

This is the situation behind the search query "drupal module installed without my knowledge." The site is up, content is rendering, the status report is mostly green. But a module is sitting in your active configuration that nobody on your team put there. To audit who installed a module in Drupal, the core admin UI gives you almost nothing — you have to reconstruct it from logs, specifically the module.install signal that fires the moment a module is enabled, plus the auth and HTTP signals around it.

2. Impact

A Drupal module that arrives without a paper trail is one of the highest-severity silent failures on the platform. The realistic scenarios:

  • Compromised admin or User 1 session. Stolen password, reused credential, hijacked session cookie. The attacker logs in and enables a module that gives them code execution or persistence.
  • PHP filter abuse. A legacy module like php or contrib code that allows arbitrary PHP execution gets enabled — a web shell with no file drop required.
  • Drush from a compromised shell. An attacker with SSH or a deploy key runs drush en module_name. No HTTP request, no admin session, but the module is enabled in active config.
  • Config import drift. drush config:import from a tampered config tree enables a module nobody reviewed. The change looks legitimate because it came through the official config workflow.
  • Update hook side effect. A contrib module's hook_update_N enables a sub-module or dependency you did not realize was bundled.
  • Insider mistake. A contractor clicked "Install" on a module they were "just trying out," forgot, and never disabled.

The damage path on Drupal is fast. The php filter alone gives direct PHP execution through any text format the attacker can edit. A module with hook_cron runs whatever code it wants every cron tick. A module that registers routes can quietly accept POSTs to /some-innocuous-path and act as a backdoor. None of this triggers an uptime check, none of it shows up in pageview reports. Every hour the unknown module is enabled is an hour the attacker has working code on your server.

3. Why It’s Hard to Spot

Drupal core's audit story for module changes is shallow. The dblog module writes a single watchdog entry like module installed: php — severity info, in a table nobody reads, with no actor IP and no request context. If dblog is disabled (it often is on high-traffic sites because of write load), even that record is gone.

The admin itself shows current state and nothing else. /admin/modules is a checkbox list. There is no "installed by" column, no timestamp visible to admins, no diff view between previous and current core.extension. The Status report does not flag unknown modules.

Uptime monitors miss it completely. A module enable does not change a single byte of rendered HTML on the homepage. Synthetic checks pass. If the module registers a route at /backdoor-x9, no monitoring you have is watching that path.

Email notifications? Drupal core sends none for module installs. Security modules can be configured to alert, but they store state in the same database the attacker just compromised, and they can be uninstalled in the same session that enabled the malicious module — silencing themselves before they ever fire.

The config import workflow is its own blind spot. If CI runs drush config:import on every deploy, an enabled module in core.extension.yml is installed silently as part of the import. No review step, no diff alert, no "are you sure."

This is the textbook Drupal silent failure: a structural code-surface change with serious security implications, native logging that is shallow at best and absent at worst, no visible symptom until somebody notices an unfamiliar row.

4. Cause

When something enables a Drupal module — through /admin/modules, through drush en, through \Drupal::service('module_installer')->install(), through config:import, or through a database edit to core.extension — Drupal's module installer fires a sequence of internal events. The Logystera Drupal agent hooks the ModuleInstaller service and emits a single normalized signal: module.install.

A module.install event is the audit record of a structural change to the site's code surface. Its payload includes:

  • action — install, uninstall, or dependency_install
  • module — the machine name (e.g. php, webform, rest, views_ui)
  • module_version — the version string from the module's .info.yml
  • actor_uid — the Drupal user ID (or 0 for anonymous / Drush / cron / config import)
  • actor_name — the username
  • request_ip — source IP, or cli for Drush / Drupal Console
  • request_uri — typically /admin/modules/install or /admin/modules POST, or /admin/config/development/configuration/full/import for config imports
  • trigger — ui, drush, config_import, update_hook, api
  • timestamp
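
Concretely, a captured event might look like the following — a hypothetical rendering with the field names from the list above and illustrative values (Logystera's actual wire format may differ):

```json
{
  "event_type": "module.install",
  "action": "install",
  "module": "php",
  "module_version": "8.x-1.1",
  "actor_uid": 1,
  "actor_name": "admin",
  "request_ip": "203.0.113.9",
  "request_uri": "/admin/modules/list/confirm",
  "trigger": "ui",
  "timestamp": "2026-01-12T03:12:31Z"
}
```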

This is the signal that answers "who installed this module." Not the filesystem timestamp on web/modules/contrib/<module>/, not the core.extension.yml diff. The signal, captured at the moment the installer ran, with the actor and trigger context attached.

Supporting signals fill in the rest of the chain. auth.login_success and auth.login_failed show the session that produced the action — including failed attempts that preceded the success, the credential-stuffing tell. http.request shows hits to /admin/modules, /admin/modules/install, and /admin/modules/list/confirm. config.import events fire when active config is updated through the import workflow — the alternative path a module.install can travel. Together these four signals reconstruct: who logged in, from where, whether they failed first, what admin pages they touched, and what changed in active config.
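
A minimal sketch of that correlation, assuming a flat key=value line format for the agent log (the real Logystera log format may differ — adapt the field names to whatever your agent actually emits):

```shell
# Hypothetical agent log -- the line layout here is an assumption, not
# Logystera's documented format.
cat > /tmp/agent.log <<'EOF'
2026-01-12T03:10:02Z event_type=auth.login_failed actor_name=admin request_ip=203.0.113.9
2026-01-12T03:10:05Z event_type=auth.login_failed actor_name=admin request_ip=203.0.113.9
2026-01-12T03:11:40Z event_type=auth.login_success actor_uid=1 actor_name=admin request_ip=203.0.113.9
2026-01-12T03:12:15Z event_type=http.request request_path=/admin/modules method=POST request_ip=203.0.113.9
2026-01-12T03:12:31Z event_type=module.install action=install module=php actor_uid=1 trigger=ui request_ip=203.0.113.9
EOF

# Step 1: find the install event and extract its source IP.
ip=$(grep 'event_type=module.install' /tmp/agent.log \
      | grep -o 'request_ip=[^ ]*' | cut -d= -f2)

# Step 2: pull every signal from that IP -- the kill chain, in order:
# failed logins, one success, the admin page hit, the install.
grep "request_ip=$ip" /tmp/agent.log
```

The same two-step pattern works in any log search tool: pivot from the module.install event to its request_ip, then widen the window around it.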

5. Solution

5.1 Diagnose (logs first)

Stop staring at /admin/modules. Go to logs.

1. Drupal watchdog (dblog) — the native breadcrumb, if it is on.

drush watchdog:show --type=system --severity=info --count=200 | grep -iE "install|enable|module"
drush sql:query "SELECT timestamp, uid, hostname, message FROM watchdog WHERE type='system' AND message LIKE '%install%' ORDER BY wid DESC LIMIT 50"

This surfaces the same data Logystera reads to produce module.install. Note the uid (actor) and hostname (request IP) fields. If uid=0 and hostname looks like a private/internal IP or cli, the install did not come through a logged-in admin session.

2. Web server access logs — find the install HTTP request.

The UI install path is POST /admin/modules followed by POST /admin/modules/list/confirm. A direct install via the install link is POST /admin/modules/install.

grep -E "POST /admin/modules" /var/log/nginx/access.log
grep -E "/admin/modules/list/confirm|/admin/modules/install" /var/log/nginx/access.log | awk '{print $1, $4, $7, $9}'

These are the events that produce http.request signals with request_path=/admin/modules and method POST. Cross-reference the timestamps with watchdog.
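
If you are unsure which awk fields map to which parts of the combined log format, a quick check on a sample line (values invented) sorts it out:

```shell
# One combined-format access log line -- sample values, not real traffic.
line='203.0.113.9 - - [12/Jan/2026:03:12:31 +0000] "POST /admin/modules/list/confirm HTTP/1.1" 302 578 "-" "Mozilla/5.0"'

# $1 = client IP, $4 = timestamp (opening bracket included),
# $7 = request path, $9 = HTTP status code.
echo "$line" | awk '{print $1, $4, $7, $9}'
# -> 203.0.113.9 [12/Jan/2026:03:12:31 /admin/modules/list/confirm 302
```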

3. PHP error log — confirm the install ran cleanly or check for errors.

A failed install (a hook_install exception, a missing dependency, a schema clash) leaves traces:

grep -iE "ModuleInstaller|hook_install|module_install" /var/log/php-fpm/error.log
tail -f /var/log/php-fpm/www-error.log

4. Active config and the core.extension object.

drush config:get core.extension --include-overridden
drush sql:query "SELECT name, data FROM config WHERE name = 'core.extension'"

core.extension is the canonical list of enabled modules in active config. Compare it against your repo's config/sync/core.extension.yml:

diff <(drush config:get core.extension --format=yaml) config/sync/core.extension.yml

Any module in active config but not in sync is configuration drift: it was enabled outside your reviewed config workflow. Check for a config.import event in the same window — its presence or absence tells you which path the change took.
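
The same comparison can be reduced to a plain list of drifted module names. A bash sketch on sample data — the awk extraction assumes the standard two-space indentation of core.extension.yml; in practice the two inputs come from drush config:get and config/sync/core.extension.yml:

```shell
# Sample core.extension exports -- illustrative content only.
cat > /tmp/active.yml <<'EOF'
module:
  node: 0
  php: 0
  views: 0
theme:
  olivero: 0
EOF
cat > /tmp/sync.yml <<'EOF'
module:
  node: 0
  views: 0
theme:
  olivero: 0
EOF

# Extract the module machine names: the keys nested under "module:".
modules() { awk '/^module:/{f=1;next} /^[^ ]/{f=0} f{sub(/^ +/,""); sub(/:.*/,""); print}' "$1"; }

# Modules enabled in active config but absent from sync -- the drift.
comm -23 <(modules /tmp/active.yml | sort) <(modules /tmp/sync.yml | sort)
# -> php
```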

5. Drush / shell history — Drush runs do not produce HTTP requests.

grep "drush" /var/log/auth.log
grep -E "drush (en|pm:install|pm-enable)" /home/*/.bash_history /root/.bash_history 2>/dev/null

If the install came from the CLI, actor_uid=0 on the module.install signal and request_ip=cli. That is a much more serious finding than a logged-in admin doing something unexpected — it means somebody had shell.

6. The signal you actually want: module.install.

If the Logystera Drupal agent is installed, every install, uninstall, and dependency install has been emitted as module.install with full actor and trigger context. Filter on event_type=module.install AND action=install, then correlate:

  • auth.login_success and auth.login_failed for the same actor_uid in the preceding 60 minutes — a burst of failures followed by one success and then a module.install is the kill chain.
  • http.request events for /admin/modules* from the same request_ip in the same window.
  • config.import events in the same window — if one fired, the install came through config import, not the UI.

The four-signal correlation tells you who logged in, what they failed on first, what admin pages they touched, what trigger fired the install, and what module ended up enabled.

5.2 Root Causes

(see root causes inline in 5.3 Fix)

5.3 Fix

Map the cause to the signal pattern, then fix the underlying access path.

Cause A: Compromised admin session via UI. Signal pattern: module.install with trigger=ui and actor_uid of a real admin, preceded by auth.login_success from an unfamiliar IP, often with a burst of auth.login_failed against the same username before it. Fix: invalidate all sessions (drush sql:query "TRUNCATE sessions"), rotate every admin password, enable TFA via the tfa module, audit users_field_data for accounts you did not create, review user__roles for unexpected administrator grants. Uninstall the unknown module (drush pmu <module>), remove its files, rebuild cache.

Cause B: Drush / shell foothold. Signal pattern: module.install with trigger=drush, actor_uid=0, request_ip=cli. No corresponding auth.login_success, no http.request to /admin/modules. Fix: this is a server-level compromise. Audit auth.log for SSH access, rotate SSH keys, check ~/.ssh/authorized_keys for every web user, review CI/CD deploy keys. Restore from known-good backup if integrity cannot be proven.

Cause C: Config import drift. Signal pattern: module.install with trigger=config_import, paired with a config.import event in the same second, actor_uid matching the deploy user. Fix: review the imported diff. Check git history on config/sync/core.extension.yml for an unauthorized commit. Add a CI check that fails if core.extension.yml adds a module not on an allow-list.
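
That CI check can be a few lines of shell. A sketch with inline sample files — in a real pipeline the allow-list lives in the repo and the yml path points at config/sync/core.extension.yml:

```shell
# CI gate sketch: list modules in core.extension.yml that are not on the
# allow-list; a non-empty result should fail the build.
# File paths and contents below are illustrative.
cat > /tmp/allowlist.txt <<'EOF'
node
views
EOF
cat > /tmp/core.extension.yml <<'EOF'
module:
  node: 0
  php: 0
  views: 0
EOF

# Extract module names (keys under "module:"), drop any that are approved.
unapproved=$(awk '/^module:/{f=1;next} /^[^ ]/{f=0} f{sub(/^ +/,""); sub(/:.*/,""); print}' \
    /tmp/core.extension.yml | grep -vxF -f /tmp/allowlist.txt || true)

if [ -n "$unapproved" ]; then
  echo "CI FAIL: unapproved module(s): $unapproved"   # exit 1 here in a real pipeline
else
  echo "CI OK"
fi
```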

Cause D: Dependency pull-in. Signal pattern: module.install with action=dependency_install, fired alongside a parent module the team installed intentionally. Fix: usually benign — but document it. If the dependency is suspicious (unmaintained, security-flagged), pin or replace the parent.

Cause E: PHP filter or known-dangerous module enabled. Signal pattern: module.install where module is on a high-risk list — php, php_filter, devel_generate, contrib modules with active CVEs. Fix: uninstall immediately (drush pmu php && drush cr), purge text formats that referenced PHP filter, audit nodes and blocks for executable PHP, treat the site as compromised. PHP filter on production is the closest Drupal equivalent to "drop me a shell, please."

Cause F: Update hook side effect. Signal pattern: module.install fired during a drush updb window, trigger=update_hook. Fix: read the contrib module's .install file to confirm intent. If unintended, file upstream and pin the previous version.

5.4 Verify

You are looking for two things: the absence of unauthorized module.install events, and the absence of the access path that allowed them.

Signals that should stop appearing:

  • No new module.install events from compromised accounts (those accounts should be locked, rotated, or deleted).
  • No new module.install events with actor_uid=0 and trigger=drush unless tied to a documented deploy window.
  • No new module.install events with trigger=config_import unless tied to a reviewed CI pipeline run.
  • No module.install events for modules not on your team's approved list.

Signals that should appear normally:

  • auth.login_success only from known admins on known networks, with no auth.login_failed bursts beforehand.
  • http.request to /admin/modules correlated only with deliberate, documented admin sessions.
  • config.import only during scheduled deploys.

What to grep:

drush watchdog:show --severity=info --count=500 | grep -iE "module.*install"
grep "module.install" /var/log/logystera/agent.log | tail -100
grep -E "actor_uid=0|trigger=drush" /var/log/logystera/agent.log
diff <(drush config:get core.extension --format=yaml) config/sync/core.extension.yml

Timeframe: monitor for 72 hours of normal traffic, including at least one deploy cycle. If no unexpected module.install events appear, no auth.login_failed bursts spike against admin accounts, active core.extension matches sync config, and /admin/modules matches your documented expected state, the immediate incident is resolved.

A clean state looks like: zero module.install events on most days, occasional ones tied to a known deploy with a corresponding config.import and a CI run ID, never a module.install with actor_uid=0 outside a deploy window. Anything else is a question that needs an answer.

6. How to Catch This Early

Fixing it is straightforward once you know the cause. The hard part is not removing an unknown module — it is knowing it was installed in the first place. The issue surfaces as a single signal: module.install.

Drupal core gives you a shallow watchdog row and nothing else. The dblog table sits inside the same database the attacker just got access to. The status report does not flag unknown modules. Email notifications for installs do not exist. Any audit that lives inside the application it audits has the same fragility as the application itself.

This is exactly the kind of failure that requires log intelligence sitting outside the Drupal process. Every module install, uninstall, and dependency install produces a module.install signal that Logystera captures the moment Drupal's ModuleInstaller service runs, with acting user, IP, trigger type, and request context attached. The signal leaves the Drupal instance immediately and is correlated with auth.login_success, auth.login_failed, http.request to /admin/modules*, and config.import to reconstruct the full sequence.

A rule on top of module.install — fire on any install outside business hours, any install with actor_uid=0 not tied to a known deploy window, any install of a module not on the approved allow-list, any install of a high-risk module like php — turns a blind spot into a real-time alert. The goal is not to slow legitimate development. The goal is to make sure no module change happens without being seen, attributed, and either expected or escalated within minutes.
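
As an illustration, the actor_uid=0 and high-risk-module rules reduce to a filter over the event stream. The key=value log format below is an assumption, and in practice this logic belongs in your alerting pipeline rather than a grep loop:

```shell
# Sample events -- line format assumed, values invented.
cat > /tmp/agent2.log <<'EOF'
2026-01-12T14:00:10Z event_type=module.install module=webform actor_uid=1 trigger=ui
2026-01-13T02:47:03Z event_type=module.install module=php actor_uid=0 trigger=drush
EOF

highrisk='^(php|php_filter|devel_generate)$'
grep 'event_type=module.install' /tmp/agent2.log | while read -r line; do
  mod=$(echo "$line" | grep -o 'module=[^ ]*' | cut -d= -f2)
  uid=$(echo "$line" | grep -o 'actor_uid=[^ ]*' | cut -d= -f2)
  # Alert on CLI/anonymous installs and on any high-risk module name.
  if [ "$uid" = "0" ] || echo "$mod" | grep -qE "$highrisk"; then
    echo "ALERT: $line"
  fi
done
```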

7. Related Silent Failures

Other module.install and adjacent Drupal failures worth watching:

  • Module silently uninstalled. module.install with action=uninstall targeting a security module (security_review, seckit, tfa) — the Drupal equivalent of disabling the alarm before the break-in.
  • Admin role granted unexpectedly. Surfaces as a config.import or direct DB write to user__roles, often paired with a fresh auth.login_success from an unfamiliar IP.
  • PHP filter text format created. config.import adding a text format that allows the php_code filter — a web shell through any node body the attacker can edit.
  • REST or JSON:API endpoints enabled. module.install for rest, jsonapi, or restui, followed by http.request enumeration of /jsonapi/user/user. Recon-then-foothold pattern.
  • Failed login burst preceding a config change. auth.login_failed spike, one auth.login_success, then module.install or config.import. The credential-stuffing kill chain in three signals.

Each of these is invisible in the Drupal admin. Each one is a single signal away from being obvious.

See what's actually happening in your Drupal system

Connect your site. Logystera starts monitoring within minutes.

Copyright © 2026 Logystera. All rights reserved.