When AI Assistants Get Promoted: Why Attackers Are Using AI as Operators, and Why Defenders Should Too

For years, discussions about AI in security have focused on trends, novelty and speculative risk. None of that matters to an adversary. What matters is leverage.
AI’s value in offense is not creativity or intelligence. It is friction removal, time compression and scale. Where leverage appears, it is exploited immediately, without governance, ethics reviews or strategic debate. That dynamic — not model capability — is why offensive adoption has been fast and inevitable.
Most defensive thinking still relies on an outdated mental model: a human attacker who occasionally consults an AI tool to speed up discrete tasks. That model is no longer representative of high-impact threats.
What is happening instead is a shift in labor.
AI as Operator, Not Helper
At the upper end of the threat spectrum, AI is not an assistant embedded in a human workflow. It is an operator executing bounded tasks continuously, with humans supervising objectives and constraints rather than keystrokes.
This distinction matters.
An assistant accelerates human effort but remains gated by human attention, availability and decision latency. An operator replaces human effort for well-scoped work and runs without pause.
Across mature offensive operations, entire stages of the kill chain are transitioning from human-executed to machine-executed:
- Reconnaissance is no longer periodic. It is continuous, correlation-driven and multi-source. Asset changes, commits, configuration drift, leaked credentials and exposed services are fused into a live targeting model that updates around the clock.
- Exploitation is no longer precious. Payloads do not need to be handcrafted or conserved. They can be generated, tested, discarded and regenerated at scale until a viable execution path emerges.
- Lateral movement increasingly resembles a graph problem. Networks are treated as state spaces where identities, privileges and trust relationships are evaluated in parallel, not explored sequentially by a human operator. A sketch of that graph computation follows this list.
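The same computation is available to defenders, and running it is the clearest way to see why the framing matters. Below is a minimal sketch in Python: identities and hosts are nodes, privilege and trust relationships are directed edges, and a breadth-first search returns the shortest foothold-to-target route. Every node, edge and relationship name here is hypothetical; tools such as BloodHound build the real graph from directory and session data.

```python
from collections import deque

# Hypothetical identity/privilege graph. Nodes are identities or hosts;
# directed edges are relationships an operator can traverse (sessions,
# admin rights, cached credentials). All names are illustrative.
EDGES = {
    "user:intern":        [("host:workstation-7", "has_session")],
    "host:workstation-7": [("user:helpdesk", "cached_credential")],
    "user:helpdesk":      [("host:jump-box", "local_admin")],
    "host:jump-box":      [("user:svc-backup", "token_on_host")],
    "user:svc-backup":    [("host:domain-controller", "admin_to")],
}

def shortest_attack_path(start, target):
    """Breadth-first search over the trust graph: the first path found
    is the shortest lateral-movement route from foothold to target."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for neighbor, relation in EDGES.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [f"--{relation}--> {neighbor}"]))
    return None  # no route exists: the graph itself is the control surface

print(" ".join(shortest_attack_path("user:intern", "host:domain-controller")))
```

Because evaluating every foothold-to-target pair is this cheap, the durable defense is removing high-value edges (standing admin rights, cached credentials) rather than hoping to observe individual hops.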
None of this is speculative. It is the predictable outcome of removing human latency from procedural work. Much of this automation existed before modern AI; what has changed is the cost, accessibility and adaptability of running it continuously.
The Collapse of Time-Based Defense
Traditional security operations assume time as a defensive resource: time to detect, time to escalate, time to decide.
That assumption is no longer reliable.
AI-driven offense collapses dwell time not because machines are smarter than humans, but because they do not stop. Privilege escalation attempts run in parallel. Lateral movement begins immediately after access is gained. Failed paths are not investigated or debated; they are abandoned and replaced instantly.
By the time a human analyst reviews an alert, the environment may already be different. Credentials have rotated. Artifacts have mutated. The access path has shifted. Defensive processes designed around human decision loops struggle not because they are poorly staffed or under-tooled, but because they are structurally slower than the adversary they face.
Social Engineering Without Obvious Signals
Much defensive training still emphasizes spotting anomalies: typos, urgency, awkward phrasing or emotional manipulation. That model reflects a world where social engineering quality was constrained by human effort.
Modern AI-driven social engineering removes those constraints.
Successful attacks increasingly look mundane rather than suspicious. They rely on pattern replication, not psychological insight. Tone, timing, vocabulary and internal context can be synthesized to match organizational norms with high fidelity. These messages succeed precisely because they blend into routine workflows and expected communications.
The signal defenders are trained to look for is disappearing, not because defenders are inattentive, but because the attacks themselves now look normal.
The Economics of Near-Zero Marginal Cost
The most consequential shift is economic, not technical.
AI does not make attackers more skilled. It makes failure inexpensive.
When the marginal cost of an attempt approaches zero, persistence becomes effectively unbounded. Blocked domains are rotated automatically. Detected behaviors are abandoned without regret. There is no sunk cost to recover and no human fatigue to manage.
Many defensive controls still assume that repeated friction will cause an attacker to disengage. That assumption only holds when the attacker is expending human labor. Software does not tire, renegotiate priorities or move on because of inconvenience. Every failed attempt simply becomes another data point.
Build Defensive Operators, Not AI Assistants
Resilience in an operator-driven threat environment is not achieved by adding more analyst capacity or adopting “AI for SOC” features. It is achieved by redesigning defense so the default path from signal to action is machine-speed, policy-bounded and continuously enforced. The practical goal is to eliminate the attacker’s cheapest advantage — time — by ensuring routine detections trigger predetermined containment and verification steps automatically, with humans supervising exceptions and updating constraints rather than manually driving every decision.
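A small sketch makes "machine-speed, policy-bounded" concrete. The event classes, actions and thresholds below are assumptions for illustration, not any specific SOAR product's schema: routine, well-understood detections map to predetermined containment actions with explicit blast-radius and rate bounds, and anything outside the policy routes to a human exception queue instead of stalling the automated path.

```python
from dataclasses import dataclass

# Illustrative policy only: event classes, actions and bounds are assumptions.
# Routine detections act at machine speed; everything else escalates.
PLAYBOOK = {
    "impossible_travel_login": "revoke_sessions",
    "credential_stuffing":     "lock_account",
    "malware_beacon":          "isolate_host",
}

PROTECTED_ASSETS = {"domain-controller", "payment-gateway"}  # blast-radius bound
MAX_AUTO_ACTIONS_PER_HOUR = 50                               # rate bound

@dataclass
class Detection:
    event_type: str
    asset: str

def handle(detection, actions_this_hour):
    action = PLAYBOOK.get(detection.event_type)
    if action is None:
        return "escalate_to_human"   # unknown class: humans decide
    if detection.asset in PROTECTED_ASSETS:
        return "escalate_to_human"   # bounded action: never auto-touch crown jewels
    if actions_this_hour >= MAX_AUTO_ACTIONS_PER_HOUR:
        return "escalate_to_human"   # rate bound tripped: likely false-positive storm
    return action                    # routine case: no review queue in the path

print(handle(Detection("malware_beacon", "workstation-7"), actions_this_hour=3))
```

The design choice worth noting is that humans appear only on the escalation branch; the routine branch never waits for review.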
Here’s what resilient defenders do differently:
- Make identity the primary perimeter. Assume initial access is routine; harden authentication and session integrity: phishing-resistant MFA, device posture checks, conditional access, short-lived tokens, continuous session evaluation and rapid credential/session revocation (a minimal sketch follows this list).
- Continuously model exposure. Run defender-side “recon” as a first-class function: external attack surface monitoring, asset inventory accuracy, configuration drift detection, leaked credential discovery and prioritization tied to reachable paths, not CVSS alone.
- Contain lateral movement by default. Treat internal networks as hostile terrain: Enforce segmentation, restrict east-west traffic, tier admin access, remove standing privileges, harden AD/Azure AD/Okta trust paths and monitor/limit credential material exposure.
- Precompute decisions with policy, not people. Convert common incident classes into deterministic playbooks with bounded actions and rollback conditions. Humans set objectives and constraints; automation executes.
- Assume social engineering will look normal. Shift controls from “spot the phish” to workflow integrity: verified sender identity, enforced out-of-band verification for payment/credential changes, strong DMARC/SPF/DKIM posture and approval chains that are cryptographically and procedurally hard to spoof.
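To ground the first item above, here is a minimal sketch of short-lived tokens combined with continuous session evaluation. The 15-minute TTL, toy posture check and in-memory revocation set are assumptions for illustration; production deployments enforce this inside the identity provider rather than in application code.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60   # short-lived: a stolen token has a small replay window
REVOKED = set()               # rapid revocation, consulted on every request

def issue_token():
    return {"id": secrets.token_hex(16), "issued_at": time.time()}

def device_posture_ok(device):
    # Toy posture check; a real one queries MDM/EDR state.
    return bool(device.get("disk_encrypted") and device.get("os_patched"))

def evaluate_session(token, device):
    """Runs on every request, not just at login, so access dies mid-session
    when the token expires, is revoked, or the device drifts out of policy."""
    if token["id"] in REVOKED:
        return False
    if time.time() - token["issued_at"] > TOKEN_TTL_SECONDS:
        return False  # force re-authentication
    return device_posture_ok(device)

tok = issue_token()
device = {"disk_encrypted": True, "os_patched": True}
print(evaluate_session(tok, device))  # True: fresh token, healthy device
REVOKED.add(tok["id"])
print(evaluate_session(tok, device))  # False: revocation takes effect immediately
```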
Where the Real Divide Lies
The gap between resilient and vulnerable organizations is increasingly architectural rather than technological: It is a difference in operating model and tempo.
Attackers are already using AI as an operator — automating repeatable work continuously, with humans supervising objectives and constraints. Many defenders, by contrast, adopt AI as an assistant layered onto workflows still paced by human attention. Both sides may deploy similar models and tools; the divergence is in how those tools are embedded into the system.
An assistant amplifies human throughput. An operator changes system dynamics.
Adding AI to human-speed processes does not resolve the underlying latency mismatch. If the fastest decision-maker in the loop is still a person, the defender’s effective response rate is bounded by review queues, handoffs and escalation paths — constraints that do not bind an adversary running parallel attempts at near-zero marginal cost.
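The bound is easy to quantify. With deliberately invented numbers (the ratio is the point, not the values), a human-paced review pipeline facing a machine-paced adversary accumulates backlog without limit:

```python
# Deliberately invented numbers: the ratio is the point, not the values.
attacker_attempts_per_hour = 600   # parallel attempts at near-zero marginal cost
analyst_reviews_per_hour   = 12    # gated by queues, handoffs and escalation

backlog_per_hour = attacker_attempts_per_hour - analyst_reviews_per_hour
print(f"Unreviewed events accumulate at {backlog_per_hour}/hour.")

# Pre-authorized automation changes the equation: humans see only exceptions.
auto_handled_fraction = 0.95
residual = attacker_attempts_per_hour * (1 - auto_handled_fraction)
print(f"With routine cases automated, humans face {residual:.0f} exceptions/hour.")
```

Shrinking the residual that reaches humans, rather than trying to speed humans up, is the term in that equation defense actually controls.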
AI did not create new classes of attacks. It removed the practical limits on speed, scale and persistence. Defenses built around human limitations — reaction time, attention, fatigue — were never designed to face an opponent that operates continuously, learns from failure instantly and treats each blocked attempt as free feedback.
The threat model has shifted from episodic intrusion to continuous pressure. Resilient programs accept that shift and rebuild accordingly: They pre-authorize actions, automate the common cases and enforce policy at machine speed so routine attacker iteration triggers containment rather than escalation. This is not a tooling gap or a “mindset” slogan; it is a consequence of how the system is designed to observe, decide and act.
Published By: Chris Neuwirth, Vice President of Cyber Risk, NetWorks Group
Publish Date: December 17, 2025