
At RSA Conference 2026, security leaders put hard numbers on a problem that’s been building for years: adversaries are moving faster, AI agents are everywhere, and most of today’s security operations tooling still assumes humans are the ones driving activity.
CrowdStrike CEO George Kurtz told attendees that the fastest recorded adversary breakout time has now dropped to 27 seconds. On average, attackers need just 29 minutes to move laterally after initial compromise, down from 48 minutes in 2024. That shrinking window is all the time a defender has to detect and contain a threat before it spreads across the environment.
At the same time, CrowdStrike’s own data shows how quickly AI is reshaping the endpoint. According to the company, its sensors now detect more than 1,800 distinct AI applications running on enterprise endpoints, representing nearly 160 million unique application instances. Each of those applications generates detection events, identity events and data access logs that flow into SIEM platforms originally built for human-speed workflows and human-driven investigations.
While AI agents are rapidly appearing on endpoints and in workflows, most enterprises are still holding back from full-scale deployment. Cisco found that 85% of surveyed enterprise customers already have AI agent pilots underway. Yet only 5% have moved those agents into production, according to Cisco President and Chief Product Officer Jeetu Patel in an RSAC blog post.
That 80-percentage-point gap underscores a central concern: security teams cannot yet answer basic but critical questions about agent behaviour and accountability. The open issues include:
- Which agents are actually running across the environment?
- What are those agents authorized to do?
- Who is responsible when an agent acts in an unintended or harmful way?
Without clear answers, many organisations appear to be experimenting with AI agents in controlled pilots while hesitating to expose core production systems to the same tools.
Security complexity itself is becoming a top concern as AI tools multiply. “The number one threat is security complexity. But we’re running towards that direction in AI as well,” said Etay Maor, VP of Threat Intelligence at Cato Networks, speaking to VentureBeat during RSAC 2026. Maor, who has attended the conference for 16 consecutive years, warned that enterprises are layering on “multiple point solutions for AI,” creating what he described as the next wave of security complexity.
That complexity is amplified by a fundamental visibility issue: in most default logging configurations, activity initiated by AI agents looks the same as activity initiated by humans. As a result, security operations teams face an avalanche of events in which the origin, whether human user or autonomous agent, is obscured.
This lack of differentiation makes it harder to understand:
- Whether a risky action was a conscious human choice or the output of an automated agent
- How to attribute responsibility and enforce policies when behaviour crosses a line
- Which controls to adjust when an incident involves both humans and agents acting in the same identity context
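The differentiation problem can be made concrete with a small enrichment sketch. Suppose, hypothetically, that each log event carries an identity field and that agent-driven activity runs under known service accounts; a pipeline could then tag every event as human- or agent-originated before it reaches the SIEM. The schema, field names and the `svc-` naming convention below are illustrative assumptions, not any vendor's actual format:

```python
# Minimal sketch: tagging log events by initiator type before SIEM ingestion.
# The event schema and the known-agent identity list are illustrative
# assumptions, not a real product's logging format.

KNOWN_AGENT_IDENTITIES = {"svc-copilot", "svc-autoresponder"}  # hypothetical service accounts


def classify_initiator(event: dict) -> str:
    """Label an event as 'agent' or 'human' based on its identity field."""
    identity = event.get("identity", "")
    if identity in KNOWN_AGENT_IDENTITIES or identity.startswith("svc-"):
        return "agent"
    return "human"


def enrich(events):
    """Attach an 'initiator_type' field so downstream detection rules
    can treat agent activity differently from human activity."""
    for event in events:
        event["initiator_type"] = classify_initiator(event)
        yield event


events = [
    {"identity": "alice", "action": "file_read"},
    {"identity": "svc-copilot", "action": "file_read"},
]
tagged = list(enrich(events))
```

In practice the hard part is not the tagging logic but building and maintaining the inventory of agent identities in the first place, which is precisely the visibility gap the figures above describe.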
Against a backdrop where attackers need seconds to break out and minutes to spread, the combination of faster threats, AI-saturated endpoints and indistinguishable agent activity in logs is putting fresh pressure on security operations centres and the tools they depend on.