
AWS has detailed a sweeping cyber campaign in which attackers used off-the-shelf generative AI tools to compromise more than 600 internet-exposed FortiGate firewalls across 55 countries in just over a month.
The activity, which ran from mid-January to mid-February, shows how generative AI is helping relatively low-skilled, financially motivated groups automate attacks that once required larger, more experienced teams.
According to AWS’s incident report, the Russian-speaking group behind the campaign did not rely on novel zero-day exploits. Instead, they took a volume-based approach: scanning the internet for FortiGate management interfaces, testing weak or commonly reused credentials, and then moving quickly once they gained access.
Once a firewall was breached, the attackers pulled configuration files that held sensitive details, including:
- Administrator and VPN credentials
- Network topology information
- Firewall rules
Those configuration files effectively served as a roadmap to victim environments. Using that insight, the group pushed deeper into networks, targeting systems such as Active Directory, harvesting more credentials, and looking for lateral movement paths. Backup platforms, including Veeam servers, were also among the systems they sought to access.
AWS says the group leaned on multiple commercial generative AI tools throughout this process. The tools were reportedly used to generate attack playbooks, scripts, and operational notes, indicating that AI was integrated across the workflow rather than used just for occasional code snippets.
Investigators found evidence of AI-generated code and planning artifacts on compromised infrastructure. The nature of the tooling suggested that the campaign’s sophistication was less about elite human development skills and more about what could be quickly assembled with AI assistance.
“The volume and variety of custom tooling would typically indicate a well-resourced development team,” said CJ Moses, CISO at Amazon. “Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.”
AWS notes that the custom tools observed in the incident were functional but far from polished. Parsing logic was described as simplistic, and the code contained redundant comments that pointed to a machine-generated first draft. Despite that, the automation was effective enough to drive a broad campaign across dozens of countries.
The attackers appeared to favour speed and breadth over persistence. When they encountered defences that made progress difficult, they often abandoned those targets and moved on to easier ones. This behaviour underscores the opportunistic nature of the campaign: the goal was to compromise as many exposed systems as possible with minimal effort per target.
The geographic spread was wide and not sharply focused on any particular country or sector. Victims were scattered across parts of Europe, Asia, Africa, and Latin America. AWS observed clusters of activity that may point to compromises of managed service providers or shared environments, raising the possibility of amplified downstream impact when a single breach opened doors to multiple customer networks.
While the attack’s AI angle is notable, the defensive guidance from AWS centres on long-standing fundamentals. The report stresses that relatively basic security hygiene could have blocked much of the campaign:
- Keeping management interfaces, such as firewall admin consoles, off the public internet
- Enforcing multi-factor authentication (MFA) for administrative access
- Avoiding password reuse and weak credentials
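The first of those recommendations is easy to self-audit. As a minimal sketch (host addresses and the port list are placeholders, not drawn from the AWS report), the snippet below checks whether common firewall admin ports on your own public-facing addresses answer TCP connections from the outside:

```python
# Quick self-audit sketch: flag management ports reachable on your own hosts.
# The host list and ADMIN_PORTS below are illustrative placeholders --
# substitute your own inventory and the ports your devices actually use.
import socket


def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Ports commonly used for firewall admin consoles and SSL-VPN portals
# (assumed defaults; confirm against your own device configuration).
ADMIN_PORTS = [443, 8443, 10443]


def audit(hosts):
    """Yield (host, port) pairs where a management port answers."""
    for host in hosts:
        for port in ADMIN_PORTS:
            if is_port_open(host, port):
                yield host, port


if __name__ == "__main__":
    # 192.0.2.10 is a documentation-range placeholder address.
    for host, port in audit(["192.0.2.10"]):
        print(f"exposed admin port: {host}:{port}")
```

Run from a network outside your perimeter, any hit from a script like this is exactly the kind of exposed interface the campaign's scanning phase was built to find.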
Because the attackers relied heavily on exposed interfaces and known-weak authentication practices, organisations that had already locked down access paths and enforced stronger identity controls would likely have been far less attractive targets.
The incident also fits into a broader pattern highlighted by other major providers. AWS’s findings come only weeks after Google warned that criminals are increasingly wiring generative AI directly into their operations, including using tools like its Gemini chatbot for reconnaissance, target profiling, phishing, and elements of malware development.
Taken together, these reports suggest that generative AI is becoming part of the standard toolkit for cybercrime groups, lowering the barrier to entry for complex, multi-step operations and allowing small teams or even single operators to run wide-reaching campaigns.