AI-Orchestrated Cybersecurity Threats

Why everyone was expecting an AI-driven cyber-attack, and the lessons learned from how it unfolded.


Understanding AI-driven cyber threats like the GTG-1002 attack

In November 2025, Anthropic publicly disclosed the first documented large-scale cyber intrusion executed primarily by an autonomous artificial intelligence (AI) system.1 A Chinese state-sponsored operator—designated GTG-1002—manipulated Anthropic's Claude Code into conducting approximately 80–90% of a multi-target espionage campaign against roughly 30 organizations, including major technology firms, financial institutions, chemical manufacturers, and government agencies.

While only a handful of intrusions fully succeeded, the campaign demonstrated machine-speed orchestration that no human team could match.

This was not science fiction. It was an operational reality—and it represents a fundamental shift in the cybersecurity threat landscape.

How they did it

GTG-1002 used no novel malware, no zero-days, no custom tools—only publicly available penetration testing frameworks automated at machine speed.2 The campaign demonstrated that attackers have adopted the Centaur paradigm,3 which combines human strategic oversight with AI tactical execution. This human-AI teaming model, originally developed in competitive chess where amateurs with computers defeated grandmasters, provides a decisive advantage over pure-human operations.

This created two AI-powered attack classes:

  1. Autonomous Kill Chain Orchestration (AKO): AI sequenced thousands of micro-tasks across 30+ targets into a coherent, multi-stage intrusion campaign, compressing work that would take human teams weeks into hours. Human operators intervened at only 4–6 critical decision points per campaign. Everything else happened autonomously at machine speed.
  2. Cognitive Exploitation of AI Systems (CEAS): Attackers manipulated Claude through social engineering, not code vulnerabilities. They convinced the AI it was performing legitimate penetration testing. CEAS exploits will proliferate because they require no advanced technical skills, only a basic understanding of how AI systems interpret context and intent.

Together, these represent a structural change in the threat landscape. Because of this shift, mean time to detection (MTTD), mean time to validation (MTTV), and mean time to response (MTTR) must now approach machine speed across multiple detection surfaces. Defenders who remain in pure-human operational models face a structural disadvantage.

Why this AI-driven cyber threat worked

Cybersecurity principles did not fail—execution did. The attack succeeded because:

  • Accounts had excessive privileges
  • Networks were flat and unmonitored
  • Data was neither classified nor protected
  • APIs were exposed by default
  • Patching was too slow
  • Trust was implicit rather than verified

GTG-1002 succeeded not by breaking the rules but by exploiting organizations that never properly implemented them.

When AI-powered attacks are executed at machine speed across dozens of targets simultaneously, there is no time for manual intervention, no room for human detection, and no second chances. Defense must be autonomous, architectural, and absolute. Organizations that properly implemented least privilege, micro-segmentation, data classification, and defense-in-depth were not breached—or if they were, blast radius was minimal.
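As a minimal illustration of the least-privilege gap exploited here, the sketch below compares privileges granted to service accounts against privileges actually exercised over an audit window. The account names, privilege strings, and usage data are hypothetical; a real audit would pull both sets from your IAM system and access logs.

```python
# Hypothetical example: flag privileges granted but never used.
GRANTED = {
    "svc-backup": {"read:db", "write:archive", "admin:network"},
    "svc-report": {"read:db"},
}

# Privileges each account actually exercised (from access logs).
USED = {
    "svc-backup": {"read:db", "write:archive"},
    "svc-report": {"read:db"},
}

def excess_privileges(granted, used):
    """Return, per account, privileges that were granted but never exercised."""
    return {
        account: sorted(rights - used.get(account, set()))
        for account, rights in granted.items()
        if rights - used.get(account, set())
    }

print(excess_privileges(GRANTED, USED))
# {'svc-backup': ['admin:network']}
```

An unused `admin:network` grant on a backup account is exactly the kind of excessive privilege that lets a machine-speed intrusion pivot across a flat network.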

New defensive capabilities required

AI-driven offense is built on orchestration; traditional defense is built on isolation. Closing that gap requires five new capabilities:

  1. Correlated detection across multiple surfaces: The defensive answer to AI orchestration is to correlate disparate security signals at machine speed to detect coordinated attack patterns. This requires detection across five critical surfaces: identity, endpoint, network, deception, and data access.
  2. Automated response at machine speed: Response must match the velocity of AI-powered attacks: fast detection, immediate validation, automated containment, identity-based action gates, and consolidated visibility.
  3. AI agent identity isolation: Traditional identity systems do not distinguish between human users and AI agents. Treat AI agents as a separate identity class with fundamentally different risk profiles and privilege models. AI agents are tools, not users, and tools should have narrow, explicit permissions, not broad, implicit trust.
  4. Prompt-layer monitoring: Organizations deploying AI agents must monitor the cognitive layer. You cannot secure what you cannot see; if AI agents operate in a cognitive black box, you cannot detect when they are being manipulated.
  5. Deception optimized for AI adversaries: AI performs systematic, comprehensive enumeration but lacks the human intuition to recognize traps. Make the attacker's enumeration work against them: deploy honeytokens in configuration files, decoy infrastructure at scale, and behavioral canaries that trigger on actions legitimate users never take. Ensure deception exists at every layer.
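The first capability above, correlating signals across surfaces, can be sketched as follows: group events by the entity that produced them and alert when one entity trips several distinct surfaces inside a short time window. This is a minimal illustration, not a production detector; the surface names, events, window, and threshold are hypothetical.

```python
# Hypothetical example: alert when one entity hits several detection
# surfaces (identity, endpoint, network, deception, data access) within
# a short time window — the signature of orchestrated, machine-speed activity.
from collections import defaultdict

WINDOW_SECONDS = 60
THRESHOLD = 3  # distinct surfaces tripped before alerting

events = [
    # (timestamp, surface, entity)
    (100, "identity",  "agent-7"),
    (112, "network",   "agent-7"),
    (130, "deception", "agent-7"),   # honeytoken touched
    (400, "endpoint",  "analyst-2"),
]

def correlate(events, window=WINDOW_SECONDS, threshold=THRESHOLD):
    """Return entities that hit >= threshold distinct surfaces within one window."""
    alerts = set()
    by_entity = defaultdict(list)
    for ts, surface, entity in sorted(events):
        by_entity[entity].append((ts, surface))
        recent = {s for t, s in by_entity[entity] if ts - t <= window}
        if len(recent) >= threshold:
            alerts.add(entity)
    return alerts

print(correlate(events))  # {'agent-7'}
```

A single analyst touching one surface stays below the threshold; an orchestrated agent sweeping identity, network, and deception surfaces in seconds does not.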

These are not theoretical capabilities; they are operational necessities. GTG-1002 occurred in September 2025.4 Organizations without these capabilities are currently vulnerable to AI-orchestrated attacks. The question is not whether to implement them but how fast deployment can occur before the next campaign.

Implications for leadership

For Chief Information Security Officers (CISOs) and security executives, the Centaur paradigm demands a fundamental reconceptualization of security operations.

  • Start optimizing for human-AI team effectiveness. The metric is not alerts processed per analyst, but threats contained per unit time.
  • The quality of dashboards, playbooks, escalation workflows, and feedback mechanisms determines competitive outcomes more than raw AI capability or analyst expertise.
  • Analysts should not be monitoring alerts—AI should. Analysts should be training AI, designing processes, and making strategic decisions that require human judgment.
  • Develop metrics for human-AI integration effectiveness: override rates, feedback loop latency, escalation accuracy, and containment velocity.
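The integration metrics named above can be computed from an incident log along these lines. This is a toy sketch; the field names, units (seconds), and values are hypothetical, and a real pipeline would read from your SOC's case management system.

```python
# Hypothetical example: compute override rate, containment velocity, and
# feedback-loop latency from a toy incident log (all timestamps in seconds).
from statistics import mean

incidents = [
    {"override": False, "detected": 0, "contained": 45,  "feedback": 60},
    {"override": True,  "detected": 0, "contained": 300, "feedback": 320},
    {"override": False, "detected": 0, "contained": 30,  "feedback": 90},
]

# Share of AI-initiated actions a human analyst overrode.
override_rate = mean(1 if i["override"] else 0 for i in incidents)
# Average time from detection to containment.
containment_velocity = mean(i["contained"] - i["detected"] for i in incidents)
# Average lag between containment and analyst feedback to the AI.
feedback_latency = mean(i["feedback"] - i["contained"] for i in incidents)

print(f"override rate:         {override_rate:.0%}")
print(f"containment velocity:  {containment_velocity:.0f}s")
print(f"feedback loop latency: {feedback_latency:.0f}s")
```

Tracked over time, a falling override rate with stable escalation accuracy signals that the human-AI team is maturing; a rising feedback latency signals the loop is breaking down.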

The best security teams of the future will not be the ones with the most sophisticated AI or the most experienced analysts. They will be the ones with the best process for combining human judgment and machine capability. To relieve some of the baseline pressures CISOs face, the following questions can help leaders prepare for the next machine-speed campaign.

Six critical questions for CISOs

  1. Can we detect when AI agents are weaponized against our infrastructure?
  2. Are MTTD, MTTV, and MTTR measured in seconds rather than hours?
  3. Do detection systems correlate signals across all surfaces in real time?
  4. Are human and AI privileges fully isolated with distinct policy enforcement?
  5. Have we tested defenses against AI-powered attack patterns?
  6. Have we quantified the business impact of a machine-speed breach to our critical assets?

Actions for CEOs and boards to address AI-driven cyber threats

  • Treat AI-driven cyber threats as a business model shift, not a product feature. Prioritize identity, customer data, and operational technology.
  • Demand "Centaur-ready"5 security roadmaps with workflow redesign: Playbooks, dashboards, and human-AI teaming—not just more tools.
  • Set explicit risk tolerances for automated action before the next campaign. Define which actions AI systems can take without human approval.
  • Update merger and acquisition (M&A) due diligence to demand evidence for "AI-powered security" claims on tempo, integration, and demonstrated failure handling.

The path forward

Organizations that recognize this paradigm shift and act decisively will achieve resilience. The path requires:

  • Implementing fundamentals at machine speed: Least privilege, micro-segmentation, data classification, and defense-in-depth executed with AI-enforced automation.
  • Deploying AI-era capabilities: Correlated detection, autonomous response, AI identity isolation, prompt monitoring, and deception surfaces.
  • Architecting defensive Centaur systems: Human strategic judgment combined with AI tactical execution through optimized processes.

This is not a call for panic but for strategic clarity. Organizations that hesitate will find themselves outmatched, not by superior opponents but by superior human-machine integration. Artificial intelligence did not rewrite the rules of cybersecurity; it accelerated the consequences of ignoring them.

Future attack prevention through the human-AI approach

Critical thinking and basic digital fundamentals remain vital to defending against cyber-attacks, but they must now be implemented at machine speed. To make a real difference, a team must master the human-AI approach, bringing the creativity and intuition needed to anticipate and counter adversaries who have already embraced these technologies.

Within 24 months, leading organizations will operate Defensive Centaur SOCs where AI handles detection and containment at machine speed while humans focus on strategy, threat hunting, and process optimization.

CAI cybersecurity management services are a trusted way to focus on prevention, preparation, and recovery from cybersecurity incidents and AI-powered attacks. Our guidance can help you develop a custom incident response plan (IRP) that fits your needs.

To learn more about how CAI can help your organization, fill out the form below.


Endnotes

  1. Anthropic. "Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign." November 2025. https://www.anthropic.com/news/disrupting-AI-espionage.
  2. Google Threat Intelligence Group. Zero-Day Exploitation Trends 2024: GTIG Annual Report. 2025. https://cloud.google.com/blog/topics/threat-intelligence/2024-zero-day-trends.
  3. Kasparov, G. Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. PublicAffairs, 2017. Chapter 11, "The Centaur."
  4. "Coverage of GTG-1002 AI-Orchestrated Cyber Campaign." Wall Street Journal, November 23, 2025. https://www.wsj.com/opinion/the-first-large-scale-cyberattack-by-ai-4a1e1a30.
  5. CSIRO. "Cyber Centaurs: A New Weapon in the War Against Cyber Attacks." Collaborative Intelligence Future Science Platform, December 2022. https://www.csiro.au/en/news/All/Articles/2022/December/collaborative-intelligence-for-cybersecurity.

Let's talk!

Interested in learning more? We'd love to connect and discuss the impact CAI could have on your organization.

