When Altman warns of a world-shaking cyberattack, it’s not hype—it’s a signal. The capability curve is outrunning preparedness, and the gap is widening fast.

By Futurist Thomas Frey

Sam Altman doesn’t rattle easily.

The man has spent years at the center of the most consequential technological development in human history, fielding questions about existential risk with the calm of someone who has thought about it longer and harder than most of his critics. So when he sits down for an Axios interview in early April 2026 and says that a “world-shaking cyberattack” this year is “totally possible,” it’s worth putting down whatever you’re doing and paying attention.

This isn’t hype. It isn’t positioning. It’s a warning from someone who sees the capability curve up close — and who understands that the gap between what these systems can do and what the world is prepared for is widening faster than most people realize.

What Changed, and When

For most of the history of cybersecurity, large-scale attacks required one of two things: a nation-state with the resources to field an elite hacking team, or a criminal organization with years of accumulated expertise and operational infrastructure. Both existed. Both caused significant damage. But they were constrained by the fundamental bottleneck of human skill: finding the right vulnerabilities, writing the right exploit code, and coordinating the right campaign all required people who had spent years developing rare capabilities.

AI has just removed that bottleneck.

What once required an elite team can now be automated or AI-assisted: vulnerability discovery, exploit generation, reconnaissance, highly personalized phishing in any language, malware that iterates to evade detection, and full attack chains that connect multiple exploits into a coordinated campaign. According to Red Canary, adversaries are already using large language models for 80 to 90 percent of tactical operations in espionage campaigns. IBM reported a 44 percent spike in public-facing application exploits in 2026, driven in significant part by AI-assisted attacks. Trend Micro has called this year “the AI-fication of cyberthreats.”

This is not a future threat. It is the current situation, and it is accelerating.

The Anthropic Model Nobody Gets to Use

The detail that sharpens all of this from interesting to genuinely alarming came from Anthropic just days ago.

The company has developed a frontier AI model — internally designated Claude Mythos Preview — that can autonomously identify and exploit thousands of high-severity vulnerabilities across every major operating system, every major web browser, and key enterprise software systems. Including zero-days: previously unknown vulnerabilities that no patch exists for, that defenders have no warning about, that an attacker armed with this capability could use before anyone knows they’re there.

Anthropic is not releasing this model publicly. They know exactly what it represents. Instead, they’re sharing limited access with cybersecurity firms through a program called Project Glasswing — a race against time to use the model’s offensive capability defensively, patching the vulnerabilities it finds before a bad actor with similar capability finds them independently.

Read that again. The AI company that built the model decided the responsible thing to do was not release it, and is instead running a controlled program to use its attack capability for defense. That’s a remarkable level of institutional seriousness about what this technology can do. It’s also a signal about where the capability frontier actually sits right now — not where people imagine it will be in five years, but where it is today.

AI removes limits on cyberattacks—scale, speed, and reach explode. Against aging, vulnerable infrastructure, the risk isn’t theoretical anymore. It’s already within reach.

The Scale Problem

Here’s what makes this different from every previous wave of cybersecurity concern.

Past attacks, even sophisticated ones, were constrained by human bandwidth. A team of hackers, however skilled, could only run so many campaigns simultaneously. They had to choose targets, allocate resources, manage operations. The attack surface they could cover at any given time was finite.

AI removes that constraint. A sufficiently capable model can scan massive codebases simultaneously, run parallel campaigns against multiple targets, generate exploit variants faster than detection systems can update their signatures, and do all of this continuously without fatigue. The attack surface that a nation-state or well-resourced criminal organization can cover with AI assistance is orders of magnitude larger than what was possible before.

Altman’s specific concern — a coordinated disruption of critical infrastructure, finance, or supply chains — is the scenario that keeps defense experts up at night. Not because it requires some theoretical future capability, but because the capability to attempt it exists right now, and the systems it would target were largely not designed to withstand this kind of assault.

Defense expert John Arquilla, responding to Altman’s warning, called the risks “certainly real” and pointed to something that doesn’t get enough attention: our baseline cybersecurity is already poor. Most of the infrastructure that runs critical systems — power grids, water treatment, financial networks, healthcare systems — runs on software that is old, under-maintained, and riddled with vulnerabilities that haven’t been patched because the organizations running these systems don’t have the resources or the urgency to patch them. Add AI-assisted offensive capability to that landscape and the arithmetic gets uncomfortable very quickly.

The Arms Race Is Already On

The one genuinely encouraging part of this picture is that defenders are using AI too.

Anomaly detection that would have taken human analysts days to surface is now happening in near real time. Automated patching systems are closing vulnerabilities faster than before. The same capability that makes offensive AI powerful also makes defensive AI more capable — scanning environments for weaknesses, identifying unusual patterns, responding to incidents faster than any human team could.
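The core idea behind that kind of defensive anomaly detection can be sketched very simply. The following is a minimal, purely illustrative example, not any vendor's actual detection logic: it flags time windows whose event counts deviate sharply from a recent baseline, the statistical shape a sudden automated campaign tends to produce. The data, thresholds, and window sizes are all hypothetical.

```python
# Illustrative rate-based anomaly detection: flag time windows whose event
# counts are statistical outliers relative to the recent baseline.
# Thresholds and data here are hypothetical, chosen for clarity only.
from statistics import mean, stdev

def anomalous_windows(counts, z_threshold=3.0, baseline=24):
    """Return indices of windows whose count is an outlier
    relative to the preceding `baseline` windows (z-score test)."""
    flagged = []
    for i in range(baseline, len(counts)):
        window = counts[i - baseline:i]
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hourly failed-login counts: steady background noise, then a burst
# of the kind an automated, AI-driven campaign might generate.
hourly = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5, 7, 6,
          5, 4, 6, 5, 6, 7, 5, 4, 6, 5, 6, 5,
          240]
print(anomalous_windows(hourly))  # the burst at index 24 is flagged
```

Real defensive systems layer far more signal on top of this, but the asymmetry point stands: even a crude statistical baseline catches the volume spikes that automation creates, which is why both sides are racing to build better versions of the same idea.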

But here’s the honest assessment: right now, the offense has the advantage. Attacking is inherently easier than defending. An attacker needs to find one way in; a defender needs to close every way in. AI amplifies that asymmetry. The attacker’s AI is scanning your entire surface looking for one opening. Your defensive AI is trying to monitor the entire surface at once. In a resource-constrained environment, which describes most organizations, offense wins more often.

That gap will close. The tools are improving on both sides. But the window we’re in right now, before defensive AI catches up to offensive AI at scale, is the window Altman is worried about. It’s the window Anthropic is running Project Glasswing to address. It’s the window that cybersecurity reports from IBM, Red Canary, PwC, Trend Micro, and Health-ISAC are all, independently, identifying as the highest-risk period in the history of digital infrastructure.

AI threats aren’t inevitable—they’re manageable. Basic security now carries real weight. What was best practice yesterday is mission-critical today. The difference is urgency, not possibility.

What This Actually Means

There is a version of this conversation that slides into fatalism — the technology is too powerful, the surface is too large, the bad actors are too motivated, nothing can be done. That version is wrong, and it’s counterproductive.

What can be done at the individual and organizational level is real and meaningful. Strong multi-factor authentication. Network segmentation that limits the blast radius of any single breach. AI-aware monitoring that looks for the behavioral signatures of AI-assisted attacks, which are different from the signatures of human-operated ones. Vulnerability management programs that treat patching as a continuous function rather than a periodic maintenance task. Tabletop exercises that game out the specific scenarios — coordinated infrastructure attack, supply chain compromise, simultaneous multi-vector campaign — that AI capability makes more plausible.
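The shift from periodic to continuous patching, in particular, can be made concrete with a small sketch. This is an illustrative example only: the hostnames, CVE identifiers, severities, and SLA windows are all hypothetical, standing in for whatever policy an organization actually sets. The idea is simply that every open finding is scored against a severity-based deadline every day, rather than reviewed in a quarterly sweep.

```python
# Sketch of patching as a continuous function: score every open finding
# against a severity-based deadline and surface the most overdue first.
# Hostnames, CVE IDs, and SLA windows below are hypothetical examples.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}  # assumed policy

def overdue(findings, today):
    """Return (host, cve, days_overdue) for findings past their SLA,
    most overdue first."""
    out = []
    for host, cve, severity, found_on in findings:
        deadline = found_on + timedelta(days=SLA_DAYS[severity])
        if today > deadline:
            out.append((host, cve, (today - deadline).days))
    return sorted(out, key=lambda row: -row[2])

findings = [
    ("web-01", "CVE-2026-0001", "critical", date(2026, 3, 1)),
    ("db-02",  "CVE-2026-0002", "medium",   date(2026, 1, 15)),
    ("app-03", "CVE-2026-0003", "high",     date(2026, 4, 1)),
]
for row in overdue(findings, today=date(2026, 4, 20)):
    print(row)  # critical finding 43 days overdue tops the list
```

Run daily against a live vulnerability feed instead of a hard-coded list, this is the skeleton of a continuous vulnerability management loop: nothing ages silently, and the queue is always sorted by how far past its deadline each exposure sits.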

None of that is new advice. What’s new is the urgency. The same recommendations that were good practice last year are now load-bearing. The organizations that treated basic cybersecurity hygiene as optional or aspirational are carrying real and growing risk.

At the policy level, the conversation about AI governance, vulnerability disclosure, and international norms around AI-enabled offensive capability needs to move faster than it has been. Altman is pushing for exactly this. The Anthropic approach with Project Glasswing — coordinated defensive disclosure before offensive capability spreads — is one model. It won’t be sufficient at scale, but it’s a serious attempt to use the technology responsibly in a moment when responsible use is genuinely difficult to define.

The Bottom Line

Sam Altman said a world-shaking cyberattack is totally possible this year. Anthropic built a model capable of finding vulnerabilities across every major operating system and decided not to release it. IBM, Red Canary, and Trend Micro are all saying the same thing from the outside that the AI labs are saying from the inside.

The window is open. The capability exists. The baseline defenses are insufficient.

That’s not a reason to panic. It’s a reason to move. The organizations and governments that treat this as a high-priority operational reality right now — not a planning exercise, not a future scenario — are the ones that will be in a defensible position when the window either closes or something comes through it.

The threat is real. The preparation is optional.

For now.

Related Reading

IBM X-Force Threat Intelligence Index 2026

IBM Security — The most comprehensive annual analysis of the current threat landscape, including detailed data on the role AI is playing in accelerating attack capability across industries.

AI and the Future of Cyber Conflict

RAND Corporation — A rigorous examination of how AI is reshaping the balance between offensive and defensive cyber capability, and what the policy implications are for governments and critical infrastructure operators.

The Defender’s Dilemma: Why Cyber Defense Is Structurally Harder Than Offense

Brookings Institution — An honest accounting of why the attack-defense asymmetry in cybersecurity is real, persistent, and now being amplified by AI — and what it would actually take to change it.
