We Could Have Seen It Coming

"The techniques have names. Prompt injection. Jailbreaking. Bias manipulation. Sponge-style denial of service. They work because most security tools don’t know what to look for. They weren’t designed to watch an AI model work."

We Could Have Seen It Coming
Photo Credit: ChatGPT

It started, as these things often do, on a Friday night.

The MSSP caught it first. An alert. Then another. Malware was spreading laterally across a healthcare network. Fast. Five facilities shared the same infrastructure. Containment meant shutting everything down. No appointments, no imaging, no records. When the CIO picked up the phone at 2 a.m., two hours had passed. By then, eighty percent of the server farm was gone. Patient services were offline. Millions of records had been stolen. A ransom note was waiting.

Working with clients after an attack is always difficult. There’s the technical clean-up, yes, but there’s also the weight of it. These are not careless people. They are IT leaders who spend their days propping up systems that others depend on to survive. They are under-resourced, over-extended, and somehow still expected to stop everything from falling apart. And when it does, they carry the guilt.

What’s often missed is how little it takes to launch the kind of attack that can knock over a healthcare system. The barrier is not high. It hasn’t been for years.

Ransomware-as-a-Service (RaaS) platforms began surfacing in 2015. At the time, they looked crude, early-stage, experimental. But what they signaled was a shift in control. You no longer needed to know how to write ransomware. You could subscribe to it. You could run a multi-stage extortion campaign with built-in payment infrastructure, affiliate dashboards, customer support. The internet’s largest criminal operation was born that year, and most of us didn’t notice.

We’re just starting to recover, deploying better tools: AI-enhanced detection, identity controls, just-in-time, just-enough access, segmented networks. We’re building response and continuity plans that assume compromise, and run tabletop exercises to practice resilience. But for many, these changes only come after the incident, or after insurance providers insist on them. The cyber insurance market, once eager to offer protection, has spent the last five years learning what systemic cyber risk actually costs.

What if you could go back to 2015? What if you could see those early ransomware kits for what they were, canaries in a coal mine?

What if you could prepare then, instead of living through a decade of escalating attacks, never knowing if you were next or if you were due for another round?

Now it’s happening again.

This time, the toolkits on GitHub aren’t for ransomware. They’re built to attack AI systems already in use. They target systems that are live, not in development. They slip through prompts and inputs, bypassing filters, nudging responses toward bias, overloading models until they collapse.

They are called runtime attacks, and they are built for the production environment. No malware is required. No vulnerability is exploited. The attacker only needs what your customers have: access.

The techniques have names. Prompt injection. Jailbreaking. Bias manipulation. Sponge-style denial of service. They work because most security tools don’t know what to look for. They weren’t designed to watch an AI model work. They were designed to stop code from misbehaving, not sentences.

And just like before, the tools are easy to use. You do not need to write exploits. You need a browser and a guide.

The systems under threat are not obscure. They are the Copilots that help your teams write, build, and troubleshoot. They are customer support bots, internal workflow engines, automated decision aids. They are embedded now. That’s what makes them powerful. And that’s exactly what makes them fragile.

You don’t get to go back to 2015. But you can see 2025 for what it is.

And this time, you might decide not to wait.

Subscribe to Cadence and Consequence

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.
jamie@example.com
Subscribe