Resilience Is Not Enough

“Attackers want us confused, reactive, and alone. And too often, our systems assume the same: that you’ll solve this by yourself.”

What AI-generated scams are teaching me about security and care

A friend had a stroke, then another. His memory never quite returned. He’d latch onto an idea, forget he’d already acted on it, and do it again. The order of things, what came first, what came next, had slipped.

But he still had his phone. His credit card. His independence.

His doctor hesitated when we asked whether it made sense to take those away. He had always been proud. Independent. Resilient. There was still a chance he might recover.

Taking those things would have felt like giving up. Like telling him he was no longer who he had been. That kind of decision comes last.

Someone found him. Pretended to be a friend. Then a friend in need.

He started sending gift cards. By the time we understood what was happening, it had already happened again. He sent more, sometimes without being asked. He forgot who had asked. Forgot he had already paid. What he remembered was that someone needed help.

That was when I stopped thinking this could be solved with education. Some people don’t need awareness. They need protection.

I. The Click

Phishing is supposed to be obvious. A fake invoice. A weird link. A message that doesn’t sound quite right. We imagine it as something you’d catch if you were just paying attention.

Phishing

noun

1. A form of social engineering where someone pretends to be someone you trust to get something you wouldn’t otherwise give.

2. The reason your inbox feels less like a communication tool and more like a psychological obstacle course.

So we built defenses around that version of the threat. Spam filters. Link scanners. Awareness posters in office kitchens. Hover before you click. Report suspicious activity. Take the training again next year.

There’s a growing body of evidence that questions whether the approach works at all. People get tired. Some tune it out. Others feel blamed. The simulations aren’t always realistic.

And we’re still acting like the user is the problem. We teach them to spot traps, but not to question why the traps keep working. We focus on click rates, not system design. We ask for vigilance, but give them inboxes full of noise. We put the burden of cybersecurity on individuals, then wonder why they fail.

Because phishing was never just about the message. It was about the moment. The moment you’re rushing. The moment you’re tired. The moment you’re scared. Or trying to help.

My friend wasn’t careless. He wasn’t reckless. He wanted to help, but he couldn’t always tell what was real. That’s what made him vulnerable. And now the messages don’t even sound wrong.

They sound right.

II. Enter AI

Spotting phishing messages used to be easier. The spelling was off. The formatting clumsy. The tone didn’t sound quite human.

Sometimes there was a Nigerian prince involved, and for some reason, he always needed help wiring money. Which was odd, since Nigeria has been a republic since 1963.

Once someone pointed that out, the whole thing fell apart. We laughed. We warned each other.

That was the defense: not better filters, but shared sense. Knowing what to look for, and who to talk to.

But those messages don’t show up as often anymore. They’ve been replaced by something harder to spot. Today, phishing emails don’t just look professional. They sound familiar. Like your colleague. Like your boss. Like you.

Large language models can mimic tone and context. Voice cloning tools can leave a voicemail that sounds exactly like someone you trust. Some attacks even arrive mid-conversation, replying to an email thread you’re already part of, when your guard is down.

Behind the scenes, machine learning models track what you click, when you respond, how you write, and use that to shape the message so it slips through. This isn’t brute force. It’s precision-crafted deception.

A Slack message from IT. An email from your bank. A video call from your CEO. Except it isn’t. It doesn’t need to fool everyone. Just one person. Just once.

And the stakes are higher now. Clicking the link used to be the end of the scam. Now it’s just the beginning. A foot in the door. Then data exfiltration. Then system encryption. Then a ransom note.

III. Beyond the Message

Phishing has changed. Our defenses are starting to catch up, and not just the filters, but the ways we make sense of these campaigns.

One of the models I’ve been thinking about lately comes from disinformation research. It’s called RICHDATA. It wasn’t built for phishing detection, but the fit is surprisingly good. It teaches you to watch how influence campaigns evolve, not as isolated attacks, but as adaptive systems.

Look closely, and you’ll see it: how scammers repeat their lures, how they borrow tactics from one con to test in another, how they shift just enough to stay one step ahead of the detection model. That’s what RICHDATA helps you see: not just the message, but the method.

And it’s part of a broader shift. For years, researchers focused on the obvious signs: the weird link, the mismatched address, the awkward phrasing. 

Now they’re asking a harder question: Why does this work?

New detection models are borrowing from persuasion science. They score messages for urgency, fear, emotional pull. They don’t just look for broken grammar. They look for influence tactics: the fake emergency, the plea for help, the abuse of authority.

It’s not just pattern-matching anymore. It’s modeling the mind games.
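To make the idea concrete, here is a deliberately simple sketch of what cue-based scoring might look like. The cue lists, categories, and scoring are my own toy assumptions for illustration; real detectors use trained models over far richer features, not keyword counts.

```python
# Toy illustration of persuasion-cue scoring.
# The phrase lists below are made-up examples, not from any published model.

CUES = {
    "urgency":   ["immediately", "right away", "before it's too late", "within 24 hours"],
    "fear":      ["account suspended", "unauthorized", "legal action", "final warning"],
    "authority": ["ceo", "it department", "compliance", "your bank"],
    "help_plea": ["need your help", "favor", "gift card"],
}

def score_message(text: str) -> dict:
    """Count how many cue phrases from each influence tactic appear in a message."""
    lower = text.lower()
    return {
        tactic: sum(1 for phrase in phrases if phrase in lower)
        for tactic, phrases in CUES.items()
    }

msg = ("This is your CEO. I need your help immediately -- "
       "buy a gift card before it's too late.")
print(score_message(msg))
# A message hitting several categories at once is the kind of signal
# these models flag, even when the grammar is flawless.
```

The point of the sketch is the shift in what gets measured: not spelling mistakes or broken links, but the shape of the pressure being applied.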

IV. The Shift

If these attacks work by preying on our psychology, then maybe our best defense isn’t just smarter technology. Maybe it’s smarter care.

We talk about teaching resilience to ambiguity, the skill of staying grounded when you’re being bombarded with so much contradictory information, you’re not sure what’s real. We also talk about teaching critical thinking, the habit of asking better questions before jumping to answers. We simulate phishing messages. We train pattern recognition. We encourage people to pause.

Sometimes it helps.

But then there’s the serial clicker, the person who opens suspicious links with the confidence of a golden retriever and the track record of a lab experiment. The person who is the reason your CISO now meditates.

We laugh. We nudge. We retrain. And eventually ask in exasperation: what else can we do?

But what if the person clicking… really can’t stop? What if they’re not careless, but cognitively vulnerable? What if they’re tired, grieving, distracted, or simply trying to help?

We often act like the user is the last line of defense, the most agile and responsive firewall, at once the greatest asset and the greatest liability. But that isolation is part of the problem.

Attackers want us confused, reactive, and alone. And too often, our systems assume the same: that you’ll solve this by yourself.

But what if we slowed down? What if we asked someone first? What if the instinct wasn’t just hover before you click, but talk before you act?

That used to be our best defense: not better filters, but shared sense. Knowing what to look for. Knowing who to talk to. We can build systems that make room for that again.

That was the point of sharing my friend’s story. Because we need to share these moments, without shame, so others can learn from them. What he needed wasn’t awareness. He needed a barrier. A stop. A community response, the kind of quiet, local decision that says: we look out for each other here.

And in his town, the corner store quietly became that response. The owner who sold him all those gift cards just quietly stopped selling them. No announcement. No blame. Just a decision: Not here.

That, too, is a form of security. Not reactive. Not punitive. Protective by design.
