Saturday, February 14, 2026

AI Scams: Why the Old Rules Don't Work Anymore (And What Does)

Years ago, when I was in college, I was at my dad's house when a piece of mail arrived announcing he'd won the lottery. I read it carefully. I was convinced. I actually called him at work and told him to come home because he'd won. I'm glad this memory isn't a painful one. I don't think he was mad, just amused that a college kid could fall for something so transparent.

But the scams we're facing today aren't transparent at all. They're not the ones we were trained to recognize (the bad grammar, the foreign princes, the stranded traveler, and assorted sketchy emails). This is a new generation of fraud, powered by artificial intelligence, that does something that was impossible even a couple of years ago: it can convincingly impersonate the people you know and love.

And that changes everything.

The Call That Changed the Conversation

In January 2023, Jennifer DeStefano was sitting in her car outside a dance studio in Scottsdale, Arizona. Her 15-year-old daughter Brianna was on a ski trip with her dad. When an unknown number rang, Jennifer answered--something she might not normally have done, but her daughter was traveling.

She heard her daughter's voice: "Mom, I messed up."

The voice was panicked. Jennifer knew it was her daughter. Then a man came on the line, speaking roughly, and demanded a million dollars or he would drug and rape her child.

The panic Jennifer felt in that moment is something any parent can imagine. Fortunately, through a chain of quick thinking by others around her, someone managed to call the ski resort and reach Brianna, who was perfectly safe and confused about what was happening. But that scenario--the terror of hearing your child's cloned voice begging for help--is exactly the kind of attack that's now industrially scalable.

Jennifer testified before the Senate. Her story made national news. And it resonated so deeply because we all suspect the same thing: if that had been us, we can't be sure we could have thought clearly enough to verify before acting.

When Even Video Can't Be Trusted

Jennifer's story involves voice cloning, but the threat doesn't stop there. In 2024, a finance employee at Arup (the global engineering firm behind the Sydney Opera House) received a message that appeared to be from the company's CFO in London. The message described a confidential deal requiring urgent fund transfers.

The employee was initially suspicious. It looked like phishing. But then he was pulled into a video call where he saw the CFO and several other executives on screen, looking normal, chatting naturally, discussing the deal. So he complied, wiring fifteen separate payments totaling $25 million to five bank accounts in Hong Kong.

It was only after the transfers that he checked with head office and discovered that none of those people had actually been on the call. Every face on that screen was a deepfake.

Arup's public statement afterward was telling: "Our systems weren't hacked. Human trust was hacked."

That's the core of what we need to understand.

This Isn't About Being Careless

The numbers are staggering. Reported losses from these scams reached $16.6 billion in 2024, with estimates projecting $40 billion annually by 2027. It's believed that one in four people has either been scammed, experienced an attempted scam, or knows someone who has. One in four spam calls now uses an AI-generated voice rather than a human one. And grandparent scams, where someone impersonates a grandchild in distress, are among the fastest-growing categories.

But here's what matters most: this isn't about victims being careless or uninformed. This is about technology that exploits how our brains are wired.

Our brains evolved over hundreds of thousands of years for small tribal living, where trusting familiar voices and faces was essential for survival. We're fundamentally wired to believe what we hear from people we recognize. That's not a bug, it's a feature. Trust within a group kept our ancestors alive.

The problem is that these evolved features weren't designed for an era when a three-second audio clip can be used to clone your voice, or when real-time deepfake video can put a convincing replica of your boss on a Zoom call.

When the emotional, fear-driven part of the brain gets hijacked, it floods the body with stress hormones and the rational mind shuts down. This is by design. It's what enabled our ancestors to react instantly to threats. But scammers know this. They deliberately create panic to prevent you from thinking clearly. They exploit our authority bias (we defer to bosses and officials), our protective instincts (especially toward children and grandchildren), and our social conditioning to comply with urgent requests.

These are features of human psychology that worked beautifully for hundreds of thousands of years. They just weren't designed for this level of impersonation.

From Detection to Verification

Here's the shift in thinking that underlies everything: we have to move from detection to verification.

The old approach was about spotting fakes--about looking for bad grammar, generic greetings, suspicious signs. The new reality is that those tells are gone. The greeting will use your name. The voice will sound exactly like your child. The email will match your boss's communication style across multiple exchanges.

So instead of trying to spot what's fake, we need to confirm what's real through channels that scammers can't control. And because these scams work by hijacking our ability to think clearly, our defenses can't rely on making good decisions under pressure. We need them to be automatic.

Four Protocols That Actually Work

These defenses aren't technology-based. You won't need to run video through an AI detection program. These are simple, human protocols based on understanding how your brain works and what to do when it's been compromised.

The Safe Word Protocol. This is the single most important defense. Establish a secret verification phrase known only to your immediate family. It can come from a shared memory, an inside joke, or a random funny phrase you'll all remember. "Dancing pink elephant." Whatever it is, it should never appear on social media, never get recorded anywhere, and be impossible for an outsider to guess. If someone calls claiming to be your child or grandchild, ask for the safe word. If they can't provide it, you know it's not them.

The Callback Protocol. When you receive a suspicious call, hang up and call back on a verified number, like your daughter's cell phone, your husband's number, or your boss's direct line. This is hard because scammers create enormous time pressure, but it's devastatingly effective. They can only control the channel they've initiated. They can't intercept your outbound call to a known number.

"Out-of-Band" Verification. Any request involving money gets confirmed through a separate, independent channel. If your boss emails asking you to wire funds, don't reply to the email, but call him or her directly. If a grandchild calls saying they need money, hang up and call their parents. This is what the financial community calls the "four eyes principle:" multiple independent checks on any transaction. No single person should authorize a large payment based solely on one communication. You seen this when you go to the bank, for good reason.

The Two-Minute Rule. Any urgent request involving money or sensitive information gets two minutes of pause before you comply. This sounds almost impossibly short, but it's enough. Two minutes of deliberate breathing and thinking allows the prefrontal cortex to come back online, and you start asking the questions that unravel the scam. If something can't wait two minutes, that itself is a massive red flag.

Teaching Others Without Creating Shame

If you're an educator, librarian, or someone who works with the public, there's a critical dimension to how you share this information: shame is the enemy of protection.

Most adults, especially older adults, have absorbed a narrative that scam victims are foolish or careless. This shame prevents people from learning, from reporting, and from seeking help. Estimates suggest only one in ten scams is actually reported.

When you teach this material, lead with the neuroscience. Explain that these scams exploit evolved brain mechanisms that no one can simply override through willpower. Tell Jennifer DeStefano's story. Help people understand that falling victim doesn't mean being stupid; it means being human.

For seniors, this is especially important. The grandparent-grandchild relationship is uniquely vulnerable because there's often less daily communication combined with an enormous emotional desire to help. Make sure older adults in your life have established safe words with their children and grandchildren, understand the callback protocol, and have these four steps written down somewhere accessible.

The appropriate emotional response to being scammed is anger at the criminals, not shame at being targeted.

When the Worst Happens

Despite our best efforts, some people will still fall victim. If it happens, the first two hours are the golden window.

Act immediately: contact financial institutions to freeze funds, change passwords starting with email, and document everything while details are fresh. File reports with the FBI's Internet Crime Complaint Center (ic3.gov) and the FTC (reportfraud.ftc.gov). For significant amounts, file a local police report as well.

Be honest about recovery expectations. Wire transfer recovery rates are approximately 8 to 12 percent. For cryptocurrency, it's closer to 2 percent. These numbers are painful, but people need realistic expectations so they can focus energy on emotional healing rather than holding out false hope.

And if someone comes to you after being victimized--a patron, a student, a family member--lead with compassion. This wasn't their fault. Emotional recovery and financial recovery are separate processes, and both matter.

Your 30-Minute Protection Protocol

Everything covered here comes down to a simple commitment you can make today.

The old rules were about detection. The new rules are about verification. AI can clone voices and faces, but it can't access your safe word. Urgency is always a weapon; verification is always the defense. And the protocols that protect you are the ones that work even when you can't think clearly.

Before you go to bed tonight, establish a safe word with your family. One phone call or one group text is all it takes to start. Then share what you've learned. Every person you reach is one more person protected from what has become the fastest-growing form of fraud in history.

The rules have changed. Now you have the new ones.

Tuesday, February 10, 2026

WHAT YOU NEED TO KNOW ABOUT AI: The Library 2.0 2026 "AI and Libraries" Overview on February 17th (and recording information)

What You Need to Know About AI
The Library 2.0 2026 "AI and Libraries" Overview: Where We Are Now

A 1-hour Free Webinar with Crystal Trice

OVERVIEW:

Artificial intelligence is changing faster than most of us can keep up with. If you work in libraries, you've probably wondered what's real and what's hype, or what any of this means for the work you care about.

This free one-hour webinar offers a calm, non-technical look at where AI stands right now, including emerging trends in how it’s being used, how work is beginning to shift, and the real questions showing up in libraries.

We'll also bring your colleagues' voices into the conversation. When you register, you'll have a chance to respond to a short survey, and we'll share what people are curious about, concerned about, and hoping to understand better.

In this free webinar, you will:

  • Understand how AI has moved from experimental to practical, in plain language
  • See current trends in how libraries and other organizations are using AI
  • Hear what your peers are thinking, based on anonymous survey responses
  • Identify practical questions worth discussing with your team or organization
  • Leave with a clearer sense of what to pay attention to next, without overwhelm

This session is open to library staff, leaders, trustees, partners, and anyone curious about how AI is shaping library work and services. No technical background needed.

This is a live, online 1-hour event. Live attendance is not required. The recording and the slide deck will be released immediately to registrants for unlimited post-event viewing.

DATE:

  • Tuesday, February 17th, 2026, from 12:00 - 1:00 PM US Eastern Time

COST:

  • Free

TO REGISTER:

  • Click HERE to register and fill out an optional short survey (we hope you will!). You will receive an email within a day with information on how to attend the webinar live and how you can access the permanent webinar recording.

NOTE: Please check your spam folder if you don't receive your confirmation email within a day.

CRYSTAL TRICE

With over two decades of experience in libraries and education, Crystal Trice is passionate about helping people work together more effectively in transformative but practical ways. As founder of Scissors & Glue, LLC, Crystal partners with libraries and schools to bring positive changes through interactive training and hands-on workshops. She is a Certified Scrum Master and has completed a Master's Degree in Library & Information Science and a Bachelor's Degree in Elementary Education and Psychology. She is a frequent national presenter on topics ranging from project management to conflict resolution to artificial intelligence. She currently resides near Portland, Oregon, with her extraordinary husband, fuzzy cows, goofy geese, and noisy chickens. Crystal enjoys fine-tip Sharpies, multi-colored Flair pens, blue painter's tape, and as many sticky notes as she can get her hands on.
