The Ominous Rise of the AI Voice Scam

January 15, 2024

A few months ago, and without much fanfare, Apple released a rather important new feature on its devices. Called ‘Personal Voice’, the tool asks users to record themselves speaking a variety of sentences. The software then analyzes those recordings, encoding the distinctive characteristics of the user’s voice into a model. Once setup is complete, users can type whatever they want into their device and listen as it reads the text back in their own voice.
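For developers curious what this looks like in code, here is a minimal sketch of speaking text in a Personal Voice through Apple’s AVSpeechSynthesizer API (iOS 17 or later). It assumes the user has already created a Personal Voice in system settings; the function name is illustrative, not part of Apple’s API.

```swift
import AVFoundation

// Keep a strong reference so the synthesizer isn't deallocated mid-utterance.
let synthesizer = AVSpeechSynthesizer()

// Illustrative helper: speak `text` using the device owner's Personal Voice.
func speakInPersonalVoice(_ text: String) {
    // Access to Personal Voices is gated behind an explicit user authorization prompt.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }

        // Find the first installed voice flagged as a Personal Voice
        // (assumes the user has already recorded one in Settings).
        guard let voice = AVSpeechSynthesisVoice.speechVoices()
            .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else { return }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}
```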

Personal Voice makes for a neat party trick, as the spectacle of phones and computers talking out loud in their owners’ voices can be used for comedic effect. The technology has more serious uses as well, however. Imagine being able to preserve the voice of a loved one who is losing their ability to speak. Imagine being a child with a dying parent, and how meaningful it would be to carry that parent’s voice with you as you grow up, and listen to it saying how proud they are, and how much they love you.

But every tool can also be used in ways it wasn’t designed for. As voice cloning software is added to the cybercriminal’s toolbox, new types of scams have begun to surface. A familiar voice that can bring joy can also be made to cause panic, and to separate people from their money. “Mom, I’ve been arrested,” one such scam goes. “I’ll explain the details later, but for now I just need you to wire some money to help get me out of jail. Use this link …”

Another version: “Dad, I’ve been kidnapped! I’m so scared. They’re gonna kill me if you don’t pay them. Send $10,000 to this bank account — and please hurry!”

Voice cloning technology is already good enough to fool most people, particularly if they are unprepared for this type of scam. And the software will only get better over time; it is already possible to create a convincing fake audio (or even video) clip of a real person from just a small sample of their voice or face uploaded to an AI model. In fact, anyone who has ever posted a clip of themselves to YouTube, Facebook, or Instagram has probably made enough of their voice and face publicly available for a scammer to take advantage of.

As a result, AI-generated video is becoming a major source of fraud and harassment as well. According to the nonprofit organization Control AI, deepfakes involving either audio or video content are now a mainstream concern. “Since 2019 the amount of deepfake videos has increased by 550%,” it says, adding that nearly half of surveyed US businesses have reported experiencing deepfake voice fraud, and that the first quarter of 2023 saw more cases of deepfake fraud than the entirety of 2022.

In some countries, regulators are trying to catch up to the scammers — but they face an uphill battle. For the foreseeable future, it is up to ordinary people and businesses to practice digital self-defense. How, then, can people defend themselves against such tactics? Here are some ways:

Healthy habits for identity protection

  1. Guard your valuables. If anyone, even a trusted person, contacts you online to ask for financial help or something similarly sensitive, verify their identity carefully before providing what they are asking for. The rest of this list offers ways to do that.
  2. Establish passwords. Have a code word or phrase known only to you, your family, and your very close inner circle. If you receive an online request from someone who should know the password but doesn’t, then you may be the target of a scam.
  3. Call them back. If you receive a phone call from someone close to you asking for money, tell them you’ll call them right back. Then call the number you already have saved for them in your phone’s address book. If they answer and deny having contacted you a moment earlier, the original caller was almost certainly a scammer.
  4. Lay a trap. Ask the caller a question about a nonexistent friend or family member; if they play along instead of being confused, they’re a scammer.
  5. Contact the police. If you’re being openly blackmailed by someone who is threatening to ruin your reputation by publishing an AI-generated video with your likeness, law enforcement may be able to help trace the criminal and bring them to justice.

By following these guidelines, you can protect yourself from cyberthieves who may target you using voice or video cloning. The stakes are high; fake-kidnapping scams alone typically extract $10,000 or more from each of their victims.

Finding our digital voice

Although AI represents a new tool for criminals, it makes ordinary people more capable as well. Apple’s Personal Voice is one example, but there are many others. Some months ago, a song with the voices of Drake and The Weeknd went viral online — even though neither artist actually worked on it. The musician Grimes actively encourages fans to generate and publish new songs using her voice, as long as they split the profits with her afterwards.

Grimes’ decision to accept the reality of this new technology, and to adapt to it, is perhaps the most admirable response yet to the AI voice cloning phenomenon. The technology is here to stay, and we had all better learn to live with it: protecting against the dangers it poses while harnessing its creative potential to branch out in new artistic directions.
