Three measures to protect oneself against AI deepfakes

Navigating this new, AI-driven world can feel like walking through a minefield. Here are three steps your business can take to strengthen its cybersecurity.

When AI becomes a digital deceiver

The rain lashed against the window of Jonas Nordmann's office, where he sat working late to finish the accounts. Suddenly, the phone rang. The voice on the other end was familiar; he had worked for Markus for several years. "Hey Jonas, we have a little crisis here: a payment to a subcontractor was never made. I need you to transfer 2 million kroner to the account I just emailed you the details for. It's urgent." Markus sounded agitated, and Jonas sensed the urgency in his tone.

Without hesitation, he started entering the transfer details. But just as he was about to press "send," doubt struck him hard in the chest. Something was wrong. There was a discrepancy in the tone, something he couldn't quite put his finger on.

With a sudden sense of panic, he dialed the direct number of Markus, the company's CEO. After several attempts, Markus finally answered. "Hello, Jonas. How can I help you?" he said in his usual voice.

Jonas instantly knew he had been deceived. He had fallen victim to one of the most alarming threats posed by generative AI technology: voice cloning.

A real threat

This scenario might sound like science fiction, but it's a genuine threat that businesses face in today's digital landscape. Several companies have already been duped. For instance, a director at a Japanese company was tricked into transferring $35 million to scammers in a nearly identical scenario, and a British energy company was defrauded of €220,000.

The phenomenon known as deepfakes isn't new; it has been known for several years that AI can be used to create fake imitations of well-known individuals, provided there is plenty of video or audio data available to train a clone. Until now, training a deepfake AI has been relatively expensive and technically challenging, which has limited the threat's reach.

However, with recent advances in generative AI, this is changing. While applications like ChatGPT for text generation and Midjourney for image generation have made the most headlines, companies like ElevenLabs and Synthesys have developed AI models that can analyze and mimic human voices with astonishing accuracy, requiring very little training data.

ElevenLabs markets itself as requiring as little as a five-second audio clip to create a credible voice clone. While that claim needs some qualification, the reality is that you no longer need large amounts of audio of a person to create a copy that mimics their voice with high accuracy, including tone and intonation.

In the example above, our CEO, Markus, might have spoken on stage at a conference, with the video posted online. The fraudster simply feeds the audio into an AI program, and they have everything they need. The underlying model has already been trained to understand how different languages work; it needs only a little fine-tuning to take on a particular voice and express everything it has learned.

This technology also holds the potential to drive innovation and create positive changes. AI-driven translation programs, for example, can translate from one language to another so quickly that it seems simultaneous, making it sound like the speaker is speaking an entirely different language than they actually are. But it also opens up new and complex threats.

1. Understand the basic mechanisms

So, what can we do to avoid being deceived? First and foremost, we need to stop treating AI as a magical black box and start understanding its basic mechanisms. Have you ever noticed that many of us double-check the door lock before going out, yet don't think twice about sending money on the strength of a simple phone call from the boss? The future may require us to get better at questioning authority, and yes, that includes Markus or whoever your top executive might be.

2. Implement two-factor authentication

Any other good habits? Well, how about requiring some form of two-factor authentication for urgent or large transfers? It's like a digital handshake, a confirmation that it's really you.
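To make the idea concrete, here is a minimal sketch of what such a "digital handshake" could look like in code. Everything in it is illustrative: the class and method names (`PendingTransfer`, `issue_challenge`, `confirm`) are invented for this example, and a real system would deliver the one-time code through a genuinely separate channel, such as an authenticator app.

```python
import secrets
import hmac

class PendingTransfer:
    """Illustrative sketch: a transfer stays pending until the
    request is confirmed over a second, independent channel."""

    def __init__(self, amount, account):
        self.amount = amount
        self.account = account
        self._challenge = None
        self.approved = False

    def issue_challenge(self):
        # One-time code, to be delivered out-of-band (e.g. an
        # authenticator app), never on the same channel the
        # request arrived on -- a cloned voice can't read it back.
        self._challenge = f"{secrets.randbelow(1_000_000):06d}"
        return self._challenge

    def confirm(self, code):
        # Constant-time comparison, so timing doesn't leak the code.
        if self._challenge and hmac.compare_digest(code, self._challenge):
            self.approved = True
        return self.approved

transfer = PendingTransfer(amount=2_000_000, account="example-account")
code = transfer.issue_challenge()   # sent via the second channel
assert not transfer.approved        # nothing moves until confirmed
assert transfer.confirm(code)       # second factor verified
```

The point is less the code itself than the rule it encodes: no single channel, however convincing the voice on it, should be enough to move money.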

And don't underestimate the power of good old human intuition. If something feels off, it probably is. That's where training comes in. Ensure that your team understands the possibilities new technology brings and what they're up against, so they know what to look for and which protocols to follow when alarm bells ring.

3. Utilize the technology that generates deepfakes to detect deepfakes

Let's not forget that technology can also be our ally here. Efforts are underway to develop systems, often based on the same generative AI technology, designed to detect synthetic voice clones. One example is a company called Mobbeel, which has developed what it calls Voice Biometrics: a system that can compare a voice on the phone against a genuine voice registered in its systems, much like a fingerprint. If Markus calls and asks you to transfer a couple of million, perhaps it wouldn't hurt to have an app that can confirm it's actually him?
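At its core, voice biometrics of this kind boils down to comparing numerical "voiceprints." The sketch below assumes a hypothetical model has already turned each voice sample into a fixed-length vector; the toy vectors and the 0.85 threshold are made up for illustration, and real products like Mobbeel's use their own proprietary models and tuning.

```python
import math

def cosine_similarity(a, b):
    """How closely two voiceprint vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_speaker(enrolled, incoming, threshold=0.85):
    # The threshold is a made-up value; real systems tune it to
    # balance false accepts against false rejects.
    return cosine_similarity(enrolled, incoming) >= threshold

# Enrolled voiceprint vs. two incoming calls (toy 3-dimensional vectors;
# real embeddings have hundreds of dimensions):
markus = [0.9, 0.1, 0.4]
caller_genuine = [0.88, 0.12, 0.41]
caller_suspect = [0.1, 0.9, 0.2]

print(same_speaker(markus, caller_genuine))  # similar vectors -> True
print(same_speaker(markus, caller_suspect))  # dissimilar -> False
```

The hard part, of course, is producing embeddings that separate a genuine speaker from a high-quality clone; that is exactly the arms race between generative models and detectors that this section describes.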


Navigating this new, AI-driven world can feel like walking through a minefield. But with the right combination of technology, education, and a bit of common sense, we can all sleep a little easier at night. So, the next time the phone rings and someone rushes you to press "send," take a moment. That small pause could save you from a future crisis.