Spencer Municipal Utilities' Website Compass

Beyond the Basics

Deepfakes Can Seem Real
Improved voice cloning technology produces believable audio impersonations

Early in 2023, a criminal attempted to extort $1 million from an Arizona-based woman whose daughter he claimed to have kidnapped. Over the phone, the distraught mother heard what sounded like her daughter yelling, crying, and frantically pleading for help. It wasn't her daughter. It was deepfake audio enabled by artificial intelligence (AI).

Easy to Make, Hard to Spot

Deepfakes have been around for years, but voice cloning software previously produced robotic, unrealistic voices. With today's stronger computing power and more refined software, deepfake audio is far more convincing. As is the case with many technological advances, criminals are early adopters, taking advantage of the nefarious opportunities the technology provides.

With voice cloning technology such as VoiceLab, ElevenLabs' AI speech software, all it may take to create a convincing impersonation is a short audio clip of the targeted person's voice, pulled from a video posted to a social media platform like Facebook or Instagram. The technology uses AI tools that analyze millions of voices from various sources and spot patterns in the elemental units of speech (called phonemes). A person simply types in what they want the targeted voice to say, and a deepfake audio clip can be created. (A simplified sketch of this workflow appears at the end of this article.)

In addition to improvements in the power of voice cloning technology, two other factors are leading to more deepfakes. First, the technology is increasingly affordable; some software offers basic features for free and charges less than $50 a month for a paid version with advanced features. Second, the tools are easy to use, thanks to the growing number of training videos posted online. Unfortunately, this means almost anyone can create deepfake audio meant to deceive listeners, opening the floodgates to fraudulent activity.

Many Fraudulent Uses

The kidnapping example mentioned earlier is just one of many ways deepfake audio is being used. Criminals are also impersonating people including:
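To make the "type text, get audio" workflow described under "Easy to Make, Hard to Spot" concrete, here is a minimal sketch of what a request to a voice cloning service might look like: a few seconds of reference audio plus a typed script. Everything here is hypothetical; the URL, field names, and response format are illustrative placeholders, not any real vendor's API.

    import requests

    # Hypothetical voice cloning service; the URL and fields are placeholders.
    API_URL = "https://api.example-voice-clone.invalid/v1/speech"

    def clone_and_speak(reference_clip_path: str, text: str) -> bytes:
        """Upload a short reference clip plus a typed script; get back audio.

        This mirrors the workflow described above: the service models the
        target voice's phoneme patterns from the clip, then renders the text.
        """
        with open(reference_clip_path, "rb") as clip:
            response = requests.post(
                API_URL,
                files={"reference_audio": clip},  # a few seconds of the target voice
                data={"text": text},              # what the cloned voice should "say"
                timeout=30,
            )
        response.raise_for_status()
        return response.content  # synthesized audio bytes (e.g., WAV or MP3)

The point of the sketch is how little an attacker needs: one short clip and a few lines of code, which is exactly why the scams described in this article have become so common.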

RkJQdWJsaXNoZXIy MTMzNDE=