By ESET Expert

The grand theft of Jake Moore’s voice: The concept of a virtual kidnap


With powerful AI, it doesn’t take much to fake a person virtually, and while voice cloning has its limitations, it can have dangerous consequences.


Late one night, while mindlessly scrolling through YouTube, I stumbled upon a video that shed light on a disturbing scam utilizing voice AI platforms. It revealed the potential abuse of this technology in a practice known as virtual kidnapping. This article explores the concept behind virtual kidnappings, the methods employed, and the implications of such a scam.


Understanding virtual kidnapping


Virtual kidnapping is a scam that capitalizes on the fear and panic that arise when someone believes their loved one has been kidnapped. Rather than physically abducting the victim, the scammer aims to extort money or gain some advantage by creating a convincing illusion of a kidnapping.


Traditional low-tech method


One of the more traditional approaches to virtual kidnapping involves spoofing the victim’s phone number. The scammer would call a member of the victim’s family or one of the victim’s friends, creating a chaotic atmosphere with background noise to make it seem like the victim is in immediate danger. The scammer would then demand a ransom for the victim’s safe return.


To enhance the credibility of the scam, perpetrators often utilize open-source intelligence (OSINT) to gather information about the victim and their associates. By monitoring social media accounts, for example, scammers can target individuals who are known to be traveling or away from home, making the ruse far more plausible.


High-tech voice cloning


A more advanced and refined version of virtual kidnapping involves obtaining samples of the victim’s voice and using AI platforms to create a clone of it. The scammer can then call the victim’s family or friends, impersonating the victim and making alarming demands.


Feasibility of voice cloning


To demonstrate the feasibility of voice cloning, I decided to experiment with free AI-enabled video and audio editing software. By recording snippets of Jake Moore’s well-known voice — Jake is ESET’s Global Security Advisor — I attempted to create a convincing voice clone.


Using the software, I captured Jake’s voice from various videos available online. The tool generated an audio file and a transcript, which I then submitted to the AI-enabled voice cloning service. Although I was skeptical that the experiment would succeed, within 24 hours I received an email notification stating that the voice clone was ready for use.
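
For readers curious about the mechanics, the workflow described above boils down to two steps: extracting an audio sample from a video and uploading it to a cloning service. Below is a minimal Python sketch of that flow. Since the article does not name the service, the endpoint URL, API key, and request fields are hypothetical stand-ins; any real service's API will differ.

import subprocess

import requests

# Hypothetical endpoint and key for a generic voice-cloning service;
# these are illustrative placeholders, not a real API.
CLONE_API_URL = "https://api.voice-clone.example/v1/voices"
API_KEY = "YOUR_API_KEY"


def extract_audio(video_path, out_path="sample.mp3"):
    # Pull the audio track out of a downloaded video with ffmpeg;
    # "-vn" drops the video stream, leaving audio only.
    subprocess.run(["ffmpeg", "-y", "-i", video_path, "-vn", out_path], check=True)
    return out_path


def submit_voice_sample(audio_path, voice_name):
    # Upload the sample; such services typically process it asynchronously
    # and send an email notification once the clone is ready.
    with open(audio_path, "rb") as f:
        response = requests.post(
            CLONE_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            data={"name": voice_name},
            files={"sample": f},
            timeout=60,
        )
    response.raise_for_status()


submit_voice_sample(extract_audio("interview_clip.mp4"), "demo-voice")

The point is not the specific API but how little an attacker needs: a few minutes of publicly available audio and an upload form.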


Here are the results:

[Embedded audio: the cloned-voice demo]



