Teen Dies After AI Sextortion Scam; Rise in 'Nudify' App Blackmail
A tragic incident in Kentucky has brought the dangers of AI-generated sextortion into sharp focus, as a 16-year-old boy’s suicide was linked to a blackmail scheme involving an AI-generated nude image. This case is part of a growing global crisis, with minors increasingly targeted by malicious actors using AI “nudify” apps to create and distribute non-consensual explicit content.
A Teen's Tragic Death Sparks Concern
Elijah Heacock, a 16-year-old from Kentucky, died by suicide this year after receiving threatening texts demanding $3,000 to prevent the distribution of an AI-generated nude image of himself. His parents later discovered the messages, revealing the extent of the digital blackmail that had ensnared their son.
John Burnett, Elijah’s father, spoke out in a CBS News interview, describing the perpetrators as “well organized, well financed, and relentless.” He emphasized that the threat does not rely on real images, as AI can generate convincing content that is just as harmful as actual photos.
The Rise of AI 'Nudify' Apps
The proliferation of AI “nudify” apps—tools designed to digitally remove clothing or generate explicit images—has exacerbated the problem. Originally developed for entertainment, these apps are now being weaponized against children, with predators using them to create fake intimate content for blackmail purposes.
The FBI has reported a “horrific increase” in sextortion cases targeting minors in the U.S., with victims typically being teenage males between the ages of 14 and 17. The agency has warned that the threat has led to an “alarming number of suicides,” highlighting the urgent need for action.
A Looming Global Crisis
A survey by Thorn, a nonprofit focused on preventing online child exploitation, found that 6% of American teens have been direct victims of deepfake nudes. These AI-generated images are increasingly being used not just for blackmail, but also for financial extortion and psychological harm.
The Internet Watch Foundation (IWF) reported that perpetrators no longer need to source real images from children: generative AI can produce images that are "convincing enough to be harmful," in some cases as damaging as real photos. Some predatory guides even explicitly encourage the use of nudifying tools to target minors.
The Profitability of the Nudify Industry
The nudify app industry is a lucrative business. An analysis of 85 websites selling such services estimated their combined value at up to $36 million annually, with some sites reportedly generating between $2.6 million and $18.4 million in the six months leading up to May.
Despite efforts by platforms and regulators to shut down these sites, many continue to operate, relying on major tech infrastructure from companies like Google, Amazon, and Cloudflare. This resilience has led some researchers to describe the fight against AI nudifiers as a “game of whack-a-mole.”
Global Responses and Ongoing Challenges
In response to the crisis, several countries have taken legislative steps. The UK made the creation of sexually explicit deepfakes a criminal offense, with penalties of up to two years in prison. In the U.S., President Donald Trump signed the bipartisan “Take It Down Act,” which criminalizes non-consensual publication of intimate images and mandates their removal from online platforms.
Meta recently filed a lawsuit against a Hong Kong-based company behind the nudify app Crush AI, accusing it of repeatedly circumventing the social media giant’s rules to post ads on its platforms. However, experts argue that these measures are not sufficient to fully address the scale and persistence of AI-generated content abuse.
A Call for Greater Action
As AI nudify tools continue to evolve, so too must the global response. Researchers and advocates are calling for stronger international cooperation, more robust regulation, and increased public awareness to protect children and young people from the growing threat of AI-generated abuse.
