The Growing Threat Of Deepfake Pornography: How To Protect Yourself

"All we have to have is just a human form to be a victim." This statement from attorney Carrie Goldberg, who specialises in online abuse and sex crimes, captures the heightened risks posed by deepfake pornography in the era of artificial intelligence

The Growing Threat Of Deepfake Pornography: How To Protect Yourself

Join our WhatsApp Community to receive travel deals, free stays, and special offers!
- Join Now -

Join our WhatsApp Community to receive travel deals, free stays, and special offers!
- Join Now -

“All we have to have is just a human form to be a victim.” This statement from attorney Carrie Goldberg, who specialises in online abuse and sex crimes, captures the heightened risks posed by deepfake pornography in the era of artificial intelligence.

The alarming rise of AI-generated deepfake pornography poses a threat to anyone, whether or not they have ever shared explicit images online. Victims range from high-profile figures to everyday people, including minors, and the psychological toll on them is immense.

The technology behind deepfakes

Unlike revenge porn, which involves the non-consensual sharing of actual images, deepfake technology allows perpetrators to create entirely fabricated content by superimposing someone's face onto explicit photos or manipulating existing images to appear compromising. Even those who have never taken private photos can fall prey to this technology.

According to CNN, past high-profile cases have included celebrities such as Taylor Swift and Rep. Alexandria Ocasio-Cortez. But young people are also finding themselves targeted.

Protect yourself: Preserve evidence

For those who discover their image has been weaponised this way, the immediate instinct is often to try to get it removed. But Goldberg stresses the importance of first preserving evidence by taking screenshots. “The knee-jerk reaction is to get this off the internet as soon as possible. But if you want to be able to have the option of reporting it criminally, you need the evidence,” Goldberg was quoted as saying by CNN.

After documenting the content, victims can use tools provided by tech companies such as Google, Meta and Snapchat to request the removal of explicit images. Organisations like StopNCII.org and Take It Down also assist in facilitating the removal of harmful content across multiple platforms.

Legal progress

The fight against deepfake pornography has drawn rare bipartisan attention. In August 2024, US senators called on major tech companies like X (formerly Twitter) and Discord to participate in programmes aimed at curbing nonconsensual explicit content. A hearing on Capitol Hill featured testimonies from teenagers and parents affected by AI-generated pornography. Following this, a bill was introduced in the US to criminalise the publication of deepfake pornography. The proposed legislation would also require social media platforms to remove such content upon notification from victims.

Goldberg emphasises that while victims can take steps to respond, the onus is also on society to act responsibly. “My proactive advice is really to the would-be offenders which is just, like, don't be a total scum of the earth and try to steal a person's image and use it for humiliation. There's not much that victims can do to prevent this. We can never be fully safe in a digital society, but it's kind of up to one another to not be total a**holes,” Goldberg told CNN.