Deepfake Removal Service
What is a Deepfake?
A deepfake is AI-generated media that takes a person's face and places it on another person's body without their consent. Deepfakes became popularized in 2017, and most are created as pornography; the technique is also known as face swapping. Deepfakes are generally created online using software, and some sites exist specifically to generate them for pornographic consumption or misinformation. This manipulation of an individual's face relies on machine learning and generative artificial intelligence, typically neural network architectures such as autoencoders or generative adversarial networks. Deepfakes most commonly target celebrities, influencers, and social media personalities, and political figures have also appeared in deepfakes created to promote opposition narratives. The technology is advancing rapidly, while legislation and public policy are struggling to keep pace.
Are Deep Fakes Legal?
Take It Down Act
Donald Trump signed the Take It Down Act on May 19, 2025, making it illegal to share nonconsensual explicit images online. The law also requires tech platforms to remove deepfakes within 48 hours of being notified about them.
The newly enacted law establishes protections for victims of revenge pornography and nonconsensual, AI-generated sexual imagery. It aims to enhance accountability for the technology platforms that facilitate the dissemination of such content and provide law enforcement with clearer frameworks for prosecuting these offenses. Historically, federal legislation prohibited the creation and distribution of realistic, AI-generated explicit images of minors; however, protections for adult victims were inconsistent, varying significantly by state and lacking comprehensive national coverage.
The Take It Down Act represents a significant advancement as one of the first federal laws in the United States specifically designed to address the potential risks associated with AI-generated content amid the rapid progression of this technology.
Various states have also passed laws against the creation and dissemination of deepfake pornography.
- Texas – A law against the use of deepfakes to influence elections
- Virginia – A law against deep fake pornography
- California – Laws against manipulative deepfakes within 60 days of an election and against non-consensual deepfake pornography
- New York – An amendment to the NDII law that includes “deep fake” images in the definition of “unlawful dissemination or publication of intimate images.”
Creators and distributors of deepfakes can potentially be held liable for producing and sharing such material without the subject's consent.
How Do You Remove A Deepfake?
We remove deepfakes the same way we remove other harmful information online. We initially contact the creator or distributor of the information for removal. If necessary, we then contact the platform. Lastly, we pursue legal means if our initial efforts are unsuccessful.
What Other Types Of A.I. Generated Disinformation Are There?
AI-generated disinformation takes many forms, including:
- Voice Clones
- Chatbots
- Deepfake videos
- Deepfake images
- Synthetic media in the form of audio, video, digital art, photographs, etc.
What Can I Do To Protect Myself From Deepfakes On The Internet?
Deepfakes violate one's privacy, personal data, and human rights. You can become a victim of a deepfake when your photos are used to generate AI-created media, or through cybercrime and phishing, where a deepfaked video of a family member or co-worker is used to coerce you.
To minimize the risk of your photos being used in a deepfake, consider avoiding social media or making all your profiles private. Limit the individuals you allow to access your profiles and avoid placing your face on public forums and dating sites.
If you ever receive a video of a family member, co-worker, or friend requesting money, login credentials, account numbers, PINs, or other sensitive information, double-check with the person and verify that it is indeed them making the request.