Finally, Google is Taking Action Against Nonconsensual Deepfakes

Sometimes it takes someone with Taylor Swift's profile to force change. In January, explicit deepfakes of her went viral on X, sparking public outrage. Nonconsensual explicit deepfakes are among the most harmful uses of AI, and the recent boom in generative AI has made the problem worse, with notable cases targeting children and female politicians.

Although Swift's deepfakes were distressing, they significantly raised awareness of the dangers and spurred tech companies and lawmakers to act. Henry Ajder, a generative-AI expert who has studied deepfakes for nearly a decade, says we are at a crucial juncture: pressure from lawmakers and heightened public awareness are finally forcing tech companies to address the issue.

Recently, Google announced steps to prevent explicit deepfakes from appearing in search results. The company is simplifying the process for victims to request the removal of nonconsensual explicit imagery and will filter explicit results on similar searches to prevent duplicate images from reappearing. Additionally, Google will downrank search results leading to explicit fake content and prioritize high-quality, non-explicit content when searches include someone’s name.

Ajder views Google’s changes positively, stating that they significantly reduce the visibility of nonconsensual pornographic deepfakes, making it much harder for people to access such content.

Earlier this year, I wrote about three strategies to combat nonconsensual explicit deepfakes: regulation, watermarks to detect AI-generated content, and protective shields that make it harder for attackers to use our images. Eight months later, watermarks and protective shields remain experimental, but regulation has made some progress. The UK has banned the creation and distribution of nonconsensual explicit deepfakes, prompting a major site, Mr DeepFakes, to block access for UK users. The EU's AI Act, now in effect, requires clear disclosure when material is AI-generated. In the US, the Senate passed the Defiance Act, which would allow victims to seek civil remedies for sexually explicit deepfakes, though it still needs to pass the House.

However, more work remains. Google could go further by removing deepfake sites from search results entirely, as it does with child pornography websites. Ajder also points out that Google's announcement addressed only images, not deepfake videos.

Reflecting on my earlier article, I realize companies could do far more. While Google's changes are a good start, app stores still host apps for creating nude deepfakes, and payment providers still process payments for them. Ajder urges a radical shift in how we view nonconsensual deepfakes, calling for action akin to that taken against child pornography.

“This issue should be viewed and treated online with the same revulsion as child pornography,” Ajder states. “All platforms need to take action.”

Deeper Learning

End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s, referred to as Sophie, suffered a hemorrhagic stroke that left her with significant brain damage. Her family struggled to decide on her medical care, a challenge common in such situations that distresses everyone involved, including, in this case, Sophie's doctors.

Enter AI: David Wendler, a bioethicist at the US National Institutes of Health, argues that AI could help surrogates make these tough decisions by predicting what patients themselves would want. Wendler and his team are developing an AI tool to assist surrogates. Read more from Jessica Hamzelou here.

Bits and Bytes

  1. OpenAI has released a new ChatGPT bot that you can talk to
    • This new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants, similar to Siri and Alexa but with more advanced conversational capabilities. (MIT Technology Review)
  2. Meta has scrapped celebrity AI chatbots after they fell flat with users
    • Less than a year after introducing AI chatbots based on celebrities like Paris Hilton, Meta is discontinuing the feature. Users showed little interest in chatting with AI celebrities. Instead, Meta is launching AI Studio, allowing creators to make AI avatars to chat with fans. (The Information)
  3. OpenAI has a watermarking tool to catch students cheating with ChatGPT but won’t release it
    • The tool can reportedly detect AI-generated text with 99.9% accuracy, but OpenAI has not released it, fearing it might deter users from its AI products. (The Wall Street Journal)
  4. The AI Act has entered into force
    • The EU’s AI Act, now in effect, requires companies to start complying with new rules aimed at mitigating the worst harms of AI. The law will significantly change how AI is developed and used in the EU and beyond. (The European Commission)
  5. How TikTok bots and AI have powered a resurgence in UK far-right violence
    • Following a tragic stabbing incident involving three girls in the UK, there has been a surge in far-right violence. Rioters have used AI to generate images and music inciting hatred, which have spread widely due to recommendation algorithms. (The Guardian)