January 29, 2026·Media

Peak Deepfake

Matt Zeigler·article

Deepfake concerns just hit the highest level we've ever tracked on the Panoptica Media Storyboard. It blew past the 2024 election peak, which surprised me, because this data point already looked extreme when I noticed it early this month. From November 2025 to January 2026, the line is almost vertical. That gap - missing a story this big - is exactly why I use these Storyboards.

[Figure: DEEPFAKES.png]

So what happened? We all knew deepfake concerns were only going to get worse, but why now?

xAI launched Grok's "Spicy" mode in August 2025. By late December 2025, non-consensual sexual imagery was being generated and shared at scale. The safeguards and norms that companies like Google and OpenAI had built - the ones tied to basic standards and social expectations - were completely overrun.

By late 2025, when the Signature really started accelerating, people weren't debating deepfakes in theory anymore. Deepfakes were happening to real people in real time, at a volume that made them impossible to ignore.

One woman's professional photograph became unprofessional within hours, without her consent. It cost her days of work and forced her to appeal to regulators when the platform wouldn't take the image down. She's Jane Doe in a class action lawsuit now. And hers is one of the milder allegations.

I wish I could tell you the other stories don't make this very serious story feel tame, but they do. The reports are harrowing if you opt to read them, and they range from awful to unspeakable.

Which is why the Signature below spiked alongside it. If you missed the scale of the prior story like I did, here's where it gets even worse:

[Figure: SOCIAL MEDIA IS HARMING.png]

The severity of these Signatures has triggered a global regulatory response moving at emergency speed. Legal actions followed quickly. Ofcom opened an investigation on January 12th. By January 26th: an EU probe, class action lawsuits (mentioned above), coordinated letters from 30+ state attorneys general, and scrutiny signals from various other countries. There's no debate over whether there was a problem. There's only an inquisition into how it happened at all.

The Doe v. xAI Corp. complaint alleges the company "knew the danger the technology could pose to women and girls but has chosen instead to capitalize on the internet's seemingly insatiable appetite for humiliating non-consensual sexual images."

They don't frame this as an innocent mistake, or as users acting without the company's knowledge or consent. They're saying xAI built the tool, set the parameters, and enabled the output. They're calling it a corporate choice, plain and simple.

This Storyboard tells us when theoretical concern became actual practice. It tells us when the public, the media, and the regulators realized the uncomfortable truth: safeguards, especially in artificial intelligence, aren't technical constraints. They're choices companies make, or don't.

xAI chose not to. The panic proves we all saw this coming - not because we predicted the moment, but because we understood what happens when a company decides safeguards aren't worth the friction.

Remember, you can track these Storyboards and many others at Panoptica.
