January 2, 2026·Media
Media Narratives January 2026
Pulse·article
AI, Social Media, and Radicalization Pressuring Already Beleaguered Media Trust Narratives
Narratives of Concern About AI-Generated News Content Reach Record Density
December's data revealed that language expressing concern about how much artificial intelligence is involved in producing news content reached roughly six and a half times typical baseline readings, the highest density among all tracked signatures this month. This represents a 56-point increase from November's already elevated levels.
The concern isn't theoretical. AI-generated videos have flooded social media platforms, fooling millions of users even when warning labels are present. Apps like OpenAI's Sora are producing content realistic enough that viewers struggle to distinguish synthetic from authentic footage. The Reuters Institute found that 19% of people report encountering AI bullet-point summaries of news stories, while 16% have interacted with AI chatbots answering questions about current events. This represents a fundamental shift in how audiences encounter journalism.
The business implications are equally significant. While AI-generated summaries may reduce clicks on original articles, publishers participating in pilot programs will receive direct payments from Google, a potential restructuring of journalism's economic foundations. Yet the year also brought cautionary tales: brands and publishers faced backlash over AI-driven content experiments that produced factual errors, and newsrooms have stumbled repeatedly since 2022 over inaccurate AI-written articles and fabricated quotes.
These concerns about synthetic journalism intersect with broader anxieties about media consumption's cognitive effects. Language about perpetual media exposure altering brain structure rose by 62 points to 220 percent above average, while discussion of brief content formats degrading comprehension increased by 20 points to just over double typical levels.
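For readers tracking how these figures relate to one another, here is a minimal arithmetic sketch in Python. It assumes that "points" means percentage points measured against a signature's long-term mean; that reading is our inference from the figures above, not a statement of Perscient's internal methodology.

```python
# Toy conversion between "multiple of baseline" and "points above the long-term mean".
# Assumption: "points" are percentage points relative to a signature's long-term average.

def points_above_mean(multiple_of_baseline: float) -> float:
    """Convert a multiple of the long-term baseline into percentage points above it."""
    return (multiple_of_baseline - 1.0) * 100.0

december_multiple = 6.5                                  # "roughly six and a half times typical baseline"
december_points = points_above_mean(december_multiple)   # 550 points above the mean
november_points = december_points - 56                   # the reported 56-point month-over-month increase

print(f"December: {december_points:.0f} points above the mean (~{december_multiple:.1f}x baseline)")
print(f"Implied November: {november_points:.0f} points above the mean "
      f"(~{november_points / 100 + 1:.1f}x baseline)")
```

On that reading, November's level would have sat near 5.9 times baseline, consistent with the "already elevated levels" noted above.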
Deepfake Threats Dominate Social Media Discourse
The density of language claiming that deepfakes on social media platforms (a close cousin of AI-produced news content) represent a serious problem held at 344 percent above its long-term average, already the second-highest reading among tracked signatures. This sustained elevation suggests that deepfake concerns have become embedded in how people think about social media rather than representing a passing worry.
The technological reality justifies this concern. Cybersecurity firm DeepStrike estimates that online deepfakes increased from roughly 500,000 in 2023 to about 8 million in 2025, growth the firm characterizes as approaching 900% annually. In everyday scenarios like low-resolution video calls and social media feeds, deepfakes have become effectively indistinguishable from authentic recordings for nonexpert viewers.
Voice cloning has crossed what researchers call the "indistinguishable threshold," where a few seconds of audio suffice to generate convincing replicas. Some major retailers report receiving over 1,000 AI-generated scam calls per day, with hyper-realistic voice cloning fueling fraud while eroding trust in digital communication. The Nomani investment scam surged 62%, driven by AI deepfake advertisements on social media, illustrating how synthetic media enables fraud at unprecedented scale.
These deepfake concerns exist within a broader narrative about how social platforms manipulate users. As we'll discuss in more detail below, the density of language arguing that social platforms are actively radicalizing young men rose by 14 points to just over double its long-term average, while discussion of attention spans shortened by exposure to social media and disposable AI-generated slop content increased by 39 points to 96 percent above baseline. Talk of platforms weaponizing outrage for engagement rose by 43 points to 95 percent above typical levels. In short, the media is narrating the story of its own weaponization, and of its role in long-term damage to society, in real time.
Child Safety and Social Media Harms Drive Legislative Action
Intensifying concern about synthetic media and platform manipulation has fueled concrete policy responses, particularly around protecting children. Language asserting that social media harms children surged by 46 points to reach 172 percent above average, one of the month's largest increases. The rise accompanied legislative action, with bipartisan legislation introduced to establish a minimum age of 13 for social media use and to prevent companies from feeding algorithmically targeted content to users under 17.
New York Governor Kathy Hochul signed legislation in December 2025 requiring social media sites with addictive features like autoplay, infinite scroll, and algorithmic feeds to display warning labels to all users under 18. The law's sponsors were explicit about their rationale: "The growing evidence is clear: social media is making kids more depressed, more anxious, and more suicidal." At least 45 states and Puerto Rico have 300 or more pieces of legislation pending in 2025 related to social media and children.
Australia became the first country to ban social media for children under 16, with France targeting a similar ban from September 2026. Yet UNICEF warned that age-related bans may not actually keep children safe, suggesting the policy response remains uncertain despite clear political momentum.
The density of language discussing body image problems among young adults rose by 28 points to 38 percent above average, while language tied to concerns about the effects of online bullying increased by three points to 28 percent above the long-term mean. Policymakers' attention centers on children's exposure to harmful content and on mental health impacts, though these harms extend beyond underage users: addictive design features and negative psychological effects reach users of all ages.
Young Men and Online Radicalization Narratives Intensify
As highlighted in other recent Pulse reports, the policy focus on protecting children from platform harms parallels growing concern about how social networks affect young men specifically. The density of language claiming that social networks radicalize young men rose by 14 points to just over double its long-term average. Analysis of over 2,200 posts by Andrew Tate reveals that his content, often framed as self-improvement, serves as a gateway to misogynistic and extremist ideologies, mirroring radicalization tactics used by established extremist groups.
Research shows that online misogynistic content mostly targets young men ages 13 to 25 who report feelings of social isolation, and that it often appears as inspirational self-improvement material carrying both covert and overt misogynistic views. Platform algorithms reinforce radical ideologies: young users who engage with extremist content are frequently shown more of the same, creating echo chambers in which radical views become normalized.
A think tank executive warned how easily young men can now find themselves engaging with far-right extremist content, with vulnerable people being "groomed" into terrorist groups through social media and messaging platforms. This appears to be part of a broader rise in social media's polarizing effects: the density of Perscient's semantic signature tracking language describing social media echo chambers polarizing American politics rose by six points to 34 percent above average.
Discussion of parasocial relationships replacing traditional authority declined by 11 points but remained at 22 percent above the long-term mean, while language about meme culture replacing public discourse fell by four points to 28 percent above average. The data suggests that while some aspects of online culture's impact on young men may be moderating, the core concern about radicalization remains firmly established.
Trust in Traditional News Reaches Historic Lows Amid Industry Transformation
The concerns about synthetic content, platform manipulation, and radicalization occur against a backdrop of collapsing confidence in traditional journalism. Language arguing that Americans are losing trust in media rose by 26 points to 89 percent above its long-term average in December. Survey data confirm the trend: Americans' confidence in mass media edged down to a new low, with just 28% expressing trust in newspapers, television, and radio to report the news fully, accurately, and fairly, down from 31% last year and 40% five years ago.
The decline appears across all major partisan groups, though Republicans' confidence now sits in single digits while independents remain largely skeptical. In 2016, 51% of U.S. adults said they followed the news all or most of the time, but that share fell to 36% in 2025—suggesting that Americans are not just distrusting news but disengaging from it entirely.
Accordingly, Perscient's semantic signature tracking the density of language suggesting that traditional news is a dying industry increased by 12 points to 67 percent above average, while language attacking mainstream media as "the enemy of the people" rose by nine points to 57 percent above the long-term mean. Globally, overall trust in news has held at 40% for the third year in a row across 47 markets, and the sheer abundance of news appears to push audiences toward sources they see as impartial, accurate, and transparent.
If there is reason for encouragement, it is that the countervailing narrative describing a free press as democracy's lifeblood increased by six points to 57 percent above its long-term mean. Reuters Institute research found that respondents wanted journalists to spend their time investigating powerful people and providing depth rather than chasing algorithms for clicks, suggesting that audiences still value traditional journalistic functions even as they express distrust in the institutions delivering them. The tension between journalism's perceived importance and its collapsing credibility defines the industry's current moment.
Archived Pulse
December 2025
- Deepfake Concerns and AI-Generated News Content Dominate
- Social Media Harms to Children Drive Policy and Platform Changes
- Young Men and Online Radicalization Narratives Gain Attention
- Influencers Challenge Traditional Media Power Structures
- Trust, Misinformation, and Algorithm Concerns Intensify Across Media
November 2025
- Deepfakes Reach Critical Mass as Detection Becomes Near-Impossible
- AI-Generated News Content Proliferates Across American Journalism
- Newsletter Renaissance Accelerates as Substack Transforms Media Distribution
- Attention Spans Collapse Under Weight of Short-Form Content
- Social Listening Emerges as Alternative to Traditional Polling Methods
Pulse is your AI analyst built on Perscient technology, summarizing the major changes and evolving narratives across our Storyboard signatures, and synthesizing that analysis with illustrative news articles and high-impact social media posts.
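For illustration only, the toy sketch below shows what tracking a single signature's density against its long-term mean could look like. Every function name and number is invented for the example; it is not a description of Perscient's actual pipeline.

```python
# Hypothetical sketch of tracking a semantic signature's density against its long-term mean.
# All names and figures are illustrative; this is not Perscient's actual methodology.

from statistics import mean

def density(matches: int, documents: int) -> float:
    """Share of sampled documents that match a signature's phrase set."""
    return matches / documents

def percent_above_mean(current: float, long_term: float) -> float:
    """Express the current density as percentage points above the long-term mean."""
    return (current / long_term - 1.0) * 100.0

# Invented monthly densities for a single signature (matching documents per sample).
history = [0.012, 0.011, 0.013, 0.012, 0.014, 0.013]     # long-term window
november = density(matches=310, documents=10_000)         # 0.0310
december = density(matches=345, documents=10_000)         # 0.0345

baseline = mean(history)
nov_reading = percent_above_mean(november, baseline)
dec_reading = percent_above_mean(december, baseline)
print(f"November: {nov_reading:.0f} points above the long-term mean")
print(f"December: {dec_reading:.0f} points above the long-term mean")
print(f"Month-over-month change: {dec_reading - nov_reading:+.0f} points")
```

The design point of a metric like this is that each reading is a ratio to that signature's own long-term baseline, so month-over-month "point" changes can be compared across signatures with very different absolute volumes.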

