March 16, 2026
AI Narratives for March 2026
AI Investment Skepticism Rises as Productivity Debates and Deepfake Concerns Shape Media Narratives
Corporate Skepticism and the Productivity Paradox
Perscient's semantic signature tracking the density of language asserting that companies are becoming more skeptical of big AI investments now registers at 122—meaning that such language is 122% stronger than the long-term mean. This figure rose by 15 points over the past month.
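The signature readings cited throughout this article can be read as percent deviation from a long-term baseline. A minimal sketch of that interpretation, assuming a simple percent-above-mean formula inferred from the article's wording (Perscient's actual methodology is not published here):

```python
def signature_reading(current_density: float, long_term_mean: float) -> float:
    """Percent deviation of current language density from its long-term mean.

    Assumed interpretation: a reading of 122 means the density is 122% above
    (i.e. 2.22x) the long-term mean; negative readings fall below the mean.
    """
    return 100.0 * (current_density - long_term_mean) / long_term_mean

# A density 2.22x its baseline yields a reading of about 122;
# a density below baseline yields a negative reading.
print(round(signature_reading(2.22, 1.0)))
print(signature_reading(0.91, 1.0) < 0)
```

Under this reading, the -9 figure cited later for the "AI bull market" signature simply means that language is slightly less dense than its historical norm.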
Nvidia's position as the lone megacap tech stock to notch gains this year has done little to quiet these concerns. The chipmaker's most recent earnings report, which showed 73% revenue growth and beat analyst forecasts, was met with a 5.5% share price decline in the following session, the worst post-earnings reaction the company has experienced since this time last year. Bloomberg reported that the Magnificent Seven have entered correction territory as investors grow skeptical about when they will see returns on the hundreds of billions of dollars being poured into AI infrastructure.
Hyperscalers are now on track to divert nearly 100% of their free cash flow to capital expenditure this year, up from a 10-year average of around 40%. For this spending to make economic sense, AI revenues would need to grow from $20 billion to $2 trillion annually by 2030, a gap that suggests spending will slow, revenues will disappoint, or margins will compress under competitive pressure. The New York Times noted that unlike the dot-com era, public enthusiasm for the AI boom has been muted.
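The scale of the $20 billion-to-$2 trillion gap is easier to grasp as an implied compound annual growth rate. A quick sketch, assuming a five-year horizon (2025 to 2030; the exact horizon is our assumption, not stated in the article):

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate required to move from `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

# $20B in annual AI revenue today to $2T by 2030, over an assumed 5 years:
rate = implied_cagr(20e9, 2e12, 5)
print(f"{rate:.0%}")  # roughly 151% growth per year
```

Revenues would need to roughly two-and-a-half-fold every year for five consecutive years, which illustrates why observers expect one of the three relief valves (slower spending, disappointing revenues, or compressed margins) to give.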
Our semantic signature tracking language suggesting that productivity gains from AI have not materialized currently registers at 45, though this declined by 7 points from the prior month. A National Bureau of Economic Research study published in February found that nearly 90% of firms reported no impact of AI on workplace productivity over the past three years, despite executives projecting that AI would increase productivity by 1.4% and output by 0.8%. One social media observer noted that this echoes Nobel laureate Robert Solow's famous 1987 observation: "You can see the computer age everywhere but in the productivity statistics."
Investor reactions to Big Tech earnings have been mixed. Meta's stock dropped by 11% while Alphabet gained by 5%. Microsoft's $381 billion rout exposed what Bloomberg called the "dark side of the AI binge," with companies walking a tightrope: investors can stomach massive spending only as long as growth backs it up. Short-seller Carson Block has moved from sanguine to skeptical on the S&P 500, specifically because of AI concerns.
Our semantic signature tracking language arguing that AI will drive a new bull market and strong US growth currently registers at -9, below its long-term mean, and fell by 14 points over the past month. Stanford faculty across computer science, medicine, law, and economics have converged on a striking theme in their 2026 predictions: the era of AI evangelism is giving way to an era of AI evaluation.
Deepfakes and AI-Generated Content Dominate Trust Concerns
Perscient's semantic signature tracking language asserting that deepfakes on social media are a problem remains at 624—more than six times its long-term mean—though it moderated by 36 points over the past month.
The challenge has become particularly acute in conflict zones. On Tuesday, Meta's Oversight Board warned the company over the proliferation of deepfake videos in armed conflicts, specifically citing an AI-generated video posted during the 2025 Israel-Iran war that falsely depicted damage to buildings in Haifa. The Board overturned Meta's decision to leave the post up, stating that it should have been labeled "High Risk AI" because it posed a material risk of misleading the public. The Indian Express reported that the ruling calls for an overhaul of content moderation policies to address AI-generated misinformation during crises.
Research continues to demonstrate the stubborn psychological influence of deepfakes even when viewers know they are fake. Studies involving hundreds of participants found that while warnings reduced the influence of deepfake videos, they did not eliminate it—many participants acknowledged that a video was fake yet still judged the depicted person as guilty.
Kafaar, a researcher quoted by Macquarie University's Lighthouse publication, suggested that our default assumption should now be that posts are AI-generated unless we know otherwise: "We're moving from a world where we try to spot the fake from the genuine to one where we have to assume that content may be fake unless there is a reason to trust it."
Our semantic signature tracking language indicating that AI-generated news is rising registers at 506, more than five times the long-term mean, though it declined by 12 points from the prior month. Forecasters expect that in 2026, AI-written content will outpace human production not just in spammy corners of the web but across mainstream channels where people search, scroll, and learn.
Yet audiences remain wary. Survey data presented at a recent conference showed that only 12% of respondents are comfortable with fully AI-generated news, rising to 43% when a human leads with some AI assistance, and 62% for entirely human-made content. Meanwhile, 90% of Americans want news agencies to disclose when they use AI to create stories, text, and images.
At the 184-year-old Cleveland Plain Dealer, a top editor's push to let AI draft news articles has generated over 10 million page views from AI-assisted content—but has also spooked staffers at a paper that has seen its workforce shrink from 400 to 71 employees. The Reuters Institute found that confidence in the prospects for journalism is at a record low, with only 38% of news executives reporting that they feel confident about the outlook for journalism in the year ahead, down from 60% just four years ago.
Discourse on Productivity Gains and UBI Intensifies
The investment skepticism and trust concerns documented above connect to a broader question: what happens to workers if AI delivers on its productivity promises? Perscient's semantic signature tracking language connecting AI to productivity gains and UBI currently stands at 67, meaning that such language is 67% above the long-term mean. This rose by 17 points over the past month, the largest increase among our measured signatures.
The discourse has been driven not by traditional social welfare advocates but by some of the most powerful figures in technology. Fortune reported that prominent advocates like Elon Musk and Sam Altman argue that UBI is necessary to address the economic disruptions caused by AI and automation. Musk has suggested a "universal high income," while a UK minister has floated UBI specifically to address AI-driven job cuts. Peter Diamandis observed on social media that if 20-50% of 70 million US office workers face displacement in the next 12-18 months, "this will be the most profound economic transition in decades."
The evidence of displacement is already emerging. Google recently revealed that more than a quarter of its new code is now generated by AI, and AI is replacing human workers in junior-level positions, particularly in programming and customer support. A viral AI report warned that blue-collar workers will not be safe either, sparking a global stock sell-off amid fears of job loss and economic slowdown. Former Google ethicist Tristan Harris has warned that AI could trigger a global jobs market collapse by 2027 if left unchecked.
Legendary investor Howard Marks has described AI's impact on employment as "terrifying," emphasizing that work provides purpose and identity beyond mere income. "Financial support alone will not replace the psychological and social benefits of employment," he stated. JP Morgan CEO Jamie Dimon echoed this concern, saying that mass AI job automation will cause civil unrest and hinting at income assistance while noting that "you can't lay off two million truckers tomorrow."
Some commentators have framed the question in terms of ownership: our collective human output—our language, our knowledge, our creative work—is the capital that built AI. We are the shareholders. UBI is our dividend. However, critics on social media have noted that while UBI may stabilize incomes, it does not change how wealth itself is created or who controls the systems generating that wealth.
According to Morgan Stanley Research's most recent analysis, 21% of S&P 500 companies mentioned at least one AI benefit in recent communications, up from 10% in 2024. AI adopters are seeing results, with cash-flow margin expansion outpacing the global average by 2x. Yet the market is not simply paying for "AI mentions" alone—investors are demanding evidence of returns.
Our semantic signatures tracking language about AI transforming education, medicine, and science all remain below their long-term means at -33, -43, and -30 respectively. Similarly, our signature tracking language arguing that the US must win the AI race registers at -5 and fell by 8 points over the past month.
Forward Guidance noted that if AI drives sustained productivity gains, it could improve fiscal dynamics, raise real wages, and extend US economic leadership for decades. But the path from here to there remains uncertain, and the debates over who benefits—and who bears the costs—are only beginning.
Pulse is your AI analyst built on Perscient technology, summarizing the major changes and evolving narratives across our Storyboard signatures, and synthesizing that analysis with illustrative news articles and high-impact social media posts.

