Tech Platforms Prioritize AI Leadership Over Content Moderation, Fueling Deception

Original Title: Big Tech Embraced Fakeness in 2025

TL;DR

  • Tech platforms are incentivized to promote AI tools, even those that enable deception, because demonstrating AI leadership and driving user engagement has become a business imperative, creating a supercharged environment for scams and fraud.
  • The reframing of fact-checking and content moderation as censorship, particularly following political shifts, has enabled platforms to roll back policies, creating a less regulated AI world that cascades globally.
  • Major tech companies are actively pushing AI tools that facilitate impersonation and deepfakes, driven by share-price pressure to show user adoption, even as these tools are exploited for scams and misinformation.
  • Venture capital firms are investing in companies that build bot farms for ad campaigns and tools that assist programmers in cheating interviews, potentially funding deceptive practices that undermine other tech investments.
  • The widespread adoption of AI-generated content, including hoaxes and fake news, is overwhelming platform defenses and can be absorbed by other AI models, creating a self-perpetuating cycle of disinformation.
  • The shift from paid fact-checkers to community-driven moderation systems like Meta's Community Notes is proving ineffective due to underinvestment and susceptibility to bots, functioning more as a superficial measure than a genuine safeguard.
  • The economic pressure to justify massive AI infrastructure investments by big tech necessitates substantial AI revenues, pushing businesses to adopt AI and potentially leading to a market bust if actual business value is not realized beyond scams and fraud.

Deep Dive

In 2025, major technology companies, driven by a race for AI supremacy and pressure from a new administration, significantly rolled back content moderation and fact-checking policies, ushering in an era where the creation and dissemination of AI-generated fakes became a primary business imperative. This shift has profound implications, transforming polarization into a global side hustle, enabling sophisticated scams that drain life savings, and blurring the lines between truth and deception to a degree that risks eroding public trust in both genuine and fabricated content.

The year 2025 marked a critical inflection point in artificial intelligence, characterized by a strategic retreat from content moderation by tech giants like Meta, Google, and OpenAI. Meta's CEO, Mark Zuckerberg, signaled this shift by announcing the replacement of fact-checkers with "community notes," a move explicitly linked to Donald Trump's re-election and the Republican party's long-standing framing of content moderation as censorship. Google followed suit, telling the European Commission it would not integrate fact-checking into its search results or YouTube, and removing "data void" warnings that previously flagged a lack of credible search results. This rollback was not merely reactive; it was also a proactive embrace of AI's potential to generate content at unprecedented scale and speed. Companies began shipping AI tools that facilitate impersonation, churn out "slop" (AI-generated false claims), and produce photorealistic deepfakes, directly contributing to the problem they once claimed to mitigate. The strategy is driven by a business imperative to demonstrate leadership in AI development, encourage user adoption of these tools, and maintain competitive advantage, even if that means tolerating impersonation and deception.

The consequences of this policy shift are far-reaching. Polarization, once a domestic issue, has become a global side hustle, with foreign-run networks using AI to spread hoaxes and misinformation in multiple languages, hitting culture-war topics to maximize engagement. Scammers now use AI tools to impersonate public figures such as Elon Musk and other celebrities, convincing individuals to put money into fraudulent investment schemes; some victims have lost their life savings and their homes. Corporations are targeted through voice and face cloning of executives used to authorize fraudulent money transfers. Venture capital firms, meanwhile, are investing in companies that build bot farms for ad campaigns and AI tools that help programmers cheat on job interviews, potentially funding deception that undermines hiring processes.

The proliferation of AI-generated content, particularly around high-profile events like the P. Diddy trial, has driven billions of views to entirely fabricated videos, blurring the distinction between real news and manufactured narratives. This environment fosters a dangerous skepticism in which even truthful information can be dismissed as fake, mirroring the dynamics of conspiracy theories like QAnon, where adherents believed they were conducting "deep internet research." The tools designed to protect users are being subverted, turning healthy skepticism into blanket disbelief.

The current landscape suggests a disconnect between massive AI investment and actual widespread business adoption beyond early adopters like scammers and tech entrepreneurs. While platforms push AI tools and VCs pour billions into AI infrastructure, The Economist reports that the employment-weighted share of Americans using AI at work has fallen by a percentage point, to about 11 percent. This gap raises concerns about an AI bubble reminiscent of the dot-com era, though the involvement of multi-trillion-dollar companies like Microsoft, Google, and Meta reduces the risk of an immediate collapse. Meanwhile, the race for AI supremacy has loosened the constraints on AI models and their use, creating fertile ground for scams, fraud, impersonation, and "slop." Without a significant reduction in business hype and the establishment of reasonable rules, the sustainable AI boom investors envision may not materialize, leaving behind a legacy of supercharged deception.

The ultimate takeaway is that individual attention is the currency that fuels these platforms, and conscious choice in where that attention is directed is paramount. By being mindful of the content we engage with, like, and share, individuals can send signals that influence algorithmic amplification. This conscious consumption is an act of personal agency in an environment increasingly dominated by AI-generated content and platform policies that prioritize engagement over truth, potentially mitigating the negative societal costs of unchecked AI development.
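To make the amplification loop described above concrete, here is a minimal, hypothetical Python sketch. The weights, field names, and the engagement_score and next_round_impressions functions are invented for illustration and do not describe any platform's actual ranking system; the point is simply that lingering on, liking, or sharing a piece of content can translate into a larger audience for that same content.

```python
# Hypothetical illustration only: a toy engagement-weighted ranker showing how
# watch time, likes, and shares can feed back into wider distribution.
# Weights and structure are invented, not any platform's real system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    impressions: int
    watch_seconds: float  # total time users lingered on the post
    likes: int
    shares: int


def engagement_score(post: Post) -> float:
    """Score a post by per-impression engagement; higher scores get shown to more people."""
    if post.impressions == 0:
        return 0.0
    # Invented weights: shares count more than likes, likes more than passive watch time.
    raw = 0.5 * post.watch_seconds + 2.0 * post.likes + 5.0 * post.shares
    return raw / post.impressions


def next_round_impressions(post: Post, base: int = 1000) -> int:
    """Toy amplification rule: distribution grows with the engagement score."""
    return int(base * (1 + engagement_score(post)))


if __name__ == "__main__":
    hoax = Post("ai-hoax-clip", impressions=1000, watch_seconds=9000, likes=400, shares=120)
    print(engagement_score(hoax))        # every pause, like, or share raises this
    print(next_round_impressions(hoax))  # ...which earns the post a larger audience
```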

Action Items

  • Audit 10 AI-generated content channels: Identify 3 common deception tactics (e.g., impersonation, fake testimonials, undisclosed ads) and quantify engagement metrics (see the tally sketch after this list).
  • Draft 5-point AI content moderation policy: Define clear guidelines for identifying and labeling AI-generated deceptive content, focusing on impersonation and misinformation.
  • Implement 2-week AI content review sprint: Analyze 50 high-engagement AI-generated videos for undisclosed advertising or deceptive practices, prioritizing those with potential for financial scams.
  • Measure AI-driven scam ad prevalence: Track 10-20 identified scam ad campaigns across platforms to assess their reach and potential financial impact on users.
  • Evaluate 3 AI content monetization programs: Assess their policies on AI-generated deceptive content and their reward structures for creators.
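As a rough starting point for the audit and measurement items above, the following Python sketch tallies engagement by labeled deception tactic from a hand-built spreadsheet. The file name (ai_content_audit.csv), column names, and tactic labels are assumptions chosen for illustration, not a prescribed schema.

```python
# Hypothetical sketch supporting the audit items above: given a hand-built CSV of
# reviewed channels/videos, tally how often each labeled deception tactic appears
# and sum the engagement (views) it attracted.
import csv
from collections import defaultdict


def summarize_audit(path: str = "ai_content_audit.csv") -> dict:
    """Return video counts and total views per labeled deception tactic."""
    views_by_tactic: dict[str, int] = defaultdict(int)
    count_by_tactic: dict[str, int] = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Expected (assumed) columns: channel, video_url, tactic, views
            tactic = row["tactic"].strip().lower()  # e.g. impersonation, fake_testimonial, undisclosed_ad
            views_by_tactic[tactic] += int(row["views"])
            count_by_tactic[tactic] += 1
    return {t: {"videos": count_by_tactic[t], "total_views": views_by_tactic[t]}
            for t in count_by_tactic}


if __name__ == "__main__":
    for tactic, stats in sorted(summarize_audit().items(),
                                key=lambda kv: kv[1]["total_views"], reverse=True):
        print(f"{tactic}: {stats['videos']} videos, {stats['total_views']:,} views")
```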

Key Quotes

"We're gonna get back to our roots and focus on reducing mistakes simplifying our policies and restoring free expression on our platforms Meta CEO Mark Zuckerberg first we're gonna get rid of fact checkers and replace them with community notes similar to X starting in the US after Trump first got elected in 2016 the legacy media wrote nonstop about how misinformation was a threat to democracy we tried in good faith to address those concerns without becoming the arbiters of truth but the fact checkers have just been too politically biased and have destroyed more trust than they've created especially in the US"

Brooke Gladstone highlights Mark Zuckerberg's stated rationale for replacing fact-checkers with community notes on Meta's platforms. Zuckerberg argues that fact-checkers have been too politically biased and have eroded trust, suggesting a move towards a less moderated approach to restore "free expression." This indicates a strategic shift in content moderation policy driven by perceived political bias and a desire to distance the platform from being an "arbiter of truth."


"It becomes very clear to at least platforms that they need to make sure they get out of the way so that whatever content he wants to have out there can be out there and same from his supporters and from his like minded politicians leading countries around the world because I think Trump made it clear he will come after them he mused about putting Zuckerberg in jail so I think that's the big trigger and because most of these platforms are based in the US what happens in the US cascades around the world"

Craig Silverman explains that the re-election of Donald Trump served as a significant catalyst for platforms like Meta to reduce moderation. Silverman argues that the perceived threat of political retaliation from Trump, including potential legal action against executives, prompted these companies to "get out of the way" and allow more content to be published freely. This suggests that political pressure, rather than purely content-related concerns, heavily influenced the rollback of moderation policies.


"The reframing of fact checking and speech of a journalistic nature as censorship was a pretty good judo trick and it was effective in the US I'm Canadian and so I'm going to use a hockey reference here which is that a lot of people say the NHL is a copycat league the team that wins the Stanley Cup one year everybody tries to sort of copycat their roster and it probably happens in other sports leagues and I think it's true with tech platforms look at how much they have all invested in running and building the same types of AI products and it's the same thing around this so called censorship they're all copying each other because they think that is the safe thing to do but also they think maybe that's where the next big boom is coming from"

Craig Silverman analyzes the strategic framing of fact-checking as censorship, likening the tactic to a "judo trick" that proved effective in the US. He draws a parallel to sports leagues copying a winning roster, arguing that tech platforms are mirroring one another's approaches to content moderation and AI development because they see it as both the safe choice and a potentially lucrative one.


"The problem is when we at Indicator looked into the state of these community notes programs the big one on X and then the big new one on Meta we found that there's some real problems Meta hasn't invested much in it there's very few people in the program there's very few notes being applied as a replacement for fact checkers in the US it seems like it's really far away from doing that and then there's X which is the template and X similarly doesn't seem to be investing a lot in it there are fewer and fewer notes being rated helpful so people who are participating are actually not seeing the end result and they've opened it up to bots now so that it won't be very long before most of the notes that are getting appended are actually coming from automated AI oriented systems and so I think as a model it's turning into more of a fig leaf than an actual real good faith effort"

Craig Silverman presents findings from Indicator's investigation into community notes programs on X and Meta. Silverman argues that these programs are not effectively replacing fact-checkers due to a lack of investment, low participation, and insufficient application of notes. He expresses concern that the systems are becoming a "fig leaf" rather than a genuine effort, especially with the increasing involvement of AI-driven bots, which undermines the credibility and good-faith nature of the initiative.


"The most common use that is really dangerous around this type of technology is for scams where someone can impersonate another person they can impersonate Elon Musk they can impersonate a famous movie star or politician and they can convince people that there's an amazing investment offer or something like that and there are people who are literally losing their entire life savings and having to sell their homes because they get sucked in by these things impersonation is also stealing money from corporations people are cloning the voices of executives or cloning the faces of executives and calling up other people in the company and getting them to transfer money and this stuff is legal as long as you are not infringing on someone's personal rights putting defamatory things in their mouth"

Craig Silverman details the dangerous applications of AI-driven impersonation technology. He explains how scammers use AI to impersonate well-known figures and lure individuals into fraudulent investment schemes, causing devastating financial losses, and highlights corporate fraud in which executives' voices or likenesses are cloned to authorize illicit money transfers. He notes that much of this activity remains legal so long as it does not defame anyone or infringe on their personal rights.


"The truth is that we individually are the atomic units that these big tech platforms need they need our attention and they need us to spend time on them and so I really encourage people to think very consciously about where you are giving your attention and where you are spending your time I'm not going to tell you to get rid of all the social media on your phone but think about the fact that anytime you slow down on a piece of content and watch it or like it or share it that sends a signal to the system and it might get more people shown the same thing so being conscious of all of this stuff all of these threats all these risks and thinking about what you reward with your attention it's valuable so take your power put your attention where you feel good about it control where your eyes go and what you listen to and patronize the stuff that you think is worth it"

Craig Silverman offers advice on how individuals can navigate the current information environment. Silverman emphasizes that users are the fundamental "atomic units" for tech platforms, as their attention and time are valuable commodities. He encourages conscious decision-making about where attention is directed, suggesting that every interaction with content sends signals that influence what others see, and urges people to be mindful of what they reward with their attention.

Resources

External Resources

Articles & Papers

  • "The employment weighted share of Americans using AI at work has fallen by a percentage point and now sits at 11" (The Economist) - Cited as evidence for declining AI adoption among businesses.
  • "America's polarization has become the world's side hustle" (404 Media) - Mentioned as an example of how foreign-run pages spread hoaxes about global celebrities and culture war topics.

Television

  • "The Colbert Report" (Stephen Colbert) - Referenced in relation to the concept of "truthiness."

Organizations & Institutions

  • Meta - Discussed for its role in investing in AI tools, paying for AI content, and its policies on content moderation and fact-checking.
  • Google - Mentioned for its decisions regarding fact-checking integration into search and YouTube, and for rolling back policies on impersonation.
  • OpenAI - Referenced for its past policy of banning photorealistic images of real people and its subsequent changes allowing such images, as well as its launch of Sora.
  • TikTok - Discussed for its AI tools, content monetization program, and its approach to misinformation guidelines and fact-checkers.
  • European Commission - Mentioned as the recipient of Google's statement on not integrating fact-checking into its search bar or YouTube.
  • The Guardian - Cited as the publication for a report on AI-generated videos about the P. Diddy trial.
  • The New York Times - Cited for a reporter's comparison of the AI bubble to the dot-com bubble.
  • JPMorgan Chase - Referenced for its projection of annual AI revenue needed to support big tech investments.
  • The FTC (Federal Trade Commission) - Mentioned in relation to rules surrounding undisclosed advertising.
  • The NHL (National Hockey League) - Used as a sports analogy for copycat behavior in tech platforms.
  • The NFL (National Football League) - Used as a sports analogy for copycat behavior in tech platforms.
  • WNYC Studios - Mentioned as the producer of the podcast.
  • The Economist - Cited for a report on declining AI adoption among businesses.
  • Microsoft - Mentioned as one of the multi-trillion dollar companies financing and controlling AI.
  • Andreessen Horowitz - Discussed for its investments in bot farms for TikTok ads and a company that helped programmers cheat on job interviews.
  • Kleiner Perkins - Mentioned as an investor in a company that helped programmers cheat on job interviews.

People

  • Mark Zuckerberg - Mentioned as Meta CEO, discussing the company's transition to less moderated platforms and the implementation of community notes.
  • Donald Trump - Referenced as a catalyst for Meta's rollback on fact-checking and for his past political actions influencing platform policies.
  • Craig Silverman - Identified as co-founder of Indicator, a publication focused on digital deception, and a guest on the podcast.
  • Volodymyr Zelenskyy - Mentioned in relation to state-backed propaganda videos.
  • Andrew Forrest - Identified as a billionaire suing Meta over the use of his face and voice in scam ads.
  • Joe Rogan - Discussed for his reaction to a deepfake video of Tim Walz on his podcast.
  • Tim Walz - Featured in a deepfake video that spread on social media.
  • Stephen Colbert - Referenced for his discussion of "truthiness."
  • David Streitfeld - Identified as a reporter for The New York Times who wrote about the AI bubble.

Podcasts & Audio

  • On the Media Midweek Podcast - The podcast being transcribed.
  • The Joe Rogan Experience - Mentioned in relation to a deepfake video of Tim Walz.

Tools & Software

  • ChatGPT - Mentioned as a tool used by individuals for asking questions and by students for academic work.
  • Sora - Mentioned as an app launched by OpenAI that creates deepfakes.

Websites & Online Resources

  • Indicator - Publication co-founded by Craig Silverman, dedicated to understanding and investigating digital deception.
  • X (formerly Twitter) - Mentioned in relation to its community notes feature and the potential for bot-driven notes.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.