AI Slop Erodes Internet Trust, Demanding New Media Literacy

TL;DR

  • The proliferation of AI-generated content, termed "slop," fundamentally alters our relationship with the internet: the default stance shifts from belief to skepticism, overwhelming traditional media literacy skills.
  • Digital platforms' design incentives prioritize engagement over accuracy, meaning they lack economic motivation to combat AI-generated misinformation, exacerbating existing societal distrust.
  • The speed and sophistication of AI tools have outpaced human ability to develop effective discernment skills, making it nearly impossible for individuals to reliably identify AI-generated content.
  • While AI can mimic reality, it lacks the authentic emotional resonance of human-created art and media, leading to a disconnect that may eventually reduce user engagement.
  • Legacy media organizations have an opportunity to rebuild trust by emphasizing human verification processes, but face an uphill battle against platforms designed for rapid, uncritical consumption.
  • The increasing difficulty in distinguishing AI content from reality forces a re-evaluation of what constitutes genuine human experience and connection, potentially leading to a renewed appreciation for authentic creation.
  • Corporations are beginning to proactively demonstrate human involvement in their content creation to counter distrust, signaling a societal shift towards valuing the "human hand" in art and media.

Deep Dive

The proliferation of AI-generated content, or "AI slop," is fundamentally altering our relationship with the internet by eroding the default assumption of authenticity. This shift from "believing by default" to "skepticism by default" is not merely an aesthetic concern but a systemic challenge to information consumption, trust, and even our understanding of human connection. While AI can mimic reality and engage users emotionally on a superficial level, its lack of genuine human experience means it fails to communicate authentic emotional resonance, creating a profound disconnect that media organizations and individuals must navigate.

The core implication of AI-generated content is a pervasive breakdown in trust, exacerbated rather than created by the technology. The speed, sophistication, and sheer volume of AI output have outstripped our capacity to develop effective individual or societal defenses, rendering traditional media literacy skills insufficient. This dynamic presents a significant challenge for media organizations, as social media platforms, driven by engagement metrics rather than informational integrity, have little economic incentive to foster media literacy. Instead, their design prioritizes emotional appeal and ease of use, making them fertile ground for AI content that can capture attention without requiring genuine human connection or verified information. This creates a dangerous feedback loop where distrusted institutions and AI-generated content mutually reinforce each other, leading to a societal fragmentation where agreement on even real imagery becomes impossible.

The long-term consequence of this "AI slop" is a heightened, and perhaps necessary, re-evaluation of what constitutes authentic human experience and connection. While the "dark days" of wading through AI-generated noise may be ahead, this inundation could ultimately foster a greater appreciation for human-crafted content and genuine interaction. Media organizations can leverage their existing infrastructure for verification and fact-checking to become trusted sources in this environment. For consumers, the key lies in developing a reflexive skepticism, interrogating content that elicits an overly strong positive or negative reaction, and prioritizing sources that demonstrate a commitment to human oversight. Ultimately, the challenge of AI slop may compel a deeper understanding of the value of human authorship, intention, and emotional resonance, driving a renewed desire for authenticity in a digitally saturated world.

Action Items

  • Audit AI content: Identify 3-5 common visual or textual artifacts that signal AI generation across 10 examples.
  • Create media literacy guide: Define 5 key questions for evaluating online content authenticity (ref: digital technology affordances).
  • Track user engagement metrics: Measure the correlation between AI-generated content exposure and user session duration for 3-5 content types (see the sketch after this list).
  • Develop content verification checklist: Outline 5 steps for journalists to confirm image/video authenticity before publication.
  • Evaluate platform design: Analyze 3 social media platforms for features that either promote or hinder media literacy.
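
For the engagement-tracking item above, here is a minimal sketch of one way to run that analysis. It assumes a hypothetical sessions.csv with one row per session and columns content_type, ai_share, and duration_min; all of these names are illustrative, not from the episode.

```python
# Minimal sketch, not a definitive method: estimates the correlation
# between AI-content exposure and session duration per content type.
# Assumes a hypothetical sessions.csv with columns:
#   content_type  - e.g. "video", "image", "text"
#   ai_share      - fraction of AI-generated items shown (0.0-1.0)
#   duration_min  - session length in minutes
import pandas as pd

df = pd.read_csv("sessions.csv")

for content_type, group in df.groupby("content_type"):
    # Pearson correlation; swap in method="spearman" if the
    # relationship looks monotonic but non-linear.
    r = group["ai_share"].corr(group["duration_min"])
    print(f"{content_type}: r = {r:.2f} (n = {len(group)})")
```

Note that a correlation alone will not establish causation; pairing it with an A/B comparison of feeds with and without AI-generated content would make the finding stronger.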

Key Quotes

"I am easily convinced to quickly reshare any sort of funny you know man on the street content and so there was one of a guy saying something that I thought was very funny in a way that I found hilarious and I shared it very quickly this was Instagram I'm pretty sure but something in my gut said that was too funny you know it was too perfect for me and I went back and rewatched it and then caught the sort of unnatural emotion on the face which I think is for the time being anyway still a tell for AI slop and so I unshared it so that I wouldn't participate in the AI slop economy but my defenses are much lower when the content is funny which I suspect is true for all of us."

Tressie McMillan Cottom describes a personal experience of being fooled by AI-generated content, highlighting how humor can lower defenses and lead to the unwitting sharing of "AI slop." She notes that an "unnatural emotion on the face" was a tell, and that her defenses are lower when content is funny, a sentiment she suspects is common.


"Well I think honestly I'm really sad to admit this but I was fooled by the photographs from last weekend of the Nicolas Maduro capture in Venezuela there were a few that went around that were horizontal and show him handcuffed outside of a of a plane and the reason that I believed them to be real and I believed them to be real for like 24 hours was because they had been shared on Instagram by someone who I trusted and that person had created a reel about how they looked very similar to Saddam Hussein pictures from 2003 so I had just kind of gone with it and when we see images on these platforms we're seeing them very small we're seeing them very quickly and both of those things make it very hard to sit with an image and decode it and make sure that it's real."

Emily Keagan recounts being deceived by AI-generated images of Nicolas Maduro's capture, explaining that her trust in the person who shared them and the rapid, small-scale consumption of content on platforms like Instagram made it difficult to discern the fakes. She emphasizes that the platform's design, which encourages quick viewing, hinders critical evaluation of images.


"I think one of the things that people certainly in my world who think about the digital space and the social world are pretty much in agreement about is that this is not a problem that developing the right skill set is going to solve as Emily points out everything about the affordances of digital technology meaning what the app or the tool allows you to do how it sets up and controls and directs your attention is designed to overcome pretty much anything that we would train a person to do."

Tressie McMillan Cottom argues that developing individual skill sets will not solve the problem of AI-generated content, as digital technologies are designed to capture and direct attention in ways that override human critical abilities. She explains that the "affordances of digital technology" are built to overcome our capacity for discernment.


"I would say I'd be even more pointed and say there is no economic incentive for these platforms to do a better job of making consumers more informed and making them more media literate in fact the incentives are in the other direction so when the tools become cheap enough and accessible enough which is what we have seen with things like you know grog or sora which is really popular with to manipulate video and photo images in particular is that the ability to manipulate images has existed for a very long time it is now however sort of democratized it is available to so many more actors."

Emily Keagan asserts that social media platforms lack economic motivation to improve media literacy among users, and in fact, their incentives lie in the opposite direction. She notes that the increasing accessibility and affordability of AI tools for image and video manipulation have democratized this capability, making it available to a wider range of actors.


"I think the feeling that we are having when we see an AI image or an AI video is very similar to the feeling we have when we read text written by AI which is the words can be in the right order you can recognize the form of it as being a sentence a paragraph a story or a book but you do not have the appropriate emotional response to it now we can get mystical and say that there's something about the human spirit right that we infuse into our art and I am not disinclined to believe that but I think that whatever that process is AI can look like reality but it cannot communicate emotionally to us in a way that's that resonates as being authentic."

Tressie McMillan Cottom suggests that the emotional disconnect experienced with AI-generated images and text stems from AI's inability to imbue its creations with authentic human emotion, even if they mimic reality aesthetically. She posits that while AI can replicate the form of communication, it cannot convey the emotional resonance that connects with audiences.


"I think the reason you know that there's not a shark swimming in the flood outside your door is because it's not in the papers of the new york times I think that at this point the only way to know if an image is quote unquote real is if the person who's trafficking it is a place or person that you trust and is verified so listen you're on tiktok at 2 am and you think that the cute baby saying the you know the funny thing you know if it's funny no harm no foul right I think the question becomes when do we allow the image to move us to act whether that means we share it whether that means we get very angry or anxious about you know this is some you know major news development right then the question becomes wait is this real that question becomes all the more important."

Tressie McMillan Cottom advises that in the current landscape, the trustworthiness of the source is the primary indicator of an image's authenticity, rather than inherent visual cues. She suggests that the critical question arises when content prompts action, at which point verifying the source becomes paramount, especially when engaging with content on platforms like TikTok.

Resources

External Resources

Books

  • "The Age of Surveillance Capitalism" by Shoshana Zuboff - Mentioned as a foundational text for understanding digital technologies and their impact on society.

Articles & Papers

  • "The Internet May Look Different After You Listen to This" (The Opinions podcast) - Discussed as the context for a conversation about AI's impact on authenticity and trust online.
  • "The New York Times App" (New York Times) - Mentioned as an example of a well-designed app that provides immediate navigation to content and exposes users to new material.

People

  • Adam Mosseri - Head of Instagram, mentioned for a post about authenticity and AI.
  • Tressie McMillan Cottom - Columnist, joined the conversation to discuss AI's impact on the internet.
  • Emily Keagan - Creative consultant, joined the conversation to discuss AI's impact on the internet.

Organizations & Institutions

  • New York Times Opinion - Mentioned as the source of voices for the podcast "The Opinions."
  • Instagram - Mentioned as a platform where AI-generated content is encountered.
  • YouTube - Mentioned as a platform where a significant percentage of videos shown to new users are AI-generated.
  • Merriam-Webster - Declared "slop" as 2025's word of the year.
  • Apple - Mentioned for creating advertisements that show behind-the-scenes footage to prove human involvement.

Websites & Online Resources

  • nytimes.com/app - Mentioned as the download location for The New York Times app.

Other Resources

  • AI (Artificial Intelligence) - The central theme of the discussion, focusing on its impact on authenticity, trust, and media.
  • Wordle - Mentioned as a game played within The New York Times app.
  • Authenticity - Discussed in relation to AI-generated content and its effect on user perception.
  • AI Slop - A term used to describe low-quality or unconvincing AI-generated content.
  • Web 2.0 / Web 1.0 - Referenced as previous stages of the internet where different literacies were taught for information consumption.
  • Dot org / Dot edu / Dot gov - Mentioned as formerly trusted website domain types.
  • Dot com - Mentioned as a website domain type that is now less trusted.
  • Social Institutions - Discussed in the context of declining trust, which exacerbates the impact of AI-generated content.
  • Real Video / Real Photographs - Contrasted with AI-generated content, particularly in the context of events in Minnesota.
  • Print Media - Discussed as having historically developed methods for contextualizing images for viewers.
  • Tech Platforms (Social Media) - Mentioned as platforms built around images and trafficking of images, with little design work to aid viewer understanding.
  • X (formerly Twitter) - Mentioned as a platform where text sharing led to a decline in trust due to AI-generated content.
  • Grok / Sora - Mentioned as tools for manipulating video and photo images.
  • Photography - Discussed as a medium that is interesting due to its creation process and basis in reality, contrasted with AI.
  • Art - Discussed in terms of whether AI can create it, and the role of the human prompt versus the AI output.
  • Analog - Mentioned as an aesthetic trend returning in reaction to AI imagery.
  • Zines - Mentioned as an example of a craft that people engage with.
  • Puppets / Puppeteers - Used as an analogy for human involvement in creation, contrasted with AI.
  • "Is it Cake?" - Referenced as a reality show that highlights the difficulty in discerning real from fake.
  • Essays - Mentioned as a form of writing where AI-generated content can be detected.
  • Human Hand - Emphasized as a valued element in art creation, contrasting with machine-made objects.
  • Machine-Made Objects - Discussed in relation to their value compared to human-made art.
  • Shark in Floodwaters Photograph - Used as an example of a recurring fake image that people tend to forget is not real.
  • TikTok - Mentioned as a platform where AI content might be encountered.

---
Handpicked links, AI-assisted summaries. Human judgment, machine efficiency.
This content is a personally curated review and synopsis derived from the original podcast episode.