Social Media Algorithms Amplify User Behavior, Eroding Agency and Discourse
TL;DR
- Social media algorithms exploit human insecurities and desires for belonging, transforming curiosity into comparison and outrage into entertainment, which erodes user agency and mental well-being.
- Algorithmic amplification of emotional and divisive content, driven by engagement metrics, leads to echo chambers and polarization, as seen in studies showing faster spread of false news.
- The design of social media platforms, including features like autoplay and infinite scroll, intentionally creates frictionless engagement loops that extend user session times and foster addictive behavior.
- User clicks and engagement patterns are the primary data source for algorithms, meaning users actively co-create their feeds, inadvertently reinforcing biases and limiting exposure to alternative viewpoints.
- Even without algorithms, human social dynamics on platforms tend toward echo chambers and amplification of extreme voices, suggesting that social media's structure itself may be inherently hostile to healthy discourse.
- Platforms can mitigate negative impacts by introducing friction before sharing, offering chronological feeds by default, and increasing algorithmic transparency through independent audits.
- Individuals can reclaim agency by consciously curating their feeds, diversifying content sources, avoiding the phone first thing in the morning, and practicing presence with positive experiences.
Deep Dive
Social media algorithms, rather than being intelligent manipulators, are sophisticated feedback loops that amplify user behavior, leading to detrimental effects on individual well-being and societal discourse. The core argument is that while algorithms are designed to maximize engagement by serving content users are likely to interact with, this system inadvertently exploits human vulnerabilities like insecurity and outrage, creating a cycle that trains both the user and the algorithm. The second-order implication is that this dynamic erodes personal agency and can lead to increased anxiety, comparison, and polarization, suggesting that the problem is not solely technological but deeply rooted in human nature and the economic incentives of social media platforms.
The system functions by observing user behavior--every pause, click, like, and share--to predict and serve more of what keeps users engaged. This predictive power, combined with amplification of high-engagement content, particularly emotional content like outrage, trains users to produce more divisive or insecure content to gain attention. For instance, research indicates that false news spreads significantly faster than true news because shocking content elicits more clicks and shares, which the algorithm then promotes further. Similarly, studies show that posts containing negative emotional words are shared more frequently, indicating that outrage itself becomes a form of social currency that platforms exploit. This creates a feedback loop where users are nudged into addictive scrolling through features like infinite scroll and autoplay, encouraged to produce outrage for social rewards, and pushed towards extremist or conspiratorial content by recommendation engines.
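To make that feedback loop concrete, here is a minimal sketch in Python. The content types and per-type click probabilities are hypothetical illustrations, not data from the episode; the point is only that a ranker trained solely on clicks turns a mild negativity bias into a feed dominated by outrage.

```python
import random
from collections import defaultdict

# Toy feedback loop: the ranker learns only from clicks, so a small
# behavioral bias (a slightly higher click rate on outrage posts)
# compounds into a feed dominated by that content type.

CONTENT_TYPES = ["outrage", "comparison", "neutral", "positive"]

# Hypothetical click probabilities for one user: a modest negativity
# bias, not an extreme preference.
CLICK_PROB = {"outrage": 0.30, "comparison": 0.25, "neutral": 0.20, "positive": 0.20}

def build_feed(scores, size=10):
    """Sample a feed, weighting each content type by its learned score."""
    weights = [scores[t] for t in CONTENT_TYPES]
    return random.choices(CONTENT_TYPES, weights=weights, k=size)

def simulate(sessions=200):
    scores = defaultdict(lambda: 1.0)  # uniform prior over content types
    for _ in range(sessions):
        for item in build_feed(scores):
            if random.random() < CLICK_PROB[item]:
                scores[item] += 1.0  # every click is a training signal
    total = sum(scores[t] for t in CONTENT_TYPES)
    return {t: round(scores[t] / total, 2) for t in CONTENT_TYPES}

if __name__ == "__main__":
    random.seed(42)
    # Outrage ends up with a disproportionate share of the feed,
    # despite only a 10-point edge in click probability.
    print(simulate())
```

The user's slight preference and the ranker's weighting reinforce each other each session, which is the "training both the user and the algorithm" dynamic described above.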
The implications of this cycle extend beyond individual screen time, affecting how users perceive themselves and others. For example, 56% of girls feel they cannot live up to the beauty standards presented on social media, while men are increasingly exposed to misogynistic content, leaving both groups more insecure, anxious, lonely, and disconnected. This erosion of agency means users are not consciously choosing their content but are being subtly guided, often toward content that reinforces existing biases or amplifies negative emotions. Even an attempt to build a neutral social network without algorithmic incentives, the University of Amsterdam chatbot experiment discussed below, showed that participants gravitated toward echo chambers and partisan content, suggesting that human tendencies like negativity bias, the need for group belonging, and cognitive efficiency play a significant role in fueling the algorithm's amplification.
Ultimately, while platforms have an incentive to keep users addicted due to economic pressures, the core issue lies in the interplay between algorithmic design and human psychology. The solution requires a dual approach: social media companies must introduce friction before sharing, offer chronological feeds, and provide algorithmic transparency, while individuals must cultivate emotional mastery, critical thinking, and conscious engagement to counteract the system. The ability to hold opposing ideas--that the situation is challenging yet change is possible--is crucial for reclaiming agency and fostering healthier digital interactions.
Action Items
- Audit social media engagement: Track 5-10 content types (e.g., outrage, comparison, positive) to measure their impact on user session duration.
- Implement "read before share" friction: Add a 5-second minimum hover time or mandatory full article read before sharing 3-5 types of content.
- Create chronological feed option: Offer a default chronological feed setting for users, alongside algorithmic options, to reduce polarization.
- Design algorithmic transparency: Publish 3-5 key factors influencing content recommendations and allow independent audits of their impact.
- Measure feed diversity: For 3-5 users, track the percentage of content from outside their usual bubble over a 2-week period.
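A minimal sketch of the "read before share" friction check from the second action item. The content types, thresholds, field names, and function signature are illustrative assumptions, not any real platform's API:

```python
# Share-friction gate: block the share action until the user has either
# dwelled on the content for 5 seconds or scrolled through the full article.

MIN_HOVER_SECONDS = 5.0
FRICTION_TYPES = {"news_article", "political_post", "health_claim"}  # example 3-5 types

def may_share(content_type, hover_seconds, read_fraction):
    """Allow sharing only after the user has dwelled on the content.

    hover_seconds: how long the content was on screen before the share tap.
    read_fraction: scroll depth through the article, 0.0 to 1.0.
    """
    if content_type not in FRICTION_TYPES:
        return True  # low-risk content shares freely
    dwelled = hover_seconds >= MIN_HOVER_SECONDS
    read_fully = read_fraction >= 1.0
    return dwelled or read_fully  # either signal satisfies the friction rule

# Usage: a share attempt two seconds after the article loads is blocked.
assert may_share("news_article", hover_seconds=2.0, read_fraction=0.1) is False
assert may_share("news_article", hover_seconds=6.0, read_fraction=0.3) is True
assert may_share("meme", hover_seconds=0.5, read_fraction=0.0) is True
```

The same gate could log blocked attempts, which would also feed the engagement audit and feed-diversity measurements in the first and last action items.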
Key Quotes
"The algorithm isn't as smart as we think it is but the deeper I went into my research the more I realized something unsettling it's stronger than me stronger than you stronger than all of us because it knows our weaknesses but here's what I also found even the strongest system has a glitch the algorithm doesn't just know us it depends on us and if we learn how it feeds we can decide whether to starve it or steer it."
Jay Shetty explains that while algorithms may not be inherently intelligent, they are powerful because they exploit human vulnerabilities. He emphasizes that this dependence on user behavior creates a potential "glitch" or point of leverage, suggesting that understanding how algorithms are fed allows individuals to regain control.
"Picture this, it's midnight. Think of a girl named Amelia lies in bed, phone in her hand. She posts a photo, nothing dramatic, just hoping someone notices. The likes trickle in, her friends comment. She taps on another girl's profile: prettier, thinner, more followers. She lingers, she clicks, she scrolls. The algorithm pays attention. The next night, her feed feels different: more flawless faces, more filters, more diets, more lives that look nothing like hers. Curiosity turns into comparison, comparison turns into obsession, and soon every scroll feels like it's whispering the same three words: 'You're not enough.'"
Jay Shetty illustrates how social media algorithms can amplify insecurities by curating feeds that promote unattainable standards. He shows how this process can transform initial curiosity into comparison and ultimately obsession, leading to feelings of inadequacy.
"The algorithm will do anything to keep us glued there is a huge incentive issue for the algorithm because in one study where they chose not to show toxic posts users spent approximately 9% less time daily, experienced fewer ad impressions, and generated fewer ad clicks. The algorithm's goal is not to make us polarized, it's not to make us happy, it's to make us addicted and glued to our screens."
Jay Shetty highlights the profit-driven nature of social media algorithms, explaining that their primary objective is to maximize user engagement, even if it means promoting addictive or toxic content. He points out that reducing engagement, as seen in studies that limited toxic posts, directly impacts platform revenue.
"The danger isn't that we have no choice, it's that we don't notice when our choices are being shaped for us. So let's do a thought experiment: why don't we create a social media platform without these incentives, one that doesn't play these games with us? They already tried that, and what I'm about to share with you shocked me the most."
Jay Shetty introduces a critical perspective on user agency, suggesting that the real danger lies in the subtle manipulation of choices by algorithms rather than an outright lack of choice. He sets up a surprising revelation about attempts to create alternative platforms.
"A new study out of the University of Amsterdam tested this by creating a stripped-down social network, no ads, no recommendation algorithms, no invisible hand pushing content. Researchers released 500 AI chatbots onto the platform, each powered by OpenAI, and gave them distinct political and social identities. Then they let them loose across five separate experiments, amounting to 10,000 interactions. The bots began to behave exactly like us: they followed those who thought like them, they reposted the loudest, most extreme voices, they gravitated into echo chambers, not because an algorithm pushed them there, but because that's where they chose to go."
Jay Shetty presents findings from a University of Amsterdam study that simulated a social network without algorithmic manipulation. He reveals that even AI chatbots, when given social identities, naturally gravitated towards echo chambers and extreme voices, suggesting that human behavior itself, not just algorithms, contributes to polarization.
"Here's the good news: algorithms do not fully decide your fate. They're predictive, not deterministic. They rely on your past clicks, but you can override them by searching, subscribing to diverse sources, and consciously engaging with content outside of your bubble. So I want you to take a look at a new account I started on the 'for you page'..."
Jay Shetty offers a message of hope, asserting that algorithms are not absolute determinants of our experience. He explains that while they use past behavior to predict future engagement, users retain the agency to actively shape their feeds by seeking out diverse content and making conscious choices.
Resources
External Resources
Research & Studies
- UCL and University of Kent study (2024) - Referenced for findings on TikTok showing four times more misogynistic content on the For You Page within five days of casual scrolling.
- Mozilla's YouTube Regrets Project (2020) - Discussed as evidence that users were steered toward extremist or conspiratorial content, with 71% of regretted videos being recommendations.
- Yale research - Cited for findings that posts expressing moral outrage receive more likes and retweets, which encourages more such posts.
- Facebook studies - Mentioned to show that users clicked links confirming their biases more often than opposing ones, with liberals choosing cross-cutting news 21% of the time and conservatives 30%.
- Study on disabling autoplay - Referenced for a 17-minute shorter average session when autoplay was disabled, showing autoplay's measurable role in extending watch time.
- Study on toxic posts - Mentioned for showing that when toxic posts were not shown, users spent approximately 9% less time daily, experienced fewer ad impressions, and generated fewer ad clicks.
- Study on AI chatbots on a social network - Referenced for findings that bots gravitated into echo chambers and followed those who thought like them, not due to algorithms but user choice.
- Study on partisan engagement - Mentioned for findings that the most researchers could manage was a 6% reduction in partisan engagement, and in some cases, hiding user bios sharpened the divide.
Articles & Papers
- "The Social Dilemma" (Documentary) - Mentioned in relation to the impact of social media algorithms on user behavior.
People
- Jay Shetty - Host of the podcast "On Purpose."
- F. Scott Fitzgerald - Quoted regarding the ability to hold two opposed ideas simultaneously.
Organizations & Institutions
- Give Directly - Mentioned as a partner for the "On Purpose" podcast's "Fight Poverty" initiative.
- European Union - Its Digital Services Act is referenced as an example of legislation moving towards requiring large platforms to open their algorithms to scrutiny.
Websites & Online Resources
- givedirectly.org/onpurpose - Provided as the URL for donating to the "Fight Poverty" initiative.
- news.jayshetty.me/subscribe - Provided as the URL to subscribe to Jay Shetty's newsletter.
Other Resources
- Algorithm - Discussed as a system that predicts and amplifies content based on user engagement, often exploiting emotional responses and weaknesses.
- Doom-scrolling - Identified as a behavior that increases cortisol, anxiety, and learned helplessness, reinforcing a sense of doom.
- Chronological feeds - Presented as a solution to combat algorithmic influence, offering a default setting that reduces polarization and misinformation exposure.
- Friction before sharing - Proposed as a method to add pauses or requirements (like reading an article fully) before users can share content, slowing down misinformation.
- Algorithmic transparency and independent audits - Suggested as a solution for companies to publish how their recommendation systems prioritize content and allow external researchers to study impacts.
- Negativity bias - Explained as an evolutionary tendency to notice threats more than opportunities, making negative content more engaging.
- Outrage as social currency - Described as a way to signal loyalty to a group, making outrage a form of identity signaling.
- Cognitive efficiency - Mentioned as a reason negative content is often simpler and more digestible, leading to easier processing by the brain.
- Comparison - Identified as a fundamental human instinct that social media exploits, leading to envy and self-doubt.
- Envy - Described as the emotional fuel exploited by social media algorithms, turning it into an economy.
- Frankenstein - Used as an analogy for systems built by humans that end up reflecting parts of their creators.
- For You Page (FYP) - Discussed as a personalized feed on platforms like TikTok that can be retrained by user engagement.