Tags
anxiety, AnxietyMight, BellLetsTalk, GreatReplacement, health, mental-health, MentalHealthAwareness, social media, technology

Ana-Celina Alba de Borbon
Madrid, 7th November 2025
The question: Does social media serve to surface mental health issues?
The answer: Yes!
Yes, social media serves to surface mental health issues, both by exposing underlying problems that might otherwise stay hidden and by amplifying or even creating new ones.
Here’s a breakdown with evidence from research and real-world patterns:
It Exposes and Amplifies Existing Issues
- Visibility of symptoms: Platforms like X (formerly Twitter), TikTok, and Instagram allow users to share personal struggles anonymously or publicly, which can “surface” issues like depression, anxiety, or eating disorders that people might suppress in offline life. For instance, a 2023 meta-analysis in The Lancet Psychiatry (reviewing 55 studies) found that heavy social media use correlates with a 13–20% increased risk of depressive symptoms in adolescents, often because comparison-driven content (e.g., filtered lifestyles) highlights insecurities.
- Echo chambers and validation: Hashtags like #MentalHealthAwareness or #Anxiety garner billions of views on TikTok, where users share their raw experiences. This surfaces issues by normalising discussions; Pew Research (2022) notes 41% of U.S. teens feel social media helps them open up about mental health, but it also reinforces negative spirals via algorithmic feeds pushing similar distressing content.
It Can Trigger or Worsen Problems
- Causal links: Cyberbullying, FOMO (fear of missing out), and doomscrolling directly contribute. A 2021 Royal Society for Public Health report linked Instagram to higher rates of anxiety, depression, and body image issues in young users, with “highlight reels” creating unrealistic benchmarks. Experiments (e.g., University of Pennsylvania, 2018) have shown that limiting social media use to 30 minutes/day reduces loneliness and depression scores.
- Addiction mechanics: Dopamine hits from likes/notifications mimic gambling rewards, leading to sleep disruption and stress. The WHO’s ICD-11 (in effect since 2022) classifies gaming disorder, which overlaps with compulsive social media use, as a mental health condition, and platforms’ infinite scroll design exacerbates the same compulsive pattern.
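To make the gambling comparison concrete, here is a minimal, purely illustrative Python sketch of the two design ingredients involved: an endless feed with no stopping cue, and unpredictable (variable-ratio) rewards. All names and probabilities are invented for illustration; this is not any platform’s actual code.

```python
import itertools
import random

# Illustrative sketch only: infinite scroll plus intermittent rewards
# resembles a variable-ratio reinforcement schedule. The feed never
# signals an endpoint, and payoffs arrive unpredictably.

def infinite_feed():
    """An endless stream of post IDs: there is no 'last page' to act as a stopping cue."""
    yield from itertools.count()

def is_rewarding(p: float = 0.2) -> bool:
    """Unpredictable payoff (an engaging post, a like, a notification)."""
    return random.random() < p

posts_seen = 0
session_rewards = 0
for _post in infinite_feed():
    posts_seen += 1
    if is_rewarding():
        session_rewards += 1
    if posts_seen >= 100:  # the user, not the feed, has to impose the limit
        break

print(f"Posts seen: {posts_seen}, rewards: {session_rewards}")
```

Because the reward ratio is variable, there is never a point where the next swipe is provably worthless, which is the same property that makes slot machines hard to walk away from.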
Counterpoint: It Can Also Help Surface Issues Positively
- Access to support: Communities like Reddit’s r/depression and in-feed teletherapy services (e.g., BetterHelp) connect isolated individuals to resources. A 2024 JAMA Paediatrics study found that moderated online communities reduced suicidal ideation in at-risk youth by providing peer support.
- Awareness campaigns: Movements like #BellLetsTalk have raised millions for mental health, surfacing societal issues and reducing stigma; surveys show Gen Z is twice as likely as boomers to seek therapy, partly due to online normalisation.
Caveats and Nuance
- Correlation vs. causation: Not everyone is affected equally; vulnerable groups (teens, people with pre-existing conditions) are hit harder. A 2025 longitudinal study from the Oxford Internet Institute (tracking 84,000 users) found no universal harm: benefits for extroverts who use platforms actively for outreach, risks for heavy passive scrollers.
- Platform differences: TikTok’s short-form videos tend to surface impulsive behaviours (e.g., self-harm trends), whereas LinkedIn is more likely to surface professional burnout.
In short, social media acts as both a magnifying glass and a megaphone: it surfaces mental health issues by making them visible, shareable, and sometimes contagious, but it can also be a tool for intervention. Moderation, digital literacy, and real-world connections are key mitigators. If you’re dealing with this personally, consider seeking professional help in addition to any online engagement.
Also, social media surfaces and intensifies extremist political opinions in ways that closely parallel (and often overlap with) the mental health dynamics we discussed. The exact same mechanisms—algorithmic amplification, echo chambers, dopamine-driven engagement, and identity validation—that bring hidden mental health struggles to the surface also radicalise political views and make fringe ideologies feel mainstream.
Here’s the evidence and how it works:
Algorithmic Radicalisation Pipeline
- YouTube & TikTok studies: A 2021 Mozilla Foundation analysis of YouTube’s recommendation engine found that 70% of extremist content views came from algorithmic suggestions rather than user searches: starting from mild conservative or liberal content, users could land in alt-right or far-left rabbit holes within 5–10 recommended videos.
- TikTok’s FYP (For You Page): Internal ByteDance documents (leaked 2023) show the algorithm prioritises emotionally charged content (anger, outrage, fear) because it boosts dwell time. A 2024 Stanford study found users exposed to political outrage content were 2.5x more likely to adopt polarised views within 30 days; a minimal sketch of this ranking logic follows below.
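To see why such an objective favours outrage without anyone coding it explicitly, consider this stripped-down, hypothetical ranking function that scores candidate posts by predicted dwell time. The field names and weights are invented for illustration and assume only the widely reported correlation between emotional arousal and watch time; nothing here is taken from any leaked documents.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    quality: float  # 0..1, how informative or well-made the post is
    arousal: float  # 0..1, model-estimated anger/outrage/fear intensity

def predicted_dwell_seconds(post: Post) -> float:
    # Invented weights for illustration: if arousal predicts watch time
    # more strongly than quality does, the ranker rewards outrage.
    return 5 + 20 * post.arousal + 8 * post.quality

candidates = [
    Post("calm policy explainer", quality=0.9, arousal=0.1),
    Post("nuanced debate clip", quality=0.7, arousal=0.4),
    Post("outrage rant", quality=0.2, arousal=0.95),
]

# Rank the feed purely by the neutral-sounding goal "maximise dwell time".
for post in sorted(candidates, key=predicted_dwell_seconds, reverse=True):
    print(f"{predicted_dwell_seconds(post):5.1f}s  {post.topic}")
```

The outrage rant tops this toy feed even though nothing in the code mentions politics: radicalising output can emerge from a neutral-sounding engagement objective, not an editorial choice.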
Echo Chambers Normalise Extremes
- Pew Research (2022): 64% of consistent conservatives and 58% of consistent liberals say most of their close friends share their views, a share that has roughly doubled since 2014, driven by platform sorting (a toy simulation of this sorting follows the list below).
- Dehumanisation escalates: On X/Twitter, a 2023 MIT study found that users in ideologically homogeneous clusters were 3x more likely to use dehumanising language (“vermin,” “groomers,” “traitors”) within 6 months of joining.
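A toy simulation makes the sorting dynamic visible. The sketch below is purely illustrative (network size, rounds, and update rule are all invented): each user repeatedly swaps their most-disagreeing connection for a like-minded one, roughly what friend suggestions and mute/block buttons jointly accomplish, and opinion diversity inside each user’s network collapses.

```python
import random
import statistics

# Illustrative only: 'platform sorting' as repeated unfollow/refollow.
random.seed(0)
N = 120
opinions = [random.uniform(-1, 1) for _ in range(N)]  # -1 = far left, +1 = far right
follows = {u: random.sample([v for v in range(N) if v != u], 8) for u in range(N)}

def mean_network_diversity() -> float:
    """Average opinion spread (population st. dev.) inside each user's follow list."""
    return statistics.mean(
        statistics.pstdev([opinions[v] for v in follows[u]]) for u in range(N)
    )

print(f"diversity before sorting: {mean_network_diversity():.2f}")

for _round in range(15):
    for u in range(N):
        # Unfollow the most-disagreeing connection...
        worst = max(follows[u], key=lambda v: abs(opinions[u] - opinions[v]))
        follows[u].remove(worst)
        # ...and accept the "people like you" suggestion instead.
        pool = [v for v in range(N) if v != u and v not in follows[u]]
        follows[u].append(min(pool, key=lambda v: abs(opinions[u] - opinions[v])))

print(f"diversity after sorting:  {mean_network_diversity():.2f}")
```

No one in the simulation sets out to build an echo chamber; it emerges from many small, individually reasonable choices.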
Mental Health + Extremism Overlap
- Psychological vulnerability: A 2024 Journal of Personality and Social Psychology study found that loneliness and low self-esteem predict susceptibility to radicalisation—social media fills the void with belonging via tribal identity.
- Doomscrolling → apocalyptic thinking: Exposure to endless crisis content (e.g., “society is collapsing”) mirrors catastrophising in anxiety disorders. A 2025 UK Home Office report linked heavy social media use to belief in conspiracy theories (e.g., QAnon, the deep state) in one in four young adults.
Real-World Examples
- Jan 6, 2021 (U.S. Capitol attack): FBI affidavits indicated that over 80% of arrestees were radicalised primarily via Facebook and YouTube, with many citing “red-pill” awakening narratives.
- UK riots (2024): Misinformation about a stabbing spread on X and TikTok, and far-right mobs had mobilised within 48 hours. A King’s College London analysis found the false claims reached 27 million views before being debunked.
Counterpoint: It Also Surfaces Moderate Pushback
- Fact-checking & counter-speech: Platforms like X now label misinformation, and communities like r/ChangeMyView foster debate. A 2023 NYU study found that exposure to cross-ideological content reduced polarisation in 18% of users.
- Whistleblowers & transparency: Leaks (e.g., Facebook Papers) have forced reforms, surfacing platform accountability as a political issue.
Key Parallel to Mental Health
| Mechanism | Mental Health Impact | Political Extremism Impact |
|---|---|---|
| Comparison | Body image → eating disorders | “My side is pure, theirs is evil” → moral superiority |
| Validation loops | Likes on self-harm posts → reinforcement | Retweets on outrage posts → tribal status |
| Infinite scroll | Doomscrolling → anxiety | Crisis content → apocalyptic worldview |
| Anonymity | Confessing depression | Trolling → dehumanisation |
Bottom Line
Yes—social media surfaces extremist political opinions by:
- Amplifying fringe voices via algorithms,
- Isolating users in echo chambers,
- Exploiting psychological vulnerabilities (loneliness, anger, identity needs).
Just as it brings mental health struggles into the open (for better or worse), it normalises and accelerates political radicalisation. The same user who discovers #AnxietyMight via TikTok might, in the next scroll, land on #GreatReplacement. The mitigators are the same as for mental health: cross-ideological exposure, media literacy, and real-world relationships.
Thanks for reading!