
Martyn Rhisiart Jones
Madrid, Sunday 8th March 2026
The Algorithmic Silence: Why Serious Writing Disappears on Social Media
For much of the early internet era, there was a widespread belief that digital platforms would democratise public discourse. Anyone with a good idea, a well-written argument, or a compelling piece of criticism could reach an audience. Gatekeepers would fall away. Thoughtful debate would flourish.
Two decades later, that promise looks increasingly hollow.
Across the major social platforms, from X (formerly Twitter), Facebook, and LinkedIn to the newer decentralised network Bluesky, serious writing routinely disappears into the algorithmic void. Long-form commentary, essays, and book reviews are quietly buried, downranked, or rendered effectively invisible.
Meanwhile, the most extreme and emotionally charged material spreads with remarkable speed.
This is not a conspiracy. It is the logical consequence of how these platforms are built.
Engagement Is the Real Editor
Every major social media company now relies on engagement-driven algorithms to determine what users see. Posts that generate reactions (likes, shares, comments, outrage) rise in the rankings. Posts that do not are pushed downward.
This system favours immediacy over reflection. Content designed to provoke anger, tribal loyalty, or moral panic travels faster than careful analysis. A meme or slogan can generate thousands of reactions within minutes. A thoughtful essay or book review might require ten minutes of reading before a reader even considers responding.
The algorithm does not reward patience. It rewards impulse.
As a result, serious writing struggles to survive in an environment optimised for emotional velocity.
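To make the dynamic concrete, here is a toy sketch of engagement-weighted ranking. The weights, the decay formula, and the example posts are all illustrative assumptions of mine, not any platform's real scoring:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int
    age_hours: float

def engagement_score(post: Post) -> float:
    """Toy score: weighted reactions, decayed by age.
    Weights are illustrative, not any platform's real values."""
    raw = post.likes + 2 * post.shares + 3 * post.comments
    return raw / (1 + post.age_hours)  # fresh, reactive posts dominate

feed = [
    Post("Long-form book review", likes=40, shares=5, comments=8, age_hours=6.0),
    Post("Outrage-bait slogan", likes=900, shares=400, comments=600, age_hours=1.0),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
```

Under any scoring of this shape, the slogan outranks the review by orders of magnitude before the review's readers have even finished reading it.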
The Quiet Mechanism of Suppression
Contrary to popular rhetoric, most content is not removed outright. Instead, it is downranked.
Downranking is the invisible lever of modern social media governance. A post remains technically visible but is excluded from recommendation systems, trending lists, and algorithmic amplification. Its reach shrinks dramatically without the author ever being notified.
The effect is subtle but powerful. Writers can publish thoughtful commentary and watch it vanish into near-zero visibility while sensational or inflammatory posts dominate the feed.
This is not censorship in the traditional sense. It is something arguably more insidious: algorithmic indifference.
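The arithmetic of downranking is worth spelling out, because it shows how a post can remain "visible" while losing almost all of its audience. The function and every number in it are illustrative assumptions, not a real platform's parameters:

```python
def effective_reach(followers: int, base_rate: float, downrank: float) -> int:
    """Reach after a silent visibility multiplier is applied.
    base_rate: fraction of followers the feed would normally show a post to.
    downrank: multiplier in (0, 1]; 1.0 means no suppression.
    All values here are illustrative assumptions."""
    return int(followers * base_rate * downrank)

# Same author, same audience: the post is never removed, but a 0.05
# multiplier cuts its reach by 95% with no notification to the writer.
normal = effective_reach(10_000, base_rate=0.2, downrank=1.0)      # 2000
suppressed = effective_reach(10_000, base_rate=0.2, downrank=0.05)  # 100
```

Nothing in the author's analytics flags the multiplier; the post simply performs badly, and the suppression is indistinguishable from indifference.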
The Extremism Incentive
Engagement algorithms also produce a darker consequence. Because outrage and fear generate stronger reactions than reasoned discussion, extremist rhetoric often spreads more effectively than moderate argument.
Platforms insist they do not promote extremism. Technically this is true. The algorithms simply promote content that provokes engagement.
But when engagement becomes the dominant metric, extremism becomes a highly efficient strategy.
This dynamic has been visible across multiple political contexts in recent years. Highly polarised narratives, conspiracy theories, and ideological slogans spread rapidly because they trigger emotional responses. Nuanced criticism or thoughtful cultural commentary rarely travels as far.
The architecture of the feed quietly rewards the loudest voices.
Private Platforms, Public Consequences
The deeper problem is institutional.
A handful of private corporations now control the primary channels through which billions of people encounter news, culture, and political debate. Decisions about visibility are made not by editors accountable to readers but by opaque algorithms optimised for advertising revenue.
These companies insist they are neutral technology providers. In practice they function as unaccountable curators of the modern public sphere.
Traditional publishers exercised editorial judgment openly. Readers could see the choices being made. Social media platforms exercise similar influence through machine learning systems that are largely invisible to the public.
When serious writing disappears, there is no editor to question and no appeals process that reliably restores visibility.
The Marginalisation of Intellectual Culture
The casualties of this system are not only individual writers but entire forms of cultural expression.
Book reviews, essays, literary criticism, and analytical commentary require time and attention. They are built on sustained arguments rather than instant reactions.
In the algorithmic economy of attention, such work becomes economically and technologically disadvantaged.
A thoughtful review of a novel or a critique of a new technology might once have circulated widely through newspapers or magazines. On modern social feeds it can easily vanish beneath a cascade of viral posts, memes, and outrage-driven commentary.
The public conversation becomes faster, louder, and shallower.
The Illusion of Digital Democracy
Social media platforms continue to present themselves as engines of democratic expression. In reality they operate as something closer to attention markets, where ideas compete not on merit but on their ability to provoke emotional engagement.
This is not necessarily the result of malicious intent. It is the outcome of business models built around advertising and user attention.
But the consequences are profound.
When algorithms reward outrage more than argument, extremism becomes easier to spread than thoughtful critique. When long-form writing is systematically disadvantaged, intellectual culture struggles to reach audiences. And when the architecture of public conversation is controlled by private companies, democratic accountability becomes difficult to enforce.
The early internet promised a flourishing of ideas.
What we have instead is an information ecosystem where the quiet voice of serious thought often struggles to be heard above the algorithmic roar.
Many thanks for reading.