Echo Chambers Are The Only Stable Social Media
Modelling Social Media Ideological Disequilibrium
When Elon Musk bought Twitter in 2022, his stated justification was to end censorship. Musk claimed that Twitter was unfairly discriminating against views right of center, and that it was working with the government to censor specific stories that were important for the 2020 election (e.g., the Hunter Biden laptop story). The accusation was that Twitter (like many other popular social media sites) had become an ideological echo chamber.
While the left likes to portray this as an exaggeration or a conspiracy theory, it was largely true. Twitter had an overstaffed moderation team that was continually narrowing the bounds of acceptable speech, and when governments intervened (both in the United States and abroad) to limit speech, there was little or no pushback from Twitter’s executives. “Disinformation” was imprecisely defined, and the label could therefore be applied indiscriminately. Over time, the platform followed the usual degradation path of social media, morphing from the freewheeling prosperity of free speech into a corporate governance structure that censored controversial topics.
Musk changed this: he rebranded the platform to X, fired most of the moderation team, and restored many accounts that had been shadow-banned, censored, or banned outright. The general feeling on the rechristened X was that a renaissance of free speech was dawning, and that removing the arbitrary limits imposed by Twitter’s previous owners would improve the platform dramatically. Combined with the new Community Notes feature, the idea was that when disinformation actually was being spread (as in promoting factually untrue statements), there would be a decentralized rather than centralized mechanism for ensuring it was properly identified, albeit not censored.
At least, that’s the story as I know it. I never got into using Twitter (while it was still Twitter), and I still haven’t managed to get much out of X despite some deliberate efforts to curate a useful and interesting feed. My personal aversion is largely due to what seems like an overabundance of right-wing views. I don’t hold any particular bias against Trump, Musk, or cryptocurrency (if you knew me personally you might assume I strongly support all three), but when you can’t go three posts without a meme praising one of them being highly upvoted in the comments, it gets old extremely fast. In the same way that half the posts on Reddit’s popular feed seem to be critical of Trump or Musk, half the posts (or top comments) on X’s feed seem to be in favor of them, which gets annoying when you want to learn about literally anything else.
I. X Swings Right
In late 2022, Musk ran a poll on reinstating Donald Trump on Twitter. It passed by a narrow margin (similar to Trump’s performance in the 2024 popular vote), indicating that the platform was generally representative of the population and was now functioning as the “marketplace of ideas” Musk had set as a goal for the Twitter acquisition.
But things have changed in the past couple of years. In 2024, Musk polled X again on Trump, this time asking “Who will you vote for?” Since there are certainly many liberal-minded people who oppose de-platforming Trump despite not planning to vote for him, you’d expect this poll to be similar to the first, or perhaps slightly less supportive of Trump. The internet in general leans left, so this wouldn’t be an unreasonable conclusion. Instead the results were…
Dramatically in favor of Trump. What’s with the sudden change? Some of it can reasonably be attributed to a general shift in favorability toward Trump, as the recent election revealed, but this was 73%! That’s more than Trump won in the reddest state in the country. It will of course be argued that these polls aren’t comparable. Perhaps there was unique press around the first poll that increased anti-Trump turnout, and you might well be right. Even so, I find it implausible that such a significant swing could be attributed to polling error alone. I think something deeper has been going on with the user base of X since Musk took it over.
One notable difference is the number of votes in the two polls. Far fewer people voted in the second poll than in the first, and it appears that most of those who sat out the second poll held strong opinions against Trump. I wonder where they went?
I’m obviously not the only one to recognize this. While the critique of X has largely come from traditional media and those who are pro-censorship, there have been some unbiased studies of the political sentiment of X users. The result? A clear and significant swing to the right. This isn’t just a swing from the liberal echo chamber that Twitter was into something more moderate, but a strong swing toward more right-leaning views. I think the above poll is a clear indicator that X has become far more right-leaning in the past year or two, and more rigorous analysis by the Pew Research Center has revealed the same thing.
My Motivations For This Analysis
I wholeheartedly agree with Musk’s original stated goal in purchasing Twitter. The world is fundamentally a better place when there is a marketplace of ideas, where good and bad thoughts can compete. The hope is that truth wins popularity while bad ideas are marginalized or abandoned.
I do not think this is possible when seemingly every popular social media site degrades quickly into partisanship of one type or another. Twitter was rightly accused of censorship and of being a left-wing echo chamber prior to rebranding as X, but the attempt to create this free marketplace of ideas has fallen prey to the exact same echo chamber dynamics that motivated the purchase of Twitter in the first place, just in the opposite political direction.
I’m somewhat of an economist by training, and the first step in problem solving as taught in economics is to accurately describe the situation. Only once you understand how things are can you make a suggestion as to how things should be. Otherwise, you’ll be bumbling around in the dark, arriving at counterintuitive conclusions that make things much worse (e.g., minimum wage, rent control, strict zoning). I think top-down censorship of social media by the corporate structures that own them is (in part) a misguided attempt to prevent an echo chamber (and, less admirably, to stay profitable, advertiser-friendly, and thus non-controversial). Censorship and algorithm manipulation are among the counterproductive solutions that I think are born of a poor understanding of the problem.
In the remainder of this essay I’ll build a simple mathematical model of social media ideological equilibrium (just for fun), laying the groundwork for a later essay where I suggest some potential solutions (besides censorship) for ensuring a stable marketplace of ideas, along with some tests to validate my proposed solutions.
II. Self Reinforcing Echo Chambers
WARNING: Math. Reader discretion is advised.
For most of the people reading this, this idea is nothing new. It’s well known that echo chambers are self-reinforcing: people belittle, downvote, report, or simply ignore ideologically opposed statements in an environment where one political view is dominant. It’s not pleasant to be heckled and trolled for disagreeing, so those who are ideologically opposed to the mainstream view in a particular environment tend to pick up and leave.
Social media platforms function like marketplaces, but instead of trading goods or services, users trade ideas and conversations. In any marketplace, stability depends on balance—buyers and sellers must coexist. When the content doesn’t meet user expectations, they take their attention elsewhere.
I will be focusing on the specific quality of social media that’s relevant to the censorship issue: political ideology. For a platform to keep from becoming an echo chamber, it must retain users from across the political spectrum, both left and right. While ideological divides are fuzzy and always changing, there is a general understanding of what makes a view right- or left-wing, and without spending the time to precisely define both, I’ll default to that. If you paid even marginal attention to the 2024 US election, you’ll have enough information to understand this theory.
Unlike early forums and social media, which benefited from small, engaged communities, mass social media platforms face unique challenges as they scale. Beyond a certain size, the average quality of contributions drops, moderators are no longer personally involved in the content they moderate, and content that is infuriating or shocking begins to dominate level-headed conversation. When profit becomes the prime motive (as effectively mandated by law in public companies), content that gets more interaction (rage-bait, simplistic, short-form) proves more valuable to the company’s bottom line.
So the question is: how do we prevent mass social media platforms like Facebook, Twitter, and Reddit from becoming ideological echo chambers?
People have limits to how much ideological opposition they’re willing to tolerate in their social media feeds. When the platform’s overall sentiment begins to lean in one direction, users at the opposite extreme feel alienated and leave. This causes the platform’s ideological “center of gravity” to shift further toward the dominant view. It’s a bit like a ship where passengers on one side jump overboard, causing the ship to tilt—and making it even more uncomfortable for those still on the tilting side.
Without policies designed to counteract this dynamic, platforms naturally drift to extremes. A platform that begins slightly left-leaning will, over time, lose its most right-leaning users. As they leave, the average sentiment becomes more left-leaning, causing moderate-right users to feel out of place and leave as well. This cycle repeats until the platform stabilizes as a leftist echo chamber. The same process occurs in reverse for a rightward drift.
Economists call this kind of process a “positive feedback loop.” Once it begins, it reinforces itself, making the initial shift harder and harder to reverse. A good way to visualize this is a ball on top of a hill, where the ball represents the platform’s average political sentiment. Once it starts rolling in one direction, it gains momentum, making it harder to reverse course without intervention. Unless the ball is very stable, or there are barriers in place to keep it from rolling, it isn’t likely to stay at the top for long.
Economics provides tools to formalize and quantify this process. Here’s how we can model it mathematically. For the first time in my life I have an actual opportunity to use what I learned in my economics degree, but math can be very boring, so feel free to skip this section if you feel you already understand the above concept.
1. Simplification
To keep this analysis focused and accessible, we are modeling a simplified social media platform—similar to early Twitter. This platform has the following characteristics:
No Censorship: There are no content moderators or external controls over what users can post or view.
No Algorithms: Content is presented in a simple chronological, most viewed, or most interacted with feed, ensuring that users see posts from across the platform without algorithmic filtering or prioritization.
Direct Interactions: All users interact with the platform as a whole, rather than within isolated groups or communities (like subreddits on Reddit).
Homogeneous Exposure: Every user has equal exposure to the entire user base, meaning the platform's overall sentiment is representative of the average ideology of its active users.
More complex platforms, such as Reddit, which segments users into specific subreddits, or algorithm-driven platforms like modern X, introduce additional dynamics that are harder to model. For example:
Reddit’s subreddit structure limits interaction to smaller, self-selected communities, reducing the direct impact of the platform-wide average sentiment.
Platforms with algorithms may amplify certain viewpoints or create "bubbles" that influence user behavior independently of the platform’s overall sentiment.
2. Basic Setup
Let the sentiment of our social media platform be the average of its users’ (simplified) ideologies. Let each user have an ideology xi, where xi ∈ [−1, 1]:
xi=−1: Far-left.
xi=1: Far-right.
xi=0: Centrist.
The platform's overall sentiment at a given time t, written as St, is the average of all user ideologies:

St = Σ(xi) / Nt

Where:
Nt: Total number of users at time t.
Σ(xi): The sum of all users' ideological positions.
This shows how St shifts as users with xi≠0 leave the platform or change their behavior.
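To make this concrete, here’s a minimal sketch in Python. The bell-curve ideology distribution is my own illustrative assumption, not data from any real platform; the later simulation snippets re-create this same toy population so each stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Each user's ideology x_i lies in [-1, 1]; as an illustration, draw
# from a bell curve centered on 0 and clip to the allowed range.
N = 100_000
x = np.clip(rng.normal(loc=0.0, scale=0.4, size=N), -1.0, 1.0)

# The platform's sentiment S_t is the mean ideology of its active users.
S = x.mean()
print(f"Initial sentiment S_0 = {S:+.3f}")  # approximately 0 for this draw
```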
3. Tolerance Thresholds
Users can only tolerate a limited amount of disagreement before leaving the platform. Each user has a tolerance threshold, denoted Ti, which is the maximum ideological distance between their views and the platform's average sentiment that they are willing to tolerate.
A user with ideology xi will leave if the difference between their position and the platform sentiment St exceeds their tolerance Ti:

|xi − St| > Ti

This means:
If St moves too far from xi, users with low tolerance Ti leave first.
For example, if St shifts toward 0.5 (more right-leaning), far-left users (xi≈−1) will leave because their tolerance Ti is exceeded.
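As a one-function sketch of this rule (the tolerance value 1.2 is mine, chosen only to mirror the example above):

```python
def will_leave(x_i: float, T_i: float, S_t: float) -> bool:
    """A user departs once the platform sentiment drifts farther from
    their own ideology than their tolerance allows."""
    return abs(x_i - S_t) > T_i

# The example from the text: sentiment shifts right to S_t = 0.5, so a
# far-left user (x_i = -1) with tolerance T_i = 1.2 leaves, because
# |-1 - 0.5| = 1.5 > 1.2.
print(will_leave(x_i=-1.0, T_i=1.2, S_t=0.5))  # True
```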
4. User Departures and Feedback
When users leave, the platform's sentiment St recalculates based on the remaining users. If users on one side of the spectrum (e.g., left-leaning users) are leaving disproportionately, St shifts further toward the other side (e.g., the right).
The new average sentiment St+1 is given by:

St+1 = Σi∈R(xi) / |R|

Here:
R: The set of remaining users after those who couldn’t tolerate the shift have left.
|R|: The number of users remaining.
This feedback loop causes an accelerating shift in St:
When St shifts slightly to the right, more left-leaning users leave, further shifting St to the right.
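To see the feedback in motion, here’s a sketch of one departure-and-recalculation round, re-using the toy population above. The uniform tolerance distribution and the size of the initial shock are my own illustrative assumptions; the model itself doesn’t specify them:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000
x = np.clip(rng.normal(0.0, 0.4, N), -1.0, 1.0)  # ideologies x_i
T = rng.uniform(0.3, 1.0, N)                     # tolerances T_i (assumed)

def departure_round(x, T, S):
    """Drop every user whose tolerance is exceeded by the current
    sentiment, then recompute S from the remaining set R."""
    stay = np.abs(x - S) <= T
    x, T = x[stay], T[stay]
    return x, T, x.mean()

# An illustrative exogenous shock: 30% of left-of-center users quit
# at once (say, in protest of a change in ownership).
gone = (x < 0) & (rng.random(N) < 0.30)
x, T = x[~gone], T[~gone]
S = x.mean()

x, T, S_new = departure_round(x, T, S)
print(f"S after shock: {S:+.3f}, after one departure round: {S_new:+.3f}")
```

Because users far to the left of the new sentiment are the most likely to exceed their tolerance, the recomputed sentiment typically drifts even further right, which is exactly the loop described above.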
5. Positive Feedback and Exponential Drift
The process creates a positive feedback loop, where small shifts in St lead to even larger shifts over time. Mathematically, the rate of change in sentiment can be expressed as:

d(St)/dt = α · St

Here:
d(St)/dt: The rate of change in St over time.
α: A constant representing the strength of the feedback loop.
The solution to this equation shows exponential drift:

St = S0 · exp(α · t)

Where:
S0: The starting sentiment.
exp(α · t): An exponential function showing how the drift accelerates over time.
This equation means that even small initial biases (S0≠0) grow rapidly unless counteracted.
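For completeness, here’s the intermediate step, which is just the standard separation-of-variables solution of the equation above (nothing model-specific is assumed):

```latex
\frac{dS_t}{dt} = \alpha S_t
\;\Longrightarrow\;
\int \frac{dS_t}{S_t} = \int \alpha \, dt
\;\Longrightarrow\;
\ln\lvert S_t \rvert = \alpha t + C
\;\Longrightarrow\;
S_t = S_0 \, e^{\alpha t}
```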
6. Stable Extremes
Over time, the platform's sentiment will converge to one of two extremes:

St → 1 (fully right-leaning) or St → −1 (fully left-leaning)

This occurs because:
Users on the "losing" side leave the platform entirely.
The platform stabilizes when only users aligned with the dominant sentiment remain.
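To watch the cascade play out, we can iterate the departure round until no one else leaves. This is still the same toy model with assumed parameters; whether it halts at an interior point or runs toward an extreme depends on how tight the tolerance distribution is, with tighter tolerances producing a stronger run-away:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000
x = np.clip(rng.normal(0.0, 0.4, N), -1.0, 1.0)
T = rng.uniform(0.2, 0.6, N)   # tighter tolerances than before (assumed)

# Same illustrative shock: a chunk of left-leaning users quits at once.
gone = (x < 0) & (rng.random(N) < 0.30)
x, T = x[~gone], T[~gone]
S = x.mean()

for round_num in range(1, 101):
    stay = np.abs(x - S) <= T
    if stay.all():
        break                   # no departures this round: equilibrium
    x, T = x[stay], T[stay]
    if x.size == 0:
        break                   # degenerate case: everyone has left
    S = x.mean()
    print(f"round {round_num:3d}: S = {S:+.3f}, users remaining = {x.size}")
```

Each round the survivors skew further from center, so the “losing” side keeps shrinking until only users compatible with the dominant sentiment remain.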
7. Bringing It All Together
I claim that social media platforms are inherently ideologically unstable when trying to maintain a marketplace of ideas. The smallest bias in the platform's sentiment (St) leads to a cascade of departures among users who oppose the dominant ideology. This feedback loop pushes the platform to one extreme, where it becomes an ideological echo chamber.
To prevent this natural drift, platforms would need to take active measures to either manually deal with over-representation of a single ideology, or create negative feedback loop mechanisms more powerful than the positive feedback loop described above. I will explore some of these mechanisms and their track records in a later essay.
There are a few caveats to the above equations.
Nonlinear Dynamics:
Exponential growth probably oversimplifies the dynamics. In reality, the rate of user departure may not remain strictly proportional to St, especially as the platform approaches an extreme where fewer users remain to leave.
Logistic growth (with a carrying capacity) or piecewise-linear models likely capture the dynamics better, since user departure slows as St nears an extreme. Even so, exponential growth is a good approximation of the accelerating early portion of a logistic curve.
To model the growth using logistic growth, we can replace the exponential equation to reflect a system where the growth rate slows as the platform approaches ideological saturation (e.g., St→1 or St→−1). Logistic growth is commonly used to represent processes that start exponentially but stabilize over time due to limiting factors.
The logistic growth model (to replace the exponential growth model in part 5) can be written as:

St = S∞ / (1 + exp(−α(t − t0)))

Where:
St: The sentiment at time t.
S∞: The maximum sentiment the platform can reach (e.g., 1 for fully right-leaning or −1 for fully left-leaning).
α: The growth rate, determining how quickly the platform shifts toward the extreme.
t0: The inflection point, the time at which the sentiment is changing most rapidly (where St = S∞/2).
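A quick numeric sketch of that curve (the parameter values S∞ = 1, α = 0.8, and t0 = 6 are arbitrary illustrations, not estimates):

```python
import numpy as np

def logistic_sentiment(t, S_inf=1.0, alpha=0.8, t0=6.0):
    """Logistic drift toward S_inf: exponential-looking at first,
    then flattening as the platform saturates ideologically."""
    return S_inf / (1.0 + np.exp(-alpha * (t - t0)))

# Sentiment starts near 0, passes S_inf / 2 at the inflection point
# t0, and levels off near the extreme.
for t in range(0, 13, 2):
    print(f"t = {t:2d}: S_t = {logistic_sentiment(t):+.3f}")
```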
III. So What Happened to Twitter?
I am not of the opinion that Musk intended to turn Twitter into a conservative echo chamber (although I don’t think he minds now that it’s become one), but rather to approach a more centrist, perhaps center-right mean where both left- and right-leaning views are represented, respectful disagreement is common, and the marketplace of ideas functions to reveal the best and most desirable paths for the United States, the West, and the world. The problem with the swing from a left-leaning Twitter to a centrist X, as I see it, was its momentum. Perhaps in 2022, soon after he purchased Twitter, there was a centrist, generally representative user body, as suggested by the poll that slightly favored reinstating Trump (going by the 2024 election, this is approximately representative of average sentiment), but the missing piece was the ideological momentum.
The rapid shift in average sentiment acted as a sort of shock therapy, driving out many leftist Twitter users (and attracting many right-wing non-users), which carried the centrist shift beyond a representative user base to a right-leaning one. Sentiment overshot to the right, and then the self-reinforcing forces I described above came into play, now in favor of right-wing views. In our ball analogy, this is like throwing the ball up the hill and having it roll down the other side.
The end result is clear: X has become a right-wing echo chamber, not the centrist marketplace of ideas that Musk originally claimed to desire. Along with this shift in the platform’s sentiment, Musk has personally shifted his views in favor of Trump. While that shift has been underway for much longer than the past few years, it’s not at all surprising that the views of the “people,” as Musk sees them, helped to reinforce his own views (and therefore, again, the platform’s).
So what does this all mean?
Essentially, I’m putting forward an (unoriginal) theory that the marketplace of ideas promoted by free-speech activists does not, and perhaps cannot, exist without some intentional effort to keep it that way. Absent a stabilizing force, any given social media platform will eventually drift toward one political extreme, alienating the other, and ending up as an undesirable echo chamber.
This issue is important to me because I actually quite like interacting with people who hold competing views. I think Substack is currently a uniquely successful place to post and comment, as you’re more likely than anywhere else to get intelligent, good-faith responses. When Scott Alexander and Walt Bismarck can have a spontaneous discussion and conclude they actually agree on a lot of things, I think that’s the marketplace of ideas functioning as intended. As an experiment, I posted a steelmanned note arguing that Substack should institute censorship, and it quickly got a lot (for Substack) of both positive and negative interactions. Absent a superior alternative to censorship, I fear this platform will go the route of Twitter, Reddit, and the internet as a whole.
All hope is not lost. While censorship is the obvious (and lazy) solution, that doesn’t mean there can’t be less intrusive methods of stabilization that prevent ideological concentration while still giving all views (including those I would say are obviously wrong) a space to compete. I’m trying to keep these essays shorter (so I can write them in a single sitting), and I will explore some of these non-intrusive stabilizing forces later on. If that interests you, I encourage you to subscribe and stay tuned.