A Bias Stronger Than Intent: X's Conservative Algorithm Bias Predates Elon Musk's Takeover

The *Nature* Study: Protocol and Results
The study, published in February 2026 in *Nature*, used a rigorous experimental methodology. The researchers recruited 1,256 X users in the United States and randomly assigned them to two groups. The first group used the algorithmic "For You" feed for four weeks; the second used a chronological feed, with no algorithmic intervention. Participants answered questionnaires about their political opinions before and after the experiment.
The results show a statistically significant shift toward more conservative positions in the group exposed to the algorithmic feed. Political orientation was measured on a 0-to-10 scale, and participants in that group shifted an average of 0.4 points to the right. The effect persisted four weeks after the experiment ended, suggesting that the opinion change is not merely temporary.
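The logic of such a randomized pre/post design can be made concrete with a short sketch. The snippet below is not the authors' analysis code; it is a minimal illustration, with entirely made-up numbers, of how a between-group shift (here an assumed 0.4-point difference on the 0-to-10 scale) could be estimated by comparing each participant's pre/post change across the two arms.

```python
# Minimal sketch (not the study's actual analysis): estimating the
# between-group opinion shift from a randomized pre/post design.
# All values below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 628  # hypothetical participants per arm (1,256 total)

# Simulated opinion scores on a 0-10 scale (higher = more conservative).
pre_algo = rng.normal(5.0, 1.5, n)
post_algo = pre_algo + rng.normal(0.4, 1.0, n)      # assumed +0.4 shift
pre_chrono = rng.normal(5.0, 1.5, n)
post_chrono = pre_chrono + rng.normal(0.0, 1.0, n)  # no systematic shift

# Per-participant change, then compare the two arms (difference in differences).
delta_algo = post_algo - pre_algo
delta_chrono = post_chrono - pre_chrono
effect = delta_algo.mean() - delta_chrono.mean()
t_stat, p_value = stats.ttest_ind(delta_algo, delta_chrono, equal_var=False)

print(f"estimated shift: {effect:+.2f} points, p = {p_value:.3g}")
```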
---
A Bias Documented Before Musk
What distinguishes this study from previous work is its historical perspective. In 2021, Twitter itself published an internal study acknowledging that its algorithm amplified right-leaning political content more than left-leaning content. That study was released as part of a transparency effort, but its conclusions did not lead to substantial modifications of the algorithm.
The *Nature* study confirms that the bias was still present in 2025, more than two years after the platform's acquisition by Elon Musk. This suggests that the bias is not the product of an editorial decision by Musk but is embedded in the architecture of the recommendation system itself.
---
The Mechanisms of Bias
How does the algorithm produce this bias? The researchers identify several mechanisms. First, the algorithm optimizes for engagement, that is, for content that generates reactions (likes, retweets, comments). Politically divisive content, and in particular conservative content on X, generates more engagement on average than progressive content.
Second, the algorithm amplifies content that is already popular, creating a snowball effect: if conservative accounts have more followers and generate more interactions, their content is recommended more often, which lets it reach even more users and compounds the advantage, as the sketch below illustrates.
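Neither mechanism requires an explicit political rule. The toy simulation below is purely illustrative (the scoring scheme, rates, and numbers are assumptions, not X's actual ranking code): it shows how allocating exposure in proportion to accumulated engagement, when one post has a slightly higher engagement rate, compounds an initial advantage round after round.

```python
# Illustrative toy model (not X's ranking system): exposure is allocated in
# proportion to accumulated engagement, and engagement grows in proportion to
# exposure times a per-post engagement rate, compounding any initial edge.
posts = [
    # Hypothetical engagement rates: divisive content assumed to react more.
    {"label": "divisive", "rate": 0.08, "engagement": 10.0},
    {"label": "neutral",  "rate": 0.05, "engagement": 10.0},
]

IMPRESSIONS_PER_ROUND = 1_000

for _ in range(10):
    total = sum(p["engagement"] for p in posts)
    for p in posts:
        # The snowball: exposure share proportional to past engagement.
        share = p["engagement"] / total
        impressions = share * IMPRESSIONS_PER_ROUND
        # New engagement proportional to exposure times the post's rate.
        p["engagement"] += impressions * p["rate"]

for p in posts:
    print(f'{p["label"]}: final engagement score = {p["engagement"]:.0f}')
```

Even with a modest gap in engagement rates, the higher-engagement post captures a growing share of impressions each round, which is the feedback loop the researchers describe.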
Third, the study shows that 76% of users remain on the algorithmic feed, even when a chronological alternative is available. This suggests that the mere existence of an alternative is not enough to change behavior.
---
Implications for Regulation
These results feed the debate over platforms' responsibility in shaping public opinion. The European Union, with the Digital Services Act (DSA), imposes transparency obligations on major platforms. Article 38 of the DSA requires very large online platforms to offer, for each recommender system, at least one option that is not based on profiling, such as a chronological feed. X is subject to this obligation.
In the United States, the question remains largely open. No federal legislation specifically governs recommendation algorithms. Several bills have been introduced in Congress, including the Filter Bubble Transparency Act, which would require large platforms to notify users that their feed is driven by opaque, profile-based algorithms and to offer an alternative that does not rely on personal data. None has been enacted to date.
If an algorithm measurably and persistently changes the political opinions of its users, what responsibility falls on the platform that deploys it? That is the central question this study raises.


