A post on Twitter’s blog reveals that Twitter’s algorithm promotes right-leaning content more often than left-leaning content, though the reasons remain unclear. The findings draw on an internal study of Twitter’s algorithmic amplification of political content.
During the study, Twitter looked at millions of tweets posted between April 1st and August 15th, 2020. These tweets were from news outlets and elected officials in Canada, France, Germany, Japan, Spain, the UK, and the US. In all countries studied except Germany, Twitter found that right-leaning accounts “receive more algorithmic amplification than the political left.” It also discovered that right-leaning content from news outlets benefits from the same bias.
Twitter says that it doesn’t know why the data suggests its algorithm favors right-leaning content, noting that it’s “a significantly more difficult question to answer as it is a product of the interactions between people and the platform.” However, the problem may not lie with Twitter’s algorithm specifically. Steve Rathje, a Ph.D. candidate who studies social media, has published research explaining how divisive content about political outgroups is more likely to go viral.
The Verge reached out to Rathje to get his thoughts about Twitter’s findings. “In our study, we also were interested in what kind of content is amplified on social media and found a consistent trend: negative posts about political outgroups tend to receive much more engagement on Facebook and Twitter,” Rathje stated. “In other words, if a Democrat is negative about a Republican (or vice versa), this kind of content will usually receive more engagement.”
If we take Rathje’s research into account, right-leaning posts on Twitter may simply spark more outrage, resulting in their amplification. Perhaps the issue is that Twitter’s algorithm promotes toxic tweets rather than any specific political bias. As noted earlier, Germany was the only country studied that didn’t show the right-leaning bias. That could be related to Germany’s agreement with Facebook, Twitter, and Google to remove hate speech within 24 hours. Some users even set their country to Germany on Twitter to prevent Nazi imagery from appearing on the platform.
Twitter has been trying to change the way we tweet for a while now. In 2020, Twitter began testing a feature that warns users when they’re about to post a rude reply, and just this year, Twitter started piloting a message that appears when it thinks you’re getting into a heated Twitter fight. These are signs of how much Twitter already knows about problems with bullying and hateful posts on the platform.
Frances Haugen, the whistleblower who leaked a number of internal documents from Facebook, claims that Facebook’s algorithm favors hate speech and divisive content. Twitter could easily be in the same position but is openly sharing some of its internal findings before a leak can happen.
Rathje pointed out another study that found moral outrage amplified viral posts from both liberal and conservative viewpoints but was more successful coming from conservatives. He says that when it comes to features like the algorithmic promotion that lead to social media virality, “further research should be done to examine whether these features help explain the amplification of right-wing content on Twitter.” If the platform digs into the problem further and opens up access to other researchers, it might get a better handle on the divisive content at the heart of this issue.
This article was originally posted on theverge.com.