Twitter said on Thursday that it would maintain some of the changes it had made to slow the spread of election misinformation, saying the measures were working as intended.
Before Election Day, Twitter, Facebook and other social networks had announced a cascade of measures billed as protecting the integrity of the voting process.
For Twitter, those included more prominent warning labels on misleading or disputed claims and limiting how such claims can be shared.
Twitter said on Thursday that between Oct. 27 and Nov. 11, it had labeled about 300,000 tweets as containing "disputed and potentially misleading" information about the election. That represented 0.2% of all tweets related to the U.S. election in that time frame. However, the company declined to say how that compared with the volume of tweets labeled before Oct. 27.
Of those 300,000 tweets, Twitter hid almost 500 behind warnings that users had to click past to read. To reply to those tweets or share them, users had to add their own comments, a requirement intended to give people pause. Twitter also stopped its algorithms from recommending those tweets. In all, 74% of the people who viewed labeled tweets saw them after the labels had been applied.
"These enforcement actions remain part of our continued strategy to add context and limit the spread of misleading information about election processes around the world on Twitter," Twitter officials Vijaya Gadde and Kayvon Beykpour wrote in a blog post on Thursday.
Perhaps the most noticeable impact was on President Trump's account. Twitter hid more than a dozen of his tweets and retweets behind warnings between Election Day and Nov. 7, when major media outlets called the election for former Vice President Joe Biden. The platform has stopped using the more aggressive labels since then but has continued to put notices on many of Trump's tweets in which he makes unsupported claims of voter fraud.
Still, false claims and conspiracy theories continue to circulate online.
That has left experts who track online misinformation questioning how effective warning labels are, noting that social media companies do not provide much data to quantify their impact.
On Thursday, Twitter gave some insight into that question. It said it had seen a 29% reduction in "quote tweeting" of labeled tweets, in which users share a tweet with their own commentary attached. The company attributed the drop to a prompt warning users who tried to share such tweets that they might be spreading misleading information.
That kind of extra step before sharing is what social networks call "friction." Adding friction is a significant change for Twitter, which has long prioritized the rapid flow of information and making it easier for users to share.
One change that Twitter introduced as temporary, but now says will stay in place, is a screen that prompts people to quote tweet, rather than simply retweet, a post.
The prompt reduced retweeting and increased quote tweeting, but overall it led to a 20% reduction in the sharing of tweets, the company said.
"This change introduced some friction, and gave people an extra moment to consider why and what they were adding to the conversation," the company officials wrote in their blog post. "In short, this change slowed the spread of misleading information by virtue of an overall reduction in the amount of sharing on the service."
Some changes Twitter put in place for the election are being rolled back, however. For example, the company will resume recommending tweets from people whom users do not already follow.
"While we had initially hoped that this would help reduce the potential for misleading information to spread on our service, we did not observe a statistically significant difference in misinformation prevalence as a result of this change," the officials wrote.