Twitter now flags misleading coronavirus tweets with warning labels

Twitter is playing catch up in building a strategy to combat coronavirus misinformation. | Marcelo Endelli/Getty Images

Following in the footsteps of Facebook, Twitter is flagging coronavirus hoaxes and misinformation for users.

Amid a surge of misinformation triggered by the Covid-19 pandemic, Twitter has announced yet another update to its policies regarding conspiracy theories and fake news. Now, Covid-19 tweets that are considered misleading will get labels designed to offer more context about the information therein. Tweets making potentially harmful claims disputed by experts will now come with a more direct warning message.

Twitter explained in a blog post that its new strategy for dealing with problematic tweets takes a three-pronged approach. The specific action taken will depend on whether the company deems the claims in a tweet “misleading,” “disputed,” or “unverified.” Based on how severe those claims are, the company will apply either a label or a warning. Here’s a handy graphic showing which types of content get a label versus a warning — or no action at all:


Removal is obviously the most serious consequence outlined here, and a warning is the next most serious. Content with disputed information that Twitter thinks carries a severe propensity for harm will get one of these warning messages.

In this case, the entire tweet might be blocked off, with a note that the content shared “conflicts with guidance from public health experts.” Twitter said these changes will also be applied to tweets sent before today.

Twitter will block out some misleading information by noting that it goes against the guidance of public health authorities.

For tweets deemed moderately harmful, Twitter will insert a link underneath the tweet directing users to reliable information. This strategy mimics the company’s approach to deepfakes, which was announced earlier this year. But it’s not quite clear how the note — “Get the facts about COVID-19” — will be interpreted since it doesn’t actually contradict anything said in a false tweet. Users must also click through the label to get redirected to a page with related fact-checked content. For example, here’s what you’ll see if you click on a label that’s been attached to a tweet about 5G and the coronavirus.

Twitter is flagging tweets about the novel coronavirus with labels that tell users to “Get the facts about COVID-19.” | Twitter

Speaking of tweets about 5G and the coronavirus, Recode did a quick search and identified several tweets pushing the conspiracy theory that the cellular technology and the pandemic are related. Many of those tweets — though not all — included the new label. But it appears that simply mentioning 5G and the coronavirus together might automatically generate the label. For now, it’s not hard to find tweets discussing 5G and the pandemic that have nothing to do with the conspiracy but also received the new label.

Twitter’s new approach follows a similar one from Facebook involving its own third-party fact-checkers, which was announced in March. Currently, if you see a false post on Facebook that’s been flagged, you’ll be warned and directed to content from a professional fact-checking organization. Last month, Facebook announced that it would retroactively provide links to verified Covid-19 information to users who had previously “Liked,” reacted, or commented on “harmful misinformation” related to the pandemic.

Still, Twitter’s update feels late. As we’ve previously reported, misinformation about the novel coronavirus pandemic has become increasingly prevalent since January, and social media platforms are fertile ground for all sorts of wild conspiracy theories, from the idea that 5G cellular technology is somehow behind the novel coronavirus to the notion that the virus was stolen from Canada. Fake news isn’t the only content moderation challenge weighing down these platforms. Facebook, Instagram, and Twitter are also battling an influx of posts and ads for products that push fake antibody tests and cures, while working with insufficient staff to chase other harmful and offensive content off their sites.

But this surge in a particular category of misinformation — and the threat it presents to public health in the middle of a pandemic — has forced the social media giants to reconsider their approach to content moderation. Twitter, specifically, has already expanded its definition of “harm” on the platform to include tweets that defy recommendations from public health authorities. The company has also taken down some, but not all, of the tweets now deemed “harmful.” Rather than just labeling certain types of fake news, as previous policies had allowed, Facebook has been aggressively taking down a whole host of misleading posts and entire quarantine protest groups.

Incorrect health information about the coronavirus threatens lives, which might explain why companies like Twitter and Facebook are now acting more aggressively. Still, questions about how fake news and conspiracy theories will continue to threaten the public well-being remain unresolved. Incremental policy updates are good, but they are no panacea.
