Twitter on Thursday announced a new "crisis misinformation policy," which it said is a bid to ensure "viral misinformation" is not amplified on the social media platform during conflicts and other crises.
The new policy "will help to slow the spread by us of the most visible, misleading content, particularly that which could lead to severe harms," Twitter said in a blog post. The policy will first be applied to content concerning Russia's war against Ukraine before being expanded to include other crises.
"In times of crisis, misleading information can undermine public trust and cause further harm to already vulnerable communities," wrote Yoel Roth, Twitter's head of safety and security.
"During moments of crisis, establishing whether something is true or false can be exceptionally challenging. To determine whether claims are misleading, we require verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more,” he said.
As part of the new initiative, Twitter will put warning labels on tweets containing misleading information, with an emphasis on high-profile accounts, including state-affiliated media outlets, verified accounts and government accounts.
Twitter will also ensure it does not amplify or recommend posts that contain misleading claims. That includes content on the home timeline as well as in the Search and Explore tabs.