Facebook is improving its response to the spread of coronavirus misinformation on its platform. The company has announced that it will begin displaying anti-misinformation messages in the News Feeds of people who have liked, reacted to, or commented on content Facebook has since removed due to factual inaccuracies.
In a blog post, Guy Rosen, Facebook's VP of Integrity, outlined how the company is combatting misinformation about Covid-19:
“Stopping the spread of misinformation and harmful content about COVID-19 on our apps is also critically important. That’s why we work with over 60 fact-checking organizations that review and rate content in more than 50 languages around the world. In the past month, we’ve continued to grow our program to add more partners and languages.”
He goes on to add that “Once a piece of content is rated false by fact-checkers, we reduce its distribution and show warning labels with more context. Based on one fact-check, we’re able to kick off similarity detection methods that identify duplicates of debunked stories.”
But Facebook clearly doesn't think that's enough to stop people from believing things they read on its platform, so Rosen also explained that messages (such as the ones shown above and below) will begin appearing in the News Feeds of people who've interacted with misinformation.
The tone is not judgemental; the messages are a gentle, timely way to raise awareness, and may direct users to the World Health Organisation website or other fact-checking bodies.
Rosen concluded: "We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook."
Facebook said the messages would start to appear in "the coming weeks".