As the coronavirus spreads further into Europe and the Americas, another infection is spreading: an epidemic of misinformation online.
Dubbed the “infodemic” by health officials, the flood of posts includes conspiracy theories about the origins of the novel coronavirus, dangerous advice about spurious treatments, and unreliable reports of vaccines.
The World Health Organization is now aggressively trying to stop the spread of misleading and false information around COVID-19 by forging an alliance with big tech companies such as Facebook (FB) and Google (GOOGL).
Andy Pattison, who’s leading the WHO’s battle against the online menace, told CNN Business that after witnessing the rapid spread of misinformation surrounding a measles outbreak in 2018, the organization started pushing social media and tech companies to give it a direct line for flagging posts that could harm people’s health.
Aside from bad information about the coronavirus and how it spreads, the WHO is especially concerned about so-called “cures” or “treatments,” some of which involve ingesting dangerous products. And as the virus has spread well beyond China, so has the misinformation.
“Like a disease outbreak, it changes and fluctuates over the weeks,” Pattison said.
At first, the misinformation was about where the disease came from and who was carrying it. Then the focus shifted to treatment.
“The wave that we’re in now is quite a new one — the fear factor,” he said, referring to misinformation about face masks and hand sanitizer. “We’re trying to quell these rumors about panic shopping, mass buying, and try and get to the bottom of the actual truth, which is going to help people.”
Pattison traveled to Silicon Valley last month and hosted a meeting of more than a dozen tech companies, including Facebook, Google, Twitter (TWTR), and travel sites such as Airbnb and Expedia. He wanted to identify how best to tamp down on bad information and make sure facts from sources such as the WHO and the US Centers for Disease Control and Prevention are what people see first.
Now he and his team are in daily contact with the social media platforms, flagging posts that need to be taken down, fast.
“We’re talking minutes sometimes if we’re all online at the same time,” Pattison said.
But Pattison’s team is small — the WHO’s communications team has 30 people, and just three are dedicated to combing social media and flagging problematic posts. They use artificial intelligence tools to help keep watch on key phrases and accounts known for conspiracy theories.
Facebook, which has been slammed in the past for allowing politicians to lie on its platform, announced Tuesday that it is giving organizations including the WHO “as many free ads as they need” to get out accurate information.
In a post announcing the move, Facebook CEO Mark Zuckerberg said the company is also taking extra measures to remove fake claims and conspiracy theories, and blocking people from running ads that may try to “exploit the situation.”
“It’s important that everyone has a place to share their experiences and talk about the outbreak, but as our community standards make clear, it’s not okay to share something that puts people in danger,” Zuckerberg wrote.
Pattison said some social media firms, which he declined to name, aren’t doing as much as Facebook or Google because they haven’t suffered as many “reputational knocks in the past.”
“This is a double-edged sword, because it’s also not only a reputation issue, but it’s also saving people’s lives,” he said. “It’s not about politics and people getting into office. There’s this extra element of humanity which is coming in.”
Facebook, Google, Twitter, TikTok and others have made efforts to promote links to reputable sources such as the WHO or government health agencies when users search for terms related to “coronavirus.”
But they’re not catching all the misleading posts, and it’s still easy to find bad information online, whether it’s a claim that the virus is connected to the spread of 5G, a claim that it’s a bioweapon, or a post sowing doubts about any forthcoming vaccines.
Pattison said he understands the platforms’ efforts to balance freedom of speech with protecting users from dangerous information — but he said it’s a fine line.
“A lot of them do have a policy which says that as soon as content is dangerous to one of their users or dangerous to a human, they will be happy to take it down. So it’s basically finding what the tech company is willing to do and what we need to get done, and finding the sweet spot in the middle,” he added.