The World Health Organization has described the flood of information surrounding the novel coronavirus as an “infodemic,” and disinformation and “fake news” have remained at the forefront of this century’s worst pandemic, with social media and tech platforms playing a central role. COVID-19 has forced many companies to shift to remote work, and tech platforms and social media companies are no exception. But the human moderators who assess whether content violates a platform’s terms of service are largely unable to do that work from home, so companies including Google’s YouTube, Facebook, and Twitter announced that they would automate much of their content moderation. Just days after Facebook’s announcement, however, reports emerged that posts from news sites such as BuzzFeed, USA Today, the Seattle Times, and the Dallas Morning News were being blocked on the social network.
Journalists rely on these platforms to report and disseminate their work, and regardless of the reason journalistic content is removed, we need to know when news content is affected. We should also be able to track and analyze this data: we know that filtering systems purportedly aimed at removing extremist or copyright-infringing content can sweep up legitimate journalistic material, which amounts to a press freedom violation.
That is why the Committee to Protect Journalists joined 75 organizations and researchers in calling on social media and content-sharing platforms to preserve information about content removed or blocked by automated systems. Without this data, it will be virtually impossible for journalists to report on the “infodemic” aspect of COVID-19, much less to understand how news content is affected by automated content moderation.
Read the open letter here.