Authorities decry the proliferation of misinformation and propaganda on the internet, and technology companies are wrestling with various measures to combat fake news. But addressing the problem without infringing on the right to free expression and the free flow of information is extremely thorny.
A look at abuse of media laws by authoritarians around the world is a clear warning against government regulation of information. At the same time, relying on internet platforms to filter or verify information could result in the privatization of censorship. Any self-regulation by tech companies must be transparent, subject to independent oversight, and include some sort of path to remedy for those affected.
The debate over fake news comes as some observers say misinformation and propaganda have damaged democracy in places including the U.S. and U.K. by interfering in the 2016 presidential election and the British referendum on whether to exit the European Union.
It also comes as President Donald Trump has used the term "fake news" as an epithet for journalists and media outlets he dislikes or with which he disagrees.
Pillorying journalists and media outlets with such a label can create a rationale for clamping down on a free and independent press and create a climate for self-censorship. Many authoritarian countries criminalize the publication of what they commonly call false news, censoring content, shuttering news outlets, and jailing journalists on the charge, which is often levied against information critical of or unwanted by those in power. At the end of 2016, at least nine journalists were in jail worldwide for violating statutes on false news, according to CPJ's most recent annual prison census.
China, consistently one of the worst jailers of journalists worldwide, has led the way in enacting vaguely worded restrictions that encourage journalists to adhere to the official narrative or risk having their work branded false news and being charged with a crime.
"The real purpose of the Chinese government's so-called 'controlling the spread of rumors' is to control the spread of truth," said Liu Hu, a journalist who spent more than a year in jail in China after being accused of fabricating and spreading rumors. "Only through making people live in an environment that they do not know the truth can the government maintain its rule."
Such restrictions show how any official steps by Western governments to counter misinformation would set a dangerous template for countries without democratic safeguards.
Nonetheless, in the U.S., the 2017 National Defense Authorization Act included a bipartisan provision to create a "whole-of-government" approach to countering propaganda and disinformation. It enhances the role of the State Department's Global Engagement Center, which thus far has focused on countering extremist content such as that generated by the militant group Islamic State, to include the advancement of "fact-based narratives that support U.S. allies and interests," according to a summary from the bill's co-sponsor, Senator Rob Portman.
The effort is designed to counter Russia, which has a global media operation in addition to a sophisticated propaganda machine. The pro-Kremlin Internet Research Agency, for example, employs hundreds of people to engage on social media platforms and promote a pro-Russian perspective, and appears to operate a network of pro-Kremlin websites including the Federal News Agency, according to The New York Times, which cited an unspecified Russian media report that the agency's budget is at least 20 million rubles (US$337,000) a month, surpassing that of many news outlets.
In Germany, which has federal elections coming up, politicians have called for the creation of legal obligations for "market-dominating" social media platforms to remove fake news, and have proposed fining them up to 500,000 euros (US$532,000) per post for failing to do so promptly, according to news reports.
The president of the European Parliament has similarly called for EU-wide laws and threatened financial sanctions. "Fake news should become expensive for companies like Facebook if they don't stop its spread," the president of the European Parliament, Martin Schulz, told the newspaper Waz.
About 44 percent of Americans get their news on Facebook, according to the Pew Research Center. Because social media platforms play such a central role in disseminating and circulating news, their algorithms' choices about which content to surface and amplify directly affect content providers' bottom lines, creating an incentive system in which more clicks mean more money and greater visibility. Meanwhile, bots are used to game this system by automating and amplifying posts, even as social media platforms try to figure out ways to combat them.
"Regulatory and legal responses are not the right response," said Rebecca MacKinnon, director of Ranking Digital Rights and a member of CPJ's board of directors, who advocates for tech companies to engage in self-regulation that is transparent, accountable, and provides an avenue for remedy.
There are precedents for requiring internet giants to ban or remove fake news. The European Commission's Code of Conduct for Countering Illegal Hate Speech Online commits Facebook, Twitter, Microsoft, and YouTube to removing reported hate speech within 24 hours. Similarly, the so-called Right to be Forgotten created a mechanism for individuals to request content removal, and thus a mechanism for handling, assessing, and implementing complaints is in place. And Facebook, Google (including YouTube), Twitter, and Microsoft have publicly committed to sharing data on content they deem to be "extremist" and have removed from a given platform, demonstrating how hashes, essentially a form of digital fingerprinting, can be used to facilitate content removal across multiple platforms.
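The hash-sharing scheme described above can be sketched in a few lines. This is a simplified illustration only: it uses an exact cryptographic hash (SHA-256) as the "fingerprint," whereas the platforms' actual systems are proprietary and reportedly use perceptual hashes designed to tolerate minor edits. The function and database names here are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of content one platform
# has already removed (a stand-in for the companies' joint hash list).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest that serves as the content's 'fingerprint'."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """Platform A removes a piece of content and shares its hash."""
    shared_hash_db.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Platform B checks an upload against the shared hash list
    without ever seeing the original content itself."""
    return fingerprint(content) in shared_hash_db

report_removed(b"removed-video-bytes")
print(is_flagged(b"removed-video-bytes"))    # exact copy matches
print(is_flagged(b"slightly-edited-bytes"))  # any change defeats an exact hash
```

Note that only hashes, not the content itself, cross platform boundaries; that is what makes the arrangement attractive to companies and, as critics point out, opaque to outsiders, since a hash reveals nothing about what was removed or why.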
Currently the tech companies' initiatives are voluntary, but they could be made mandatory, which would set a troubling "precedent for cross-site censorship," according to the non-profit Center for Democracy and Technology. As former CPJ internet advocacy coordinator Danny O'Brien wrote while exploring whether selective censorship could prevent more widespread blocking: "What governments can oblige companies to do is heavily influenced by what the companies themselves have previously built."
Codes of conduct, internet referral units, and policies such as the "right to be forgotten" devolve to private companies the responsibility to implement vague requirements outside the rule of law or judicial oversight.
An algorithmic ombudsperson could assess the policies of private tech companies and the assumptions on which their algorithms are based to ascertain the impact on the public interest. And any remedy must be more effective than the black box that online complaint mechanisms currently resemble.
Tech companies are not waiting for government intervention. Twitter has stepped up its removal of offending accounts, including those belonging to the so-called alt-right in the U.S., and has refused to verify some accounts or has stripped their verification status, a symbol of authenticity, according to USA Today. Twitter closed the accounts of several people associated with the alt-right, including a technology journalist for Breitbart News and an executive at Business Insider, according to news reports.
Even as private companies dispute that they play the role of publisher, the major tech platforms such as Facebook, Google, and Yahoo all exert some form of editorial control over news content. They have adopted practices that are more similar than not to publishing, such as creating partnerships with journalistic organizations, fact-checking, removing or restricting content, and curating news.
Google and Facebook have updated their advertising policies to ban sites that traffic in misinformation and disinformation. Google has taken action against hundreds of sites, according to news reports. Facebook has implemented a new tool that allows users to flag "hoaxes" and has partnered with a handful of third-party fact-checking organizations to flag "disputed" news. Users who try to share this content will be notified, and the company will prevent these posts from being promoted or turned into ads, according to a press release about the changes.
"I think anytime you have a new political term that is being widely used, and especially when it's being invoked to address suppression of information, there's cause for real concern to the extent it's not defined," said journalist Glenn Greenwald, co-founder of The Intercept news website. "I still don't know what fake news is, kind of like terrorism," he said, adding that both are ambiguous terms that are too often undefined and provide cover for repressive tactics.