Facebook's head of global policy management, Monika Bickert, testifies at a Senate hearing in January on monitoring extremist content online. Companies like Facebook and Google are at the forefront of how much of the world receives its news. (AFP/Getty Images/Tasos Katopodis)

Tweaking a global source of news

The only way Abdalaziz Alhamza and his fellow citizen journalists could get news out of the Islamic State’s self-declared capital in Syria to a global audience was by posting material on Facebook and YouTube. “They were the only way to spread news since many militias and governments prevented most, if not all, the independent media organizations to work in the conflict areas,” explains Alhamza, one of the co-founders of the group Raqqa Is Being Slaughtered Silently. “Without the social media platforms, the Arab Spring would be killed on the first day.”

Internet intermediaries are increasingly playing the role that publishers and editors once played. From selecting sources to curating trending news to deciding which news is real or fake, companies like Facebook and Google are at the forefront of how much of the world receives its news. Taken together, these internet giants are 10 times the size of the largest media organization of 15 years ago, according to media expert Robert McChesney.


The Reuters Institute 2017 Digital News Report found that more than half of all online users across the 36 countries surveyed said they use social media as a source of news each week, ranging from 76 percent in Chile to 29 percent in Japan and Germany. The report found Facebook and its subsidiary WhatsApp, in particular, played an increasingly significant role in news distribution, with 44 percent of people using Facebook as their news source and WhatsApp rivaling its parent company in several markets.

Far fewer respondents were able to recall which outlet provided the news, however, a problem for an industry that is increasingly forced to adapt to the logic of the social media platforms that are central to modern journalism. “They’re kind of our frenemies, because they carry our content, but we’ve been disintermediated from the relationship,” says Danielle Coffey of the News Media Alliance, which represents 2,000 US publications. “And they’re making up now the rules on what’s appropriate, what’s effective, what people should not get access to.”

The success of misinformation, counterfeit news, and “computational propaganda” on social media has highlighted the economic incentives embedded in these platforms, which not only helped “fake news” flourish but may even work against quality journalism. In seeking to combat the proliferation of “fake news,” Google and Facebook launched partnerships with fact-checking organizations, tweaked their algorithms, and sought ways to surface more authoritative content. The signals they use, however, may end up marginalizing outlets on the outer edges of the ideological spectrum, as well as freelance journalists, in favor of larger, more established, mainstream outlets.

The World Socialist Web Site noticed a massive drop in Google search referrals following the announcement of Project Owl in mid-2017, according to its editorial chairman. The site also found a significant drop in traffic to other “leading socialist, progressive and anti-war web sites,” including Democracy Now.

“There does appear to be a correlation between some of the updates Google has released and a drop in traffic on some of those sites,” says Eric Richmond, president of Expert SEO Consulting, which counts several media organizations among its clients. “Can I say that drop is due to a particular site or class of sites being targeted? No. Google doesn’t target specific sites with algorithmic changes.”

Google said in December that its trust and safety teams had manually reviewed nearly 2 million videos for violent extremist content in the previous five months to help train its machine-learning technology to identify similar videos in the future, and that it aims to hire 10,000 human moderators in 2018.

Facebook, meanwhile, has said that it provides human review of all content flagged for removal. With 2 billion monthly users worldwide, this involves tens of thousands of reviewers working in 40 languages who, according to a Facebook spokesperson, include native speakers and people with “market-specific knowledge.” But there is no way to independently audit the content removed by the platform, and Facebook often cites privacy concerns when researchers ask for greater access to data.

Until 2016, Facebook used a human team of editors to curate its Trending Topics feature. Leaked documents showed editorial intervention at several stages of the trending news operation, from decisions about “injecting” and “blacklisting” topics in the trending feed to judgments about which sources were authoritative and trustworthy. But accusations that it was suppressing conservative news led to a backlash against the internet behemoth. In response, Facebook dropped its human curators and switched to algorithmic curation.

In September, The Daily Beast reported that Facebook accounts reporting on or documenting what the UN has termed a “textbook case of ethnic cleansing” of the Muslim-minority Rohingya population in Myanmar were also being shuttered or removed. The company has not responded to questions about the political or ethnic makeup of the moderators deciding on content related to the crisis in Myanmar.

In late 2016, Facebook revised its community guidelines to allow graphic content that is “newsworthy, significant, or important to the public interest,” which would cover the conflicts in both Syria and Myanmar. But weeks before the wave of one-sided removals came to light, Facebook had placed the Arakan Rohingya Salvation Army, a designated terrorist group in Myanmar whose members claim to be freedom fighters, on its list of dangerous organizations prohibited from using the platform. Google likewise prohibits violent extremist content.

The largest internet firms have banded together to create a shared database of terrorist images and videos that they’ve removed from their platforms. Google News, which is entirely algorithmically generated, recently updated its guidelines to prohibit misrepresentation of ownership or country of origin, or the deliberate misleading of users.

But policymakers are not satisfied with these self-regulatory measures. Germany’s new Network Enforcement Act, known as NetzDG, requires major online platforms to remove “obviously illegal” content within 24 hours of notification or face fines of up to €50 million. And in September, the UK prime minister called for companies to remove extremist content shared by terrorist groups within two hours and to develop technology that prevents it from being shared in the first place.

[EDITOR’S NOTE: This article first appeared in Columbia Journalism Review’s “Winter 2018 Issue,” published in partnership with CPJ.]