Earlier this week, Facebook announced it would be reintroducing human editors to curate News Tab, a dedicated news section. The company is pursuing partnerships that would allow it to promote articles from publications like The New York Times, The Wall Street Journal, The Washington Post, and The Atlantic.* According to a Wall Street Journal report, the talks are ongoing.
Facebook has been unusually precise about what its human editors will do and what its algorithms will do. Top stories and breaking news will be chosen by human editors, while most of the content will be served algorithmically, based on the data Facebook already has on you.
News Tab is Facebook’s chance to reboot its approach to news. In many ways, it can be seen as an act of atonement for the company’s attention-optimization strategy, as well as for its now-defunct Trending Topics product. Reports of the News Tab feature came just as Facebook released the results of its conservative-bias audit, which polled 133 conservative lawmakers and groups about their concerns over how the platform treats right-wing content (and did not provide any evidence that such content is disadvantaged on Facebook). In 2016, a Gizmodo report alleged that human editors on the Trending Topics team routinely suppressed conservative news sources. Facebook responded at the time by saying it takes “allegations of bias” seriously and that its editorial guidelines did not “permit the suppression of political perspectives [or] the prioritization of one viewpoint over another.”
The bias claims put Facebook in a difficult position. Over the years, the CEO Mark Zuckerberg has touted algorithms as an effective means of deterring hate speech and fake news. In tense meetings with Republican lawmakers, Zuckerberg has pivoted away from the Trending Topics fiasco and deemphasized the role of humans in the company’s moderation systems. The underlying message is that human involvement risks introducing bias or subjectivity, so critical projects such as filtering out hate speech should involve humans as little as possible.
“We need to rely on and build sophisticated AI tools that can help us flag certain content,” Zuckerberg said at a congressional hearing last year. Faced with a dozen examples from committee members of conservative content being erroneously taken down, Zuckerberg promised that the company would rely more heavily on AI. Noting that ISIS-related content is flagged by AI before human moderators ever see it, he said, “I think we need to do more of that.”
For years, Facebook has selectively emphasized and deemphasized humans’ role in its algorithmic systems to its own advantage. With News Tab, it casts human editors as a safeguard against serving popular but unverified content. By contrast, in discussing content moderation, Zuckerberg has emphasized the role of AI, perhaps to deflect criticism about the toll that work takes on the people who do it. When ProPublica found that Facebook allowed advertisers to buy ads that targeted “Jew haters,” the social network pointed the finger at algorithms it said weren’t programmed to filter hate speech.** When Trending Topics surfaced viral videos of the ice-bucket challenge but not of the Black Lives Matter protests, Facebook recruited humans to determine news “relevance.”