Machines got us into this mess, but humans may help get us out of it. In the wake of the 2016 election and the ongoing manipulation of news by foreign governments and domestic trolls, Google, Facebook and Twitter have endured intensifying criticism and had to face the limitations of their algorithms as arbiters of truth.
These issues have been well documented, and they arose again most recently in the wake of the Las Vegas and Texas church mass shootings. In each case, Google, YouTube and others were duped by parties seeking to spread false information about the shooter’s identity.
Among other efforts to combat the proliferation of fake news and false information in search, Google announced that it’s teaming up with The Trust Project, which is funded in part by Craig Newmark of Craigslist and run by the Markkula Center for Applied Ethics at Santa Clara University in Northern California.
In a blog post, Google explains:
The Project, which is funded by Google among others, has been working with more than 75 news organizations from around the world to come up with indicators to help people distinguish between quality journalism and promotional content or misinformation.
In a first step, the Project has released eight trust indicators that newsrooms can add to their content. This information will help readers understand more about what type of story they’re reading, who wrote it, and how the article was put together.
These trust indicators include details on how stories were researched, information about the author, and the journalistic standards the publication applies in compiling and supporting its stories.
As a practical matter, the trust indicators are not unlike ranking factors: they will largely be embedded in structured markup so that Google can read them as it crawls news sites. Google says it’s still figuring out “how to display these trust indicators next to articles that may appear on Google News, Google Search, and other Google products.”
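To make this concrete, here is a minimal sketch of what such markup could look like as schema.org JSON-LD embedded in an article page. The property names shown (publishingPrinciples, ethicsPolicy, correctionsPolicy) are existing schema.org properties associated with these kinds of disclosures, but the publication, author, and example.com URLs are placeholders, and the exact vocabulary the Trust Project and Google ultimately adopt may differ.

```html
<!-- Hypothetical trust-indicator markup for a news article page.
     Property names follow schema.org; specifics are illustrative only. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline",
  "author": {
    "@type": "Person",
    "name": "Jane Reporter",
    "sameAs": "https://example.com/staff/jane-reporter"
  },
  "publisher": {
    "@type": "NewsMediaOrganization",
    "name": "Example News",
    "publishingPrinciples": "https://example.com/about/standards",
    "ethicsPolicy": "https://example.com/about/ethics",
    "correctionsPolicy": "https://example.com/about/corrections"
  },
  "datePublished": "2017-11-16"
}
</script>
```

Because the indicators live in the page’s markup rather than its visible copy, Google can ingest them at crawl time and treat them like any other machine-readable signal, which is why they so closely resemble ranking factors.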
I applaud Google’s effort to separate credible from untrustworthy news and content, but I fear this system is misguided and too complex to accomplish its ultimate objective.
What should happen instead is clear third-party labeling or certification of publications as trustworthy, rather than exposing people to the underlying criteria and putting the burden on them to evaluate the credibility of the source. Publications should submit themselves for certification, and once they’re certified, Google should simply display the label.
Readers should be able to see what’s behind the labeling scheme, but they should also be able to tell at a glance whether an item comes from a credible source, rather than having to evaluate it against a range of factors that may be obscure to them.