Britannica CEO Talks Google, Wikipedia and What Lil Pump Can Teach Us About Credibility


There was a time not long ago when people claimed to trust Google news results and Wikipedia articles more than the media outlets they linked to. In other words, up until pretty recently Googling was synonymous with a question answered. But that may be starting to change, at least according to one survey, as public trust in just about anything written continues to erode.

It’s almost enough to make you long for our analog past, when print texts like the Encyclopedia Britannica were the foremost authority on any subject they chose to cover. That’s certainly the sense you get in speaking with Karthik Krishnan, Britannica’s current CEO, who today doesn’t think much of search engines or their results, and especially not of chief competitor Wikipedia.

In a wide-ranging interview with EdSurge on trust and deception on the World Wide Web that sometimes strayed into bizarre territory (such as sussing out the internet’s definitive source on Miami rap artist Lil Pump), Krishnan spoke about convenience over quality, the problems with crowdsourcing and why we should once again consider paying for information. The conversation has been edited for length and clarity.

As always, what you choose to trust is up to you.

EdSurge: Before this interview, I did a little reading about Britannica on the encyclopedia itself. But it turned out I could only read the first 100 words of your own entry when I entered from Google. Is that gated-information approach typical?

Krishnan: First off, I’m surprised our corporate information is gated. We actually have almost 80 percent of our content available for free right now. It’s part of an evolution where the business models are changing. We at Britannica realize that it’s not enough to provide good information; that good information also needs to be easily available. And today most people discover that information using search engines. So if it’s not available or not searchable, then even though you have all this great value to offer to the world, that goes away.

And people trust our information. Because today with the internet what ends up happening is when you do a search on any of the major search engines, they provide you relevance. But relevant content doesn’t mean that it’s right or reliable. And you have multiple cases where a lot of bad information shows up on the front page of all these search engine result pages. We take pride in the fact that we provide verified and trusted information. Our goal is to help people cut through the clutter.

Does that mean you’re putting a focus on search engine optimization (SEO)?

SEO is definitely an area that we are focusing on. And we recently launched a product called Britannica Insights, which is an extension to the Chrome browser. The extension shows information on the top right of the search engines’ results page. And we only display information when we have something relevant to offer.

If you’re doing a search on, say, the French Revolution, we not only give you the fact box and all the dates, we also talk about the different types of participants. So we are able to provide not only an answer, but give people an opportunity to understand the whole concept from a very different perspective.
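[Editor’s note: For readers curious about the mechanics of such an extension, the sketch below shows, in rough terms, how a browser extension’s content script can read the query from a search results page, check a lookup service for a matching entry, and inject a summary panel only when one exists. It is a minimal illustration under assumptions of our own: the example.com endpoint, the response shape and the "#rhs" sidebar selector are hypothetical and do not describe Britannica Insights’ actual implementation.

```typescript
// content-script.ts: a hypothetical sketch, not Britannica Insights' actual code.
// The lookup endpoint, response shape and "#rhs" selector are illustrative assumptions.

interface EntrySummary {
  title: string;
  summary: string;
  url: string;
}

// Read the user's query from the results-page URL (e.g. ?q=french+revolution).
function getQuery(): string | null {
  return new URLSearchParams(window.location.search).get("q");
}

// Ask a (hypothetical) lookup service whether an encyclopedia entry exists.
async function lookupEntry(query: string): Promise<EntrySummary | null> {
  const res = await fetch(
    `https://example.com/api/lookup?q=${encodeURIComponent(query)}`
  );
  if (!res.ok) return null; // no relevant entry: display nothing at all
  return (await res.json()) as EntrySummary;
}

// Inject a simple summary panel near the top right of the results page.
function renderPanel(entry: EntrySummary): void {
  const panel = document.createElement("div");
  panel.id = "insights-panel";

  const heading = document.createElement("h3");
  heading.textContent = entry.title;

  const summary = document.createElement("p");
  summary.textContent = entry.summary;

  const link = document.createElement("a");
  link.href = entry.url;
  link.textContent = "Read the full entry";

  panel.append(heading, summary, link);

  // "#rhs" stands in for the results page's right-hand column; fall back to <body>.
  (document.querySelector("#rhs") ?? document.body).prepend(panel);
}

(async () => {
  const query = getQuery();
  if (!query) return;
  const entry = await lookupEntry(query);
  if (entry) renderPanel(entry); // only show up when there is something relevant
})();
```

In a real extension, the manifest’s content_scripts section would restrict this script to search-result pages, and the lookup call would go to the publisher’s own service rather than the placeholder above.]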

When you use the extension, you can click a link that takes you to Britannica’s webpage. Was that part of a strategy to drive additional subscriptions to Britannica?

Any content that you click on and go to on the website is going to be available for free. So there is no compulsion for people to subscribe. We felt that to be true to our mission of elevating good information, we had to make a meaningful difference to the world. Right now there is no reason for people to subscribe. But longer term, I hope people will start gravitating toward a world where they feel they’re willing to pay for reliable and relevant information. If that’s not the case, then the world is going to get more into clickbait. Most of the search engines are actually throwing clickbait out there, and people are trying to monetize it with advertisements.


Who would you categorize as your ideal subscriber?

I would actually put the sweet spot around kids and parents and teachers. Because these are the people who are trying to provide great information. And most of our information is not newsy. It’s about the foundation of knowledge.

Students probably do understand one reliable source that comes up frequently in page-one results: Wikipedia. Why would they want to consider a subscription service in the age of the free encyclopedia?

As I said, most of our information is available for free on Britannica.com, even though the complete database is not available. The reason why you should consider paying is because with any kind of user-generated content, the answer could be right today and wrong tomorrow. Because there are competing factions that are going there and changing the information that’s out there. Do you really want to build your foundation of knowledge based on information that might not be true? And second, if you speak to most teachers they would say that Wikipedia is not a strong primary source of reference that should be cited.

What’s the solution for people who would never think to download an extension like this? Aren’t they the most at risk of falling for fake news?

I think one of the things that people value today is convenience. So why do people go to a search engine, or quickly ask Siri or Alexa? It’s convenient. People place a higher premium on convenience than they do the quality of information. That’s the world that we live in today.

We thought that by providing an easy tool that you install once, and that only shows up when we have relevant information to offer, we are starting to hit on the convenience dimension.

Some of the other things that we will be doing, which haven’t been launched yet, are focusing on making it convenient and easy for people to get to credible information. Because how often have you gone to the second page or the third page of a search engine’s results page?

Only when it’s about myself.

Right, when it’s about yourself. What ends up happening is that credible information, in a number of cases, is buried in the back pages. But you and I have no time. Even though you’re a journalist and I’m an educated person who believes in these things, I pretty much exhibit the same behavior you do.

At the same time, if [good information] shows up on the top right of the search engine’s results page, it’s hard for you to ignore it. But unfortunately the SEO systems work based on social algorithms, which seem to promote popularity as opposed to really good quality information.

What are your thoughts on crowdsourced editing, like the kind found on Wikipedia?

I think crowdsourced information is great. But let’s not pretend that everybody who goes onto the internet has good intentions. As a journalist, you adhere to a code of ethics, which says you’re going to represent things fairly. You’re going to be accurate and thorough in everything that you put out there. But unfortunately not everyone putting things out there adheres to that Hippocratic oath. And today people realize that they can actually use those platforms to push their own agendas. So ultimately that is the challenge that we’re trying to deal with. User-generated content has value, but at the same time, people have to use it with caution. And ultimately that’s not the mindset with which people operate.

[Editor’s note: At this point in the conversation, Krishnan decides to cement his point about the variability of internet credibility by asking me to perform a search to contrast two user-generated articles on the rapper Lil Pump.

[Image: Lil Pump, via Instagram]

It’s a tough lesson to conduct over the phone, but after some wrangling I find Lil Pump’s Wikipedia page and his entry on Everipedia, a for-profit wiki with looser standards for article creation popular with fan communities. Where Wikipedia sticks to a brief outline of the rapper’s career and arrest record, Everipedia opts for a deeper dive into his social media presence, touring schedule and affiliations with other minor celebrities. There’s also a gallery of fan art, a passport photo for the artist and an unsourced estimation of his net worth.

To Krishnan, those extras make Everipedia the superior authority on all things Lil Pump (especially given that he does not currently have his own Britannica entry). “When it comes to user generated content, some of it seems to be more valuable than others,” Krishnan says. “The point is that search engines are supposed to do a great job of surfacing great content. And probably Everipedia is on page five when you scroll through the search results.”]

You appear to be stating that there’s a lot of misinformation on sites like Wikipedia. But most of the information you will read on Wikipedia is correct.


So the question here is, it could be right today, it could be wrong tomorrow, right? I don’t know if you followed the case a few weeks ago about the California Republican Party. [Editor’s note: Because of an instance of Wikipedia vandalism, in May Google listed “Nazism” as a core ideology of the California Republican Party in its search results until corrected.]

The question here is, are you comfortable living in a world where you’re okay with information being right 80 percent of the time? Or have we as a human race dialed down our expectation to say that as long as it’s fine 80 percent of the time, we don’t care about the 20 percent of the time when it’s wrong? Is that the world that we need to be working toward, and is that good enough?

A 2005 study in Nature found a similar rate of “serious errors” between scientific articles in Wikipedia and Britannica. How do you guard against those errors in your process?

We go through a strict editorial fact-check process. Can errors happen? They can always happen. But are those errors based on proactive misinformation? The answer is no. Whereas with user-generated content the ability to provide misinformation proactively is extremely high.

And you’re also referring to a study that was done 13 years ago. The world has changed in the last 13 years. That might have been the case then with the comparison between Wikipedia and Britannica. But I think more and more, if you did that same study again, the results might be very different today. And I think you don’t have to look farther than the last few weeks. It’s not just that case of Nazism and California—there have been other cases that have surfaced as well.

People now feel that they can use these technologies as a platform to push their own agenda. It’s not because of Google or Wikipedia. It’s just the fact that people realize they can hijack these platforms to push their own interests. And search engine technology tends to surface this information without validating that information. To me, people have the right intent. I don’t think any search engine or any user-generated content site was built with the objective of duping the world. But unfortunately that’s what’s happening today.

This is strong rhetoric. Duping the world? It almost seems like scaremongering…

I don’t think it’s scaremongering. At the same time, we’re not saying, ‘Don’t go to that source.’ I think each of us [Britannica and Wikipedia] has a value. We’re suggesting that people go to an information source that has some standards, something that you can trust.

We all look at the top few search results, and the question we are asking ourselves is, “Should we actually take the time to make sure that the information we are consuming is valid?” We never came out and said, “This is the only solution.” Our goal is to help create tools and technologies that will help surface better information. And if it’s easy for you, we hope you will take that opportunity. And if you choose not to, there’s nothing we can do.


