A Conversation with Norman Sadeh


About this Episode

Episode 90 of Voices in AI features Byron speaking with Norman Sadeh from Carnegie Mellon University about the nature of intelligence and how AI affects our privacy.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese, and today my guest is Norman Sadeh. He is a professor at the Carnegie Mellon School of Computer Science. He’s affiliated with CyLab, which is well known for its seminal work in AI planning and scheduling, and he is an authority on computer privacy. Welcome to the show.

Carnegie Mellon has this amazing reputation in the AI world. It’s arguably second to none. There are a few university campuses that seem to really… there’s Toronto and MIT, and in Carnegie Mellon’s case, how did AI become such a central focus?

Norman Sadeh: Well, this is one of the birthplaces of AI, and so the people who founded our computer science department included Herbert Simon and Allen Newell, who are viewed as two of the four founders of AI. And so they contributed to the early research in that space. They helped frame many of the problems that people are still working on today, and they also helped recruit many more faculty over the years who have contributed to making Carnegie Mellon the place that many people refer to as the number one place for AI here in the US.

Not to say that there are not many other good places out there, but CMU is clearly a place where a lot of the leading research has been conducted over the years. Whether you are looking at autonomous vehicles – for instance, I remember when I came here to do my PhD back in 1997, there was research going on in autonomous vehicles. Obviously the vehicles were a lot clumsier than they are today, not moving quite as fast, but there’s a very, very long history of AI research here at Carnegie Mellon. The same is true for language technology, the same is true for robotics, you name it. There are lots and lots of people here who are doing truly amazing things.

When I stop and think about [how] 99.9% of the money spent in AI is for so-called Narrow AI—trying to solve a specific problem, often using machine learning. But the thing that gets written about and is shown in science fiction is ‘general intelligence,’ which is a much more problematic topic. And when I stop to think about who’s actually working on general intelligence, I don’t actually get too many names. There’s OpenAI, Google, but I often hear you guys mentioned: Carnegie Mellon. Would you say there are people there thinking in a serious way about how to solve for general intelligence?

Absolutely. And so going back to our founders again, Allen Newell was one of the first people to develop what you referred to as a general theory of cognition, and obviously that theory has evolved quite a bit, and it didn’t include anything like neural networks. But there’s been a long history of efforts on working on general AI here at CMU.

And you’re completely right that, as an applied [science] university also, we’ve learned that just working on these long-term goals is not necessarily the easiest way to secure funding, and that it really pays to also have shorter-term objectives along the way, accomplishments that can help motivate more funding coming your way. And so, it is absolutely correct that many of the AI efforts that you’re going to find, and that’s also true at Carnegie Mellon, will be focused on more narrow types of problems, problems where we’re likely to be able to make a difference in the short to mid-term, rather than just focusing on these longer and loftier goals of building general AI. But we do have a lot of researchers also working on this broader vision of general AI.

And if you were a betting man and somebody said, “Do you believe that general intelligence is kind of an evolutionary [thing]… that basically the techniques we have for Narrow AI are going to get better and better and better, with bigger datasets, and we’re going to get smarter, and that it’s gradually going to become a general intelligence?”

Or are you of the opinion that general intelligence is something completely different from what we’re doing now—that what we’re doing now is just simulated intelligence, where we kind of fake it because it’s so narrowly scoped to specific tasks? Do you think general AI is a completely different thing, or will we gradually get to it with the techniques we have?


So AI has become such a broad field that it’s very hard to answer this question in one sentence. You have techniques that have come out under the umbrella of AI that are highly specialized and that are not terribly likely, I believe, to contribute to a general theory of AI. And then you have, I think, broader techniques that are more likely to contribute to developing this higher level of functionality that you might refer to as ‘general AI.’

And so, I would certainly think that a lot of the work that has been done in deep learning and neural networks, with obviously a number of additional developments and inventions that people still have to come up with, has a much better chance of getting us there over time than perhaps more narrow, yet equally useful, technologies that have been developed in fields like scheduling and perhaps planning, and perhaps other areas of that type, where there have been amazing contributions but it’s not clear how those contributions will necessarily lead to a general AI over the years. So, a mixed answer, but hopefully…

You just made passing reference to the idea that ‘AI’ means so many things and is such a broad term that it may not even be terribly useful, and that comes from the fact that intelligence is something that doesn’t have a consensus definition. Nobody agrees on what intelligence is. Is that meaningful? Why is it that we don’t even agree on what something so intrinsic to humans, intelligence, actually is? What does that mean to you?

Well, it’s fascinating, isn’t it? There used to be this joke, and maybe it’s still around today, that AI was whatever it is that you could not solve, and as soon as you solved it, it was no longer viewed as being AI. So in the ’60s, for instance, there was this program that people still often talk about called ELIZA…

Weizenbaum’s chatbot.

Right, exactly, a simple Rogerian therapist, basically a collection of rules that was very good at sounding like a human being. Effectively, what it was doing was paraphrasing what you would tell it and saying, “Well, why do you think that?” And it was realistic enough to convince people that they were talking to a human being, while in fact they were just talking to a computer program. And so, if you had asked people who had been fooled by the system whether they were really dealing with AI, they would have told you, “Yes, this has to be AI.”
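To make that trick concrete, here is a minimal, hypothetical sketch of an ELIZA-style responder in Python. The rules are invented for this example and are not Weizenbaum’s original script; the point is simply that a handful of pattern-matching rules can paraphrase the user’s input back as a question, which is all the “therapist” is doing.

```python
import re

# Invented, illustrative rules (not Weizenbaum's original ELIZA script).
# Each rule pairs a pattern with a template that paraphrases the match
# back to the user as a question.
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i think (.+)", "Why do you think {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]
FALLBACK = "Why do you say that?"  # keeps the conversation going

def respond(utterance: str) -> str:
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
```

Even a toy like this can sound surprisingly human in short exchanges, which is exactly why people in the ’60s were willing to call it intelligent.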

Obviously we no longer believe in that today, and we place the bar a lot higher when it comes to AI. But there is still that tendency to think that somehow intelligence cannot be reproduced, and surely if you can get some kind of computer or whatever sort of computer you might be talking about to emulate that sort of functionality and to produce that sort of functionality, then surely this cannot be intelligence, it’s got to be some kind of a trick. But obviously, if you also look over the years, we’ve gotten computers to do all sorts of tasks that we thought perhaps were going to be beyond the reach of these computers.

And so, I think we’re making progress towards emulating many of the activities that would traditionally be viewed as being part of human intelligence. And yet, as you pointed out at the beginning, I think there is a lot more to be done. So common sense reasoning, general intelligence, those are the more elusive tasks, just because of the diverse facility that you need to exhibit in order to truly be able to reproduce that functionality in a scalable and general manner, and that’s obviously the big challenge for research in AI over the years to come.

Are we going to get there or not? I think that eventually we will. How long it’s going to take us to get there, I wouldn’t dare to predict, but I think that at some point we will get there. At some point we will likely build – and we’ve already done that in some fields – functionality that exceeds the capability of human beings. We’ve done that with facial recognition, we’ve done that with chess, we’ve done that actually in a number of different sectors. We’re not quite there yet, but we might very well at some point get there in the area of autonomous driving as well.

So you mentioned common sense, and it’s true that with every Turing-test-capable chatbot I come across, I ask the same question, which is, “What’s bigger, a nickel or the Sun?” And I’ve never had one that could answer it, because ‘nickel’ is ambiguous… That seems to a human to be a very simple question, and yet it turns out it isn’t. Why is that?


And I think at the Allen Institute, they’re working on common sense and trying to get AI to pass like 5th grade science tests, but why is that? What is it that humans can do that we haven’t figured out how to get machines to do that enables us to have common sense and them not to?

Right. So, amazingly enough, when people started working in AI, they thought that the toughest tasks for computers to solve would be tasks such as doing math or playing a game of chess. And they thought that the easiest ones would be the sorts of things that kids, five-year-olds or seven-year-olds, are able to do. It turned out to be the opposite: the kinds of tasks that a five-year-old or a seven-year-old can do are still the tasks that are eluding computers today.

And a big part of that is common sense reasoning, and that’s the state of the art today. We’re very good at building computers that are going to be ‘one-track mind’ types of computers, if you want. They’re going to be very good at solving these very specialized tasks, and as long as you keep on giving them problems of the same type, they’re going to continue to do extremely well, and actually better than human beings.

But as soon as you fall out of that sort of well-defined space, and you open up the set of contexts and the set of problems that you’re going to be presenting to computers, then you find that it’s a lot more challenging to build a program that’s always capable of landing back on its feet. That’s really what we’re dealing with today.

Well, you know, people do transfer learning very well, we take the stuff that we…

With occasional mistakes too, we are not perfect.

No, but if I told you to picture two fish, one swimming in the ocean, and one the same fish in formaldehyde in a laboratory, it’s safe to say you don’t sit around thinking about that all day. And then I say, “Are they at the same temperature?” You would probably say no. “Do they smell the same?” No. “Are they the same weight?” Yeah. And you can answer all these questions because you have this model, I guess, of how the world works.

That’s right.

And why are we not able yet to instantiate that in a machine, do you think? Is it that we don’t know how, or we don’t have the computers, or we don’t have the data, or we don’t know how to build an unsupervised learner, or what?

So there are multiple answers to this question. There are people who are of the view that it’s just an engineering problem, and that if in fact, you were to use the tools that we have available today, and you just use them to populate these massive knowledge bases with all the facts that are out there, you might be able to produce some of the intelligence that we are missing today in computers. There’s been an effort like that called Cyc.

I don’t know if you are familiar with Doug Lenat, but he’s been doing this for, I don’t know how many years at this point, I’m thinking something close to 30-plus years, and he’s built a massive knowledge base, actually with some impressive results. And at the same time, I would argue that it’s probably not enough. It’s more than just having all the facts; it’s also the ability to adapt and the ability to discover things that were not necessarily pre-programmed.
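As a rough illustration of the “populate a knowledge base with facts” approach Sadeh is describing, here is a toy Python sketch. The facts and predicate names are invented for the example, and Cyc’s actual representation is far richer; the sketch just shows how a system like this can answer the earlier nickel-versus-Sun question, but only for things someone thought to write down.

```python
# Toy knowledge base: ground facts stored as (relation, subject, object).
facts = {
    ("bigger_than", "sun", "house"),
    ("bigger_than", "house", "nickel_coin"),
}

def bigger_than(a: str, b: str) -> bool:
    """Answer by chaining stored bigger_than facts (transitive closure)."""
    if ("bigger_than", a, b) in facts:
        return True
    return any(
        x == a and bigger_than(y, b)
        for (rel, x, y) in facts
        if rel == "bigger_than"
    )

print(bigger_than("sun", "nickel_coin"))   # True, by chaining two facts
print(bigger_than("nickel_coin", "sun"))   # False: no supporting chain
```

The limitation Sadeh points to is visible immediately: the system only knows what was explicitly asserted, and it has no way to adapt or discover anything that was not pre-programmed.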

And that’s where I think these more flexible ways of reasoning that are also more approximate in nature and that are closer to the types of technologies that we’ve seen developed under the umbrella of neural networks and deep learning, that’s where I think there’s a lot of promise also. And so, ultimately I think we’re going to need to marry these two different approaches to eventually get to a point where we can start mimicking some of that common sense reasoning that we human beings tend to be pretty good at.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.




