A Conversation with Pieter Abbeel – Gigaom


About this Episode

Episode 93 of Voices in AI features Byron speaking with Berkeley Robotic Learning Lab Director Pieter Abbeel about the nature of AI, the problems with creating intelligence and the forward trajectory of AI research.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today I'm super excited we have Pieter Abbeel. He is a professor at UC Berkeley. He's the president, founder and chief scientist of Covariant.ai. He is the founder of Gradescope. He holds an undergraduate degree in electrical engineering, an MS degree from Stanford, a PhD from Stanford in computer science and probably a whole lot more. This is gonna be an exciting half hour. Welcome to the show, Pieter.

Pieter Abbeel: Thanks for having me, Byron.

There are all these concepts like life and death and intelligence that we don’t really have consensus definitions for. Why can’t we come up with a definition of what intelligence is?

Yeah, it’s a good question. I feel like traditionally we think of intelligence as the things that computers can’t do yet. And then all of a sudden when we manage to do it, we understand how it works and we don’t think of it as that intelligent anymore, right? It used to be, okay, if we can make a computer play checkers, that would make it intelligent, and then later we’re like, ‘Wait, that’s not enough to be intelligent,’ and we keep moving the bar, which is good to challenge ourselves, but yeah, it’s hard to put something very precise on it.

Maybe the way I tend to think of it is that there are a few properties you really want to be true for something to be intelligent, and maybe the main one is the ability to adapt to new environments and achieve something meaningful in new environments that the system has never been in.

So I’m still really interested in this question of why we can’t define it. Maybe… you don’t have any thoughts on it, but my first reaction would be: if there’s a term you can’t define, maybe whatever it is doesn’t actually exist. It doesn’t exist; there’s no such thing, and that’s why you can’t define it. Is it possible that there’s no such thing as intelligence? Is it a useful concept in any way?

So I definitely think it’s a useful concept. I mean, we definitely have certain metrics related to it that matter. If we think about it as absolute, is it intelligent or not, then it’s very hard. But I think we do have an understanding of what makes something more intelligent versus less intelligent. Even though we might not call something intelligent just because it can play checkers, it’s still more intelligent when it’s able to play checkers than when it’s not. It’s still more intelligent if, let’s say, it can navigate an unknown building and find something in that building, than when it cannot. It’s more intelligent if it can acquire the skill to play a new game it’s never seen before: you just present it with the rules and it figures out on its own how to play well. Which is essentially what AlphaGo Zero did, right? It was given the rules of the game but then just played itself to figure out how to play it maximally well. And so I think all of those things can definitely be seen as more intelligent if you can do them, than if you cannot do them.
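As a rough illustration of the self-play idea described here (a minimal sketch, not AlphaGo Zero’s actual method, which pairs deep neural networks with Monte Carlo tree search), the toy Python program below gives a tabular agent only the rules of single-pile Nim and lets it learn a strong policy purely by playing against itself. The game choice, pile size and learning constants are all illustrative assumptions:

```python
# Toy self-play learner for single-pile Nim: players alternately remove 1 or 2
# stones, and whoever takes the last stone wins. The agent is given only these
# rules and learns move values from games played against itself.
import random
from collections import defaultdict

PILE = 10                # starting pile size (illustrative choice)
ACTIONS = (1, 2)         # legal moves
ALPHA, EPSILON = 0.1, 0.2

Q = defaultdict(float)   # Q[(pile, action)] -> estimated value for the player to move

def choose(pile, greedy=False):
    """Pick a legal action, epsilon-greedily over the value table."""
    legal = [a for a in ACTIONS if a <= pile]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

for episode in range(50_000):
    pile, history = PILE, []
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    # The player who took the last stone wins (+1); the other player's moves
    # get -1. Walk the game backwards, flipping the sign each move.
    value = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (value - Q[(state, action)])
        value = -value

# With enough self-play the greedy policy approaches the known optimal
# strategy for this game: whenever possible, leave the opponent a multiple of 3.
print([choose(p, greedy=True) for p in range(1, PILE + 1)])
```

The only game-specific knowledge is the rules encoded in the loop; the learning update itself knows nothing about Nim, which is the sense in which such a system can be handed a new game and figure it out for itself.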


So we have, of course, narrow intelligence, to use this construct, which is an AI that we train to do one thing. And right now a technique we’re using, and having some success with, is machine learning, a method which philosophically says, “Let’s take data about the past and project it into the future.”

And then there’s this idea of a general intelligence, which is somebody as versatile as you and [me]; it’s what we see in the movies. Is it possible those two technologies have nothing in common at all, that they share no code whatsoever? Because there’s a vague sense that we get better and better at narrow AI and it gets a little less narrow, then you know it’s AlphaGo, then it’s AlphaGo Zero, then it’s AlphaGo Zero Plus and eventually it’s [HAL]. But is it possible they aren’t even related at all?

That’s a good question. I think the thing about more specialized systems, whether it’s, let’s say, learning to play games or a robot learning to manipulate objects, which we do a lot of at Berkeley, is that what we can get to succeed today often tends to be somewhat narrow. If a neural net was trained to play Go, that’s what it does; if it was trained to stack Lego blocks, that’s what it does. But I think at the same time, the techniques we tend to work on, and by ‘we’ I mean not just me and my students but the entire community, are techniques where we have a sense that they would be more generally applicable than the domains we’re currently able to achieve success in.

So for example, we look at reinforcement learning and its underlying principles. We could look at individual successes, where a neural net was trained through reinforcement learning for a very specific task, and of course those neural nets are very specific to their domains, whether that’s games or robotics or another domain, and within those domains very specific to something like the game of Go or Lego block stacking or peg insertion and so forth.

But I think the beauty still is that these ideas are quite general, in that the same algorithm can then be run again, for example to have a robot learn to clean up a table. So I think there is a level of generality ‘under the hood’ in what’s doing the training of these neural nets, even if the resulting neural net often ends up being a little specialized.
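That “general algorithm, specialized result” point can be made concrete with a small sketch. The Python below is a minimal REINFORCE (policy gradient) training loop written against the Gymnasium library; the CartPole task, the linear softmax policy and the hyperparameters are illustrative assumptions, not anything specific from the conversation. The point is that the same train function, unchanged, can be pointed at different discrete-action environments, with each run producing a policy specialized to its task:

```python
# A generic policy-gradient (REINFORCE) loop: nothing in it is specific to
# any one task except the environment id passed in.
import numpy as np
import gymnasium as gym

def train(env_id, episodes=2000, lr=0.01, gamma=0.99, seed=0):
    """Train a linear softmax policy on any Gymnasium env with a discrete action space."""
    rng = np.random.default_rng(seed)
    env = gym.make(env_id)
    n_obs = env.observation_space.shape[0]
    n_act = env.action_space.n
    W = np.zeros((n_obs, n_act))                 # policy parameters

    for _ in range(episodes):
        obs, _ = env.reset()
        grads, rewards, done = [], [], False
        while not done:
            logits = obs @ W
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            action = rng.choice(n_act, p=probs)
            # gradient of log pi(action | obs) for a linear softmax policy
            grads.append(np.outer(obs, np.eye(n_act)[action] - probs))
            obs, reward, terminated, truncated, _ = env.step(action)
            rewards.append(reward)
            done = terminated or truncated
        # accumulate discounted returns and take one gradient step per time step
        G = 0.0
        for grad, r in zip(reversed(grads), reversed(rewards)):
            G = r + gamma * G
            W += lr * G * grad
    return W

# The same function could be handed a different environment id; the algorithm
# is general even though each learned W is specialized to its task.
policy = train("CartPole-v1")
```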


However, you know, I just heard an interview you gave where you were talking about the case that if you gave a narrow AI a bunch of data about planetary motion, it could predict the next eclipse and the next one and the next million. But if all of a sudden a new moon appeared around Jupiter and you said, “What’s that going to do to planetary motion?” it wouldn’t know, because all it can do is take data about the past and make predictions about the future. And isn’t that simple idea, take data about the past and make projections about the future, not really the essence of what intelligence is about?

Yeah. So what you’re getting at here is, to be fair, it’s not something that humans figured out very easily either. I mean it’s only when Newton came about [that] we started as humanity to understand that there is this thing called gravity and it has laws and it governs how planets and stars and so forth move around in space. And so it’s one of those things where, definitely right now I suspect if we just gave a massive neural network (without putting any prior information in there about what we already learned about how the world works), a bunch of data about planetary motion, it’s not very likely it would discover that.

I think it’s not unreasonable that that’s hard to do, because humans didn’t discover it ‘til very late either, in terms of the time of our civilization, and it took a very exceptional person at that time to figure it out. But I do think those are the kind of things that are good motivators for the work we do, because what it points to is something called Occam’s Razor, which says that the simplest explanation of the data is often the one that will generalize the best. Of course, ‘define simple’ is not easy to do, but there is a general notion that the [fewer] equations you might need and the [fewer] variables that might be involved, the simpler the explanation, and so the more likely it would generalize to new situations.
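A small numerical sketch of that Occam’s Razor intuition, using made-up synthetic data rather than anything from the conversation: fit the same noisy observations with a simple model and a much more flexible one, then ask both to extrapolate to inputs they have never seen.

```python
# Simple-versus-complex fits on synthetic data: the flexible model matches the
# training points more closely but extrapolates far worse.
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 3.0 * x + 1.0                    # the underlying "law" (assumed for this toy)

x_train = rng.uniform(0, 1, 10)
y_train = truth(x_train) + rng.normal(0, 0.2, 10)  # noisy past observations
x_test = rng.uniform(1, 2, 100)                    # "new situations" outside the training range
y_test = truth(x_test)

for degree in (1, 9):                              # few parameters vs. many parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, extrapolation MSE {test_mse:.2f}")

# Typically the degree-9 fit wins on the training data but blows up outside it,
# while the simpler degree-1 model stays close to the truth.
```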

And so I think the laws of physics are kind of an extreme, really nice example of coming up with a very, very simple, low-dimensional description of a very large range of phenomena. Then, yes, I don’t think neural nets have done that yet. I mean, of course there’s work going in that direction, but often people will build in the assumptions and say, “Oh, it does better when it has the assumptions built in.” That’s not a bad thing if you want to solve one problem, but it’s not necessarily the way you have intelligence emerge in the sense that we might want it to emerge.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


