Robot learning: quality vs. quantity

There is this little question I often wonder about, typically triggered by some variant of the following statement that robot learning folks* throw in at the beginning of talks – “humans are very quick to learn new tasks whereas robots struggle, so we need new advances that make machine learning more spiffy …”

Is this really true?

Partly based on informal observations of my 3-year-old daughter and partly based on more formal thinking about my past research, I feel that the real bottleneck is not speed. Indeed, I don’t think that humans are particularly efficient at learning a large number of skills, if time is the primary metric. For many categories of general and ‘interesting’ tasks, humans and many animals are notoriously slow learners. My daughter ‘wasted’ a good portion of her first year just flailing about before she even managed to sit up straight**. Compare that to robots that ‘learn to walk in 20 minutes’! Similarly, it takes decades of schooling to produce a crop of students who are, on average, half decent at tasks that machine learning programs pick up in (relatively) no time at all. So, what gives?

My view is that humans and animals spend a large portion of their early lives focused on learning to make sense of a very complex world in which they wish to get by using a few general-purpose devices (hands, legs, eyes, ears, …). If I may be permitted to take a certain methodological stance, I would say that they are preparing to play a very elaborate infinite-state, imperfect- and incomplete-information game against an ever-changing and cunning adversary. Their goal is to first use their experience to construct a set of representations that could, “at run time”, enable them to efficiently synthesize solutions to novel problems. So, the efficiency is of a special kind – the ability to quickly synthesize robust solutions at run time, but not necessarily with very constrained learning prior to that. Similarly, once humans learn something, they have the ability to quickly adapt what they’ve learned to a changing world – their representations support that. So, the statement that is commonly made, comparing humans and robots, is a comparison of apples and oranges – quite meaningless. If you gave the machine learning program good representations, then fairly primitive and crude learning techniques would suffice. Otherwise, every incremental unit of efficiency improvement is hard-won in a very painful way. One could perhaps make similar arguments against some of the other quantitative metrics driving machine learning research, e.g., prediction accuracy. Humans are demonstrably bad at a whole variety of detailed prediction challenges but are still able to get by remarkably well in an overall sense. To paraphrase John Maynard Keynes: it is more important to do roughly the right kind of thing all the time than to do precisely the right thing sometimes (… and precisely the wrong thing the rest of the time).

In this vein, the most important challenge facing researchers of autonomous agents and intelligent robots (assuming they care about the old-fashioned goals of AI) is that of identifying and inducing good, robust (task-independent) representations, i.e., concisely summarizing experience in a way that makes sense for future experiences to come – making online, task-specific decision making that much more reliable and efficient.

* Disclosure: I am sure I began some of my own talks as a PhD student in the same way…

** Counterpoint that the reader may offer: Some animals are very quick to learn special skills, e.g., some four-legged animals in the Sahara can walk within minutes of birth.

Response: There are always going to be things that are ‘hard-coded’ by evolution, or coded in such a way that the module is good to go with very limited parameter tuning. This is in the same vein as admitting that one could avoid solving certain types of optimization problems by explicitly encoding the solution as a mechanical system and ‘simulating forward’. True, but this does not explain the general case, which is what I am asking about. I doubt that there is any animal on earth that has learned professional soccer skills in a few minutes!

*** Another common counterpoint: Humans can remember people’s appearance with just the most cursory of glimpses – well enough to help police track down vicious criminals. So, isn’t that a speed issue?

Response: Here again, infants are quite bad at this same task for a very large portion of their early years. The skill we are talking about is not unlike the professional soccer example. Once a certain level of maturity has been attained, it is not hard to learn a new variation – so there may well be a cascade effect. However, it seems plausible that the core of this edifice contains the same ingredients as above – good, solid representations that can support a general class of tasks and, somewhat more importantly, efficient representation discovery mechanisms.
