Robot learning: why learn, really?

Following my previous post on the topic of story lines used to motivate research into robot learning, I wonder about the validity of another meta-level assumption. Among a faction within the learning and robotics research community, it is commonly believed that robots must learn because 'hand-coded' strategies are simply bad: either because of the investment they require (time, manpower, etc.) or because things change.

Personally, I find the former argument rather weak, for two reasons. Firstly, when it is done properly, and for a large number of problems that really matter, the people who devise good strategies draw on a deep body of knowledge that yields very useful structure for the solutions being implemented. As I said in my previous post, the only way we know how to get learning to work well is by utilizing some of that same structure, albeit with a number of free knobs left open to tune the quantitative solution. Secondly, a surprisingly large number of people who try to apply learning to robotics do not seem to pay sufficient attention to any portion of this structure in the problem they are trying to solve (perhaps because of the sociological distance between the statistical and dynamical traditions).

So one of the most convincing reasons to use machine learning in robotics (or adaptive systems more generally) is to come to terms with contingencies, in applications where they are too numerous to enumerate (otherwise, it is always 'cheaper' to just hard-code!).

However, this brings up my real question. Much of statistical machine learning methodology does not directly address scenarios where contingencies are too numerous to be effectively enumerated (Savage's 'large world', where one can only cross a bridge when one comes to it; or scenarios involving 'Knightian uncertainty'). What is the agent supposed to do in these difficult situations?

Stated another way, it is one thing to replace analytical models with data-driven models, but that alone does not make it clear how one is to cope with the complexity of the global strategy, which was the original motivation for wanting to learn rather than hard-code*! One needs data to support good solutions in the face of such unknown unknowns. But, in practice, the only way the typical researcher gets good data is by anticipating nearly all of these contingencies (so the algorithm didn't honestly solve the most crucial part of the problem).

I have not yet come across a convincing general answer to this question. Depending on one's methodological slant, different compromises seem to make sense (e.g., if one's true focus is on devising statistical techniques, this may seem like a moot point… "Google doesn't really care about such problems"). However, as I said in my previous post, skirting such issues should count as bad form if one claims to care about the old-fashioned goals of AI. This is one of the biggies…

I do have my own take on how this issue could be resolved. In the end, it seems like it will require a sensible compromise between GOFAI’s ‘knowledge’ viewpoint and the more modern active statistical learning viewpoint, with a liberal mix of ideas from domains that have thought carefully about dynamic interactions and strategy (e.g., game theory). Properly fleshing it out is a major undertaking, but one that seems like a lot of fun!

* As people with suitable mathematical maturity will quickly appreciate, just because one can write down an explicit model of some parts of the system does not mean one can anticipate all of its implications in an open-ended scenario. The issue is clearly even more acute when one is dealing with semi-/non-parametric models learned from data!
