Defining the Question

Different researchers have different ways of coming up with the questions that drive their work, especially the big defining themes that organize the many little projects. For AI researchers, a popular approach is to think about various aspects of intelligence, as exhibited in the natural world, and to ask how to explain, understand, or recreate parts of it. What follows is one installment of an ongoing attempt to sketch my chosen (biased and necessarily incomplete) view on this.

For me, the need for ‘intelligence’ is closely tied to the need for clever variation in dealing with a hostile, arbitrarily complicated and effectively infinite world. Autonomous agents (animals, people, robots, trading or bidding agents) must exist in a world that is constantly changing, in ways that are not easy to enumerate, and in a closed-loop setting where one can’t separate the acquisition of experience, learning and decision making.

What does this mean in practice? I like to think of it in terms of a game, played out continually over the entire lifetime of an agent. An agent is born into an unknown world (endowed with some level of preliminary skill – ‘level zero’, i.e., no skill at all, being a perfectly valid option) and finds itself facing an arbitrary and effectively infinite set of games – some easy, some hard, some downright lethal in high doses. Any instance of a game must be solved using the agent’s bounded resources within a suitably game-dependent finite interval (after which, the world being non-stationary, the details of the game would have changed anyway). The only way for the agent to play the more complex and ‘dangerous’ games is to reuse its knowledge from simpler games. However, information about this setup is only available through experience. The agent needs algorithmic procedures for figuring out how to navigate this world and for incrementally becoming more skilled.
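The setup above can be caricatured as a minimal agent–environment loop. To be clear, this is a toy sketch of my own making, not an algorithm from the literature: the skill numbers, the success rule and the ‘budget’ mechanics are all illustrative assumptions.

```python
import random

random.seed(0)  # reproducible toy run

class Agent:
    """A toy lifelong learner: one scalar 'skill', grown by playing games."""

    def __init__(self, skill=0.0):
        self.skill = skill  # 'level zero' is a perfectly valid starting point

    def attempt(self, difficulty, budget):
        # Each game must be solved within a finite, game-dependent budget;
        # success depends on accumulated skill plus some luck (assumed rule).
        for _ in range(budget):
            if self.skill + random.random() > difficulty:
                self.skill += 0.1 * difficulty  # reuse: wins make later games easier
                return True
        return False  # budget exhausted; the world moves on

agent = Agent()
# An (here, finite) stream of games, sorted easy-to-hard: the sorting is
# what lets knowledge from simpler games unlock the harder ones.
games = sorted(random.uniform(0.5, 3.0) for _ in range(50))
solved = [g for g in games if agent.attempt(g, budget=20)]
print(f"solved {len(solved)} of {len(games)} games, final skill {agent.skill:.2f}")
```

The point of the sketch is the ordering: with the stream shuffled so that hard games arrive first, the same agent fails early and accumulates little – the incremental, experience-driven structure is doing the real work.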

All this sounds terribly abstract but I claim that it happens all the time. A favorite example I often quote to my students is the CO2 filter scene from the movie Apollo 13:

Here is another very relevant one – if you are impatient, skip to the final 15 seconds of the clip.

Yes, these are unusual scenarios, in the sense that normal people are rarely called upon to solve problems in such dramatic ways. However, people do rise to challenges (see below). And yes, this specific problem could be posed as an instance of state-space search, which, as we all know after chess, is an incomplete account of intelligent behavior. However, that is only true if I literally put a pile of stuff on the table and say ‘find me a solution’. If, instead, I ask more open-ended questions with potentially infinite solutions, I am not just beating the familiar, possibly lifeless, horse.
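For concreteness, here is what the closed-world version – the ‘pile of stuff on the table’ formulation I just argued is too narrow – looks like as classical state-space search. The parts and goal are invented purely for illustration; states are sets of assembled parts and each action adds one part.

```python
from collections import deque

# A fixed, fully enumerated world: this enumeration is exactly what the
# open-ended version of the problem does NOT hand you.
parts = {"hose", "sock", "duct_tape", "cover", "bag"}
goal = frozenset({"hose", "sock", "duct_tape"})

def bfs(parts, goal):
    """Breadth-first search over subsets of parts; returns a shortest plan."""
    start = frozenset()
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # goal test: all required parts assembled
            return plan
        for part in parts - state:  # actions: add any unused part
            nxt = state | {part}
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [part]))
    return None  # no plan exists in this world

plan = bfs(parts, goal)
print(plan)
```

This works only because the state space is small, closed and handed to the solver in advance; the open-ended version of the question offers no such enumeration, which is precisely the gap at issue.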

So, notwithstanding some potential criticisms, the question remains – what kind of learning algorithms do we need that can incrementally learn from a lifetime of such encounters to one day solve problems in genuinely creative and original ways? Such a learning algorithm must surely be one of the more important parts of the final solution to the AI question.

To support my claim that such scenarios are not all that unusual, I offer the following pictures of my daughter as I found her this morning when I woke up:

Apparently, she had come to the conclusion that she and her teddy bear were in need of a tent. We don’t actually have one easily available in our living room, so she set about making one. She put some of her chairs together, connected them with a towel for a roof, lined up some cushions for a wall and even set up two other cushions as doors through which they could crawl in and out.

Admittedly, this is not rocket science. Lots of kids, and most adults, can do this with ‘limited’ cognitive effort. However, there is no denying that she did something involving open-ended ‘reasoning’ about an arbitrary, large collection of objects and what they could potentially do – outside of the normal circumstances in which they are usually found. Anyone who has played ‘pretend’ games with little kids knows how all this works. I claim that these behaviors are the ‘drosophila and yeast’ of our area under discussion, and getting agents to do this well is a very worthwhile and nontrivial exercise.

At some level, these are not entirely original thoughts. This is the developmental approach to AI, and many researchers work in this area. However, as yet, there are no generic techniques for learning like this in general domains. A lot of very good work is done by replicating early infancy (right down to rattles and brightly colored toys) and trying to get algorithms to do what cognitive psychologists tell us kids are able to do. That is excellent. But I am sceptical of being able to scale up directly from this to the Apollo 13 scenarios, entirely on the shoulders of the behavioral theories. At the same time, it is worth noting that very little of the current thrust of development in either statistical machine learning or logic-based learning is directly focused on this issue. So, I would very much like to see (and, hopefully, develop myself) an algorithmically principled approach to this type of learning. This is my big theme – the one from which the many little questions of my individual papers are derived (and, in broad terms, my chosen application domains emphasize the importance of this kind of learning and thinking).

