What might give rise to the cumbersome faculty we call intelligence?

Last week, I gave a talk as part of the Neuroinformatics research course, which is aimed at incoming graduate students who are figuring out what to do. As this was a 2-hour talk that allowed me to slowly introduce essential ideas, I decided to spend some time on conceptual issues before jumping into technical stuff. One such thread that finally came together (after refinements based on two earlier talks) was the following.

Why would an animal or robot need intelligence? The standard answer is that intelligence goes with autonomous behavior. However, even some fairly dumb systems can be “autonomous” – my colleague Michael uses the example of a minimalist robot that ‘survived’ in the wild for a year or two because it was nothing more than a motor attached to a solar panel! Well, that’s pathological! So, I present the example of the fly (for concreteness, consider the fruit fly – Drosophila melanogaster, one of the most studied organisms in the world). It has a number of neural circuits, hard-wired by evolution, that allow it to behave in interesting ways. For instance, its visual detection of predators is based on a simple calculation of expansion (looming) in its optic field. This signal travels through a neural pathway to the halteres (vestigial wing-like organs), which are very good rotation sensors. These feed directly into phase shifts of the wings, which otherwise beat as simple oscillators. So, as soon as a predator is sighted, a saccade is initiated, and the halteres are then used to correct direction and complete the turn. The combination of all this is a very agile predator-avoidance mechanism. And, in a sense, there is not much “intelligence” here – if by that term we mean something beyond standard closed-loop control.
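To make the “simple calculation of expansion” concrete, here is a minimal sketch of a looming detector. This is my own toy illustration, not a model of the actual fly circuitry; the threshold, time step and approach trajectory are all invented for the example.

```python
import numpy as np

def looming_signal(angular_size, dt):
    """Relative rate of expansion of an object's image on the eye.

    `angular_size` is a 1-D array of the object's angular size (radians)
    over time; the returned signal, (d(theta)/dt) / theta, grows rapidly
    as an object approaches on a collision course.
    """
    dtheta = np.gradient(angular_size, dt)
    return dtheta / angular_size

def escape_triggered(angular_size, dt, threshold=2.0):
    """Trigger an escape saccade once the looming signal crosses a threshold."""
    return np.any(looming_signal(angular_size, dt) > threshold)

# A predator closing in at constant speed: angular size ~ size / distance.
dt = 0.005
t = np.arange(0.0, 0.95, dt)
distance = 1.0 - 1.0 * t           # metres; collision would occur at t = 1 s
angular_size = 0.02 / distance     # small-angle approximation
print(escape_triggered(angular_size, dt))   # True: the loom crosses threshold
```

The point of the sketch is simply that a fixed threshold on one scalar signal is enough to drive the whole avoidance behavior – no planning or learning involved.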

I claim that the reason an animal would bother to adopt/evolve real “intelligence” is:

(a) to be able to go beyond hard-wired or slowly evolved morphological adaptations

(b) to achieve a combination of robustness, flexibility and efficiency. Robustness refers to the ability to function in a variety of environments. What does it mean to “function”? This is where flexibility comes in. I use that term to mean the ability to carry out spatio-temporally extended behaviors (or, if you like, some sort of “complex dynamics” that can have many nontrivial variations). And finally, energy is a limited resource, so animals and robots need to conserve it.

Clearly, many behaviors admit simple explanations, as in the case of the fly. However, equally clearly, I don’t have special circuitry that enables me to play football or the cello. I learn these, largely artificial, behaviors. And I learn to perform them safely and reliably under a variety of environmental conditions. And I eventually learn numerous variations on these behaviors, some of which are highly efficient with respect to energy, attention or other aesthetic criteria.

The ability to do this is why an animal, or a robot, needs intelligence. As a related issue (although one I will not dwell on today), this suggests a continuum from dumb feedback loops to smart problem solvers – so that, I claim, there need not be any essential magic in the ability to deal with more abstract problems. On a second related note, a student recently pointed me to an interesting hypothesis – that the cerebellum first evolved as a state estimator for computing predator/prey locations and figuring out how to use the body dynamics.
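To unpack what “state estimator” means in that hypothesis, here is a minimal sketch – a constant-velocity Kalman filter tracking a target from noisy position sightings. The 1-D setup, dynamics and noise levels are assumptions chosen purely for illustration, not a claim about how the cerebellum actually does it.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a target's 1-D position
# from noisy observations.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])              # we only observe position
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.05]])                  # observation noise covariance

x = np.zeros((2, 1))                    # state estimate
P = np.eye(2)                           # estimate covariance

def kalman_step(x, P, z):
    # Predict where the target will be one step ahead...
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    # ...then correct the prediction with the new sighting z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * k * dt                          # target moving at 0.5 m/s
    z = np.array([[true_pos + rng.normal(0, 0.2)]])  # noisy sighting
    x, P = kalman_step(x, P, z)
print(x.ravel())   # estimated [position, velocity], close to the true [~5.0, 0.5]
```

The predict step is the part that matters for the evolutionary story: it is the machinery that lets an animal act on where the prey (or its own body) will be, not just where it was last seen.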

The next, somewhat contentious, issue is that of robustness. What is this elusive quantity? As someone originally trained as a control theorist, I am quite used to answering this question – although the technical machinery behind the answer seems to require radical improvements before it applies to situations common in robotics or AI. In a nutshell, a behavior is robust if it persists in the face of a family or set of environmental conditions. In the simplest setting: is a control loop stable for a whole family of plants? In a more complex setting: given a probability distribution over environments, can you guarantee something in the face of all of those possibilities? An interesting formulation of this question appears in a paper by Carlson and Doyle, where they use this idea to model a spatial process (the spread of forest fires) – via a variational problem that yields a configuration of trees and empty spaces maximizing the expected conserved area over a distribution of ignition events. They use this to argue that most complex systems are only able to achieve a “fragile sort of robustness” (they call this Highly Optimized Tolerance). This is an instance of a more generic strategy – deriving control strategies as the equilibria of dynamic games against nature.
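To illustrate the “simplest setting” above, here is a toy sketch of checking whether one fixed feedback gain stabilizes a whole family of plants. The second-order plant, the gain and the uncertainty ranges are all invented for the example; a real robust-control analysis would of course not rely on gridding.

```python
import numpy as np

def closed_loop_stable(a, b, k):
    """Plant dx/dt = A x + B u with state feedback u = -K x.

    Here A = [[0, 1], [a, b]] is an uncertain second-order plant; the closed
    loop is stable iff every eigenvalue of A - B K has negative real part.
    """
    A = np.array([[0.0, 1.0], [a, b]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[k[0], k[1]]])
    eig = np.linalg.eigvals(A - B @ K)
    return np.all(eig.real < 0)

K = (20.0, 6.0)   # one candidate fixed gain
# The "family of plants": a and b range over the assumed parameter uncertainty.
family = [(a, b) for a in np.linspace(2.0, 10.0, 20)
                 for b in np.linspace(-1.0, 1.0, 20)]
robust = all(closed_loop_stable(a, b, K) for a, b in family)
print(robust)   # True if the single gain K stabilizes every plant in the family
```

The same template – one strategy evaluated against a whole set (or distribution) of environments, possibly chosen adversarially – is what the game-against-nature view generalizes.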

The above arguments still need some fleshing out, but they represent a skeleton for an approach to planning and control strategy design that goes beyond many standard machine-learning-based methods by asking some under-appreciated questions about strategy design – questions that receive very little coverage in the function-approximation-dominated ML setting, but are nonetheless crucial for achieving the stated goals.
