I have had a number of conversations in the past few months where this question has come up. Typically, the question is whether biological systems – ranging from cellular biochemistry to cognitive processes – are actually modular or simply assumed to be so due to the background and inclination of the person studying them. So, I have spoken to a number of people coming from the classical physics tradition who are skeptical of any inherent modularity, while computer science and engineering folks are much more comfortable with the notion.
Over lunch today, I had a discussion on this topic with my colleague, Michael, who was of the view that maybe “planning” as understood by CS types is not necessary in biology (e.g., the standard example: do insects use a robot-like hierarchical navigation strategy when performing long-range reconnaissance?). As it happened, I returned to my office after lunch and went through a couple of papers by Leslie Valiant that I had been meaning to read. One of them, titled Evolvability, makes a very interesting point about the need for modularity.
Valiant proposes a definition of evolvability as a modified version of learnability – one that takes into account the need to learn efficiently with a finite representation, and in which learning is driven by the ensemble average of performance over a trial population. It then turns out that evolvability is a lot like learnability from statistical queries (a problem studied earlier in computational learning theory) – but there are many learnable concepts that are not evolvable under this definition. For instance, evolving the concept of odd parity is hard (which seems to match intuition about the biological unnaturalness of that concept).
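To get a feel for why parity resists this kind of learning, here is a small sketch of my own (not from the paper): over uniform inputs, any hypothesis that computes parity over a proper subset of the relevant bits agrees with the full parity exactly half the time. An evolutionary process that only sees aggregate performance therefore faces a completely flat fitness landscape – no partial progress toward the target is ever rewarded.

```python
import itertools

n = 6
target = set(range(n))  # target concept: parity of all n bits
inputs = list(itertools.product([0, 1], repeat=n))

def parity(x, subset):
    return sum(x[i] for i in subset) % 2

def agreement(hyp_subset):
    # fraction of inputs on which parity over hyp_subset matches the target;
    # this aggregate score is the only signal an evolution-style learner sees
    return sum(parity(x, hyp_subset) == parity(x, target)
               for x in inputs) / len(inputs)

# every proper sub-parity scores exactly 0.5 – the landscape is flat,
# and only the full subset jumps to 1.0
for k in range(n + 1):
    print(k, agreement(set(range(k))))
```

Running this prints 0.5 for every k from 0 through 5 and 1.0 only at k = 6, which is the flatness that makes incremental, performance-driven search hopeless here.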
And then the point follows: if there are limits on the function classes that can be efficiently generated by this process, subject to the evolvability requirement, then complex functions would not naturally appear in their direct form but rather via smaller, identifiable modules that implement crucial pieces of the overall function.
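As a toy illustration of that decomposition (again my own, not Valiant's): even though a wide parity is hard to reach directly, it is trivially the composition of small two-input XOR modules, each of which is a simple, separately identifiable piece of the overall function.

```python
from functools import reduce

def xor_module(a, b):
    # a tiny reusable "module": two-input XOR
    return a ^ b

def parity_via_modules(bits):
    # the wide parity is just the XOR module folded over the inputs:
    # ((b0 ^ b1) ^ b2) ^ ...
    return reduce(xor_module, bits)

print(parity_via_modules([1, 0, 1, 1, 0, 0]))  # 1 (three ones: odd)
```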
You should read Valiant’s paper if you find this idea intriguing – he makes things much more precise and explains them well.
This argument is a more formal version of a point I have often tried to make: even if generic learning mechanisms like RL (and other optimal control variants derived from the HJB equation) are – in principle – capable of describing various types of complex behavior, that may be far from sufficient for achieving robust global behavior from limited training/learning – which will require modularity and some specialization in representations.