Do really complex sensors lead to qualitatively different insights?

I just returned from an excellent Christmas dinner for international faculty, hosted by the Principal (i.e., Chancellor) of my University. The food and venue (three doors down from where R.L. Stevenson grew up and came up with the idea of Treasure Island) were great, but what I enjoyed most was meeting people from different academic backgrounds. Among others, I had an interesting conversation with a physician who specializes in skin pathology. He mentioned that human (or, for that matter, animal) skin is not just a very high-dimensional sensor but also a very dynamic one, reacting and regenerating at a variety of time scales ranging from microseconds to days. Moreover, it is equipped to sense everything from temperature to air/fluid flow and texture. This got me thinking about what would happen if a robot had access to this level of sensing: would the result be a qualitatively significant leap in skill?

To take one particular aspect of what might be different: the standard procedure for making use of a mix of high-dimensional sensors is to first process the data and generate a set of low-dimensional variables that are then fused using various probabilistic methods. For instance, if one has a camera and a microphone, one might distill the information (using low-level routines ranging from simple things like edges and blobs to more complex object-finding methods) down to "visual location", "auditory location", etc., followed by some form of (Bayesian?) sensor fusion that is tractable in reasonably low dimensions. Now, if these modalities were not separate modules but instead intermixed in one and the same sensor, with tight field-like correlations across a large number of units, should the problem still be solved in the same way? To my knowledge, standard sensor fusion methods might have difficulty performing in anything close to real time under this setting. Nonetheless, what would we get with this sort of truly high-dimensional multi-modal sensing?
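To make the contrast concrete, here is a minimal sketch of the low-dimensional fusion step described above: combining two independent 1-D Gaussian estimates (say, a "visual location" and an "auditory location") by precision weighting, the textbook Bayesian recipe. The function name and numbers are mine, purely for illustration; the point is that this works precisely because each modality has already been distilled to a single scalar with a variance.

```python
def fuse_gaussians(mu_a, var_a, mu_b, var_b):
    """Precision-weighted (Bayesian) fusion of two independent
    Gaussian estimates of the same underlying quantity."""
    precision = 1.0 / var_a + 1.0 / var_b
    fused_var = 1.0 / precision
    fused_mu = fused_var * (mu_a / var_a + mu_b / var_b)
    return fused_mu, fused_var

# e.g., visual estimate (mean 2.0, variance 0.5) and
# auditory estimate (mean 3.0, variance 1.0)
mu, var = fuse_gaussians(2.0, 0.5, 3.0, 1.0)
# The fused mean lies between the two, pulled toward the more
# precise (visual) estimate, and the fused variance shrinks
# below either input variance.
```

The trouble hinted at above is that this recipe presumes a handful of independent scalar summaries. With thousands of tightly correlated units in a single skin-like sensor, the covariance structure is neither low-dimensional nor factorizable, and this tidy closed form no longer applies.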

I have heard a lot about the "general" answer to this question, e.g., the famous Science paper by Anderson from the 70s entitled "More Is Different". I believe that. However, I am curious about what specifically is gained: is there any cognitively useful thing that can be gleaned from this form of sensing that can't be obtained with less?

