I have been looking into sensing technology (especially the wearable kind) with increasing interest. This is partly because of some current work I am involved with (e.g., our papers at IPSN ’13 and ACM TECS ’13, which were really initial forays), but more broadly because many of us are becoming convinced that persistent interaction between (wo)man and computational machines is a defining theme of the next decade or two of technology, and sensors are the mediating entities.
I have also felt for a while now that most sensors, even – especially? – when they are called ‘smart’, seem quite dumb, actually. For instance, there is a wealth of papers talking about smart this and that when what they really mean is just what an AI or robotics person would call reactive. In this context, I found this article in the MIT Technology Review interesting. As the author says,
After trying some smart watches, I’ve determined that a good one will need to be more than just reliable and simple to use—it will have to learn when and how to bother me. This means figuring out what I’m doing, and judging what bits of information among countless e-mails, app updates, and other alerts are most pressing. And, naturally, it must look good.
I think this is more than just a product-design issue. It is fairly challenging to achieve properly even from a research point of view, requiring learned models of human choice behaviour, among other things. There is also the issue of designing good sensors that can actually be deployed; see, e.g., this article. Lots to do, but it’s the kind of thing that’ll be fun to do!
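To make the "learning when to bother me" idea a little more concrete, here is a minimal sketch of what such a model might look like: an online logistic-regression learner that estimates, from implicit feedback, whether interrupting the user with an alert is worthwhile. The feature names, the binary engagement signal, and the toy user behaviour are all my own illustrative assumptions, not any particular product's design.

```python
import math
import random

class InterruptionModel:
    """Hypothetical sketch: online logistic regression that learns
    whether an alert is worth interrupting the user for. The features
    and the training signal (did the user engage?) are assumptions
    made purely for illustration."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        """Estimated probability that interrupting now is worthwhile."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, engaged):
        """One SGD step on log loss from implicit feedback
        (engaged = 1 if the user acted on the alert, else 0)."""
        err = engaged - self.predict(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err


# Toy simulation of one user's (assumed) preference: they engage only
# with alerts from priority senders, and never while in a meeting.
# Features: [work_hours, priority_sender, in_meeting], all 0/1.
random.seed(0)
model = InterruptionModel(n_features=3)
for _ in range(2000):
    x = [random.randint(0, 1) for _ in range(3)]
    engaged = 1 if (x[1] == 1 and x[2] == 0) else 0
    model.update(x, engaged)

# After training, the model should interrupt for a priority sender
# when the user is free, and stay quiet during a meeting.
print(model.predict([1, 1, 0]))  # priority sender, not in a meeting
print(model.predict([1, 0, 1]))  # non-priority sender, in a meeting
```

Even this toy version hints at the real difficulties: the feedback is implicit and noisy, the relevant context (am I in a meeting?) must itself be inferred from sensors, and the cost of a wrong interruption is asymmetric; handling those properly is where the research lies.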