Sensors on surfers

This video from Red Bull Sports describes a very interesting array of technology, sensing modalities to be precise, applied to the problem of better understanding the athletes’ physiology and performance:

http://www.redbull.com/us/en/surfing/stories/1331686821979/this-is-what-happens-when-scientist-go-surfing

Jake Marshall – Action – Surf Science (© Seth de Roulet/Red Bull Content Pool)

Many of these sensors are becoming increasingly common. Certainly, in robotics labs like mine, we routinely use motion-tracking technology of various kinds, connecting the tracked traces to motion analysis and so on. The way the Red Bull team have used pressure-sensing footwear, and the way they have set up the UAV to do personal tracking, are handled very nicely.

Beyond that, although we are also beginning to play with eye tracking and EEG, I have not yet seen anyone apply them in such a physically demanding environment. I’d love to know more about how the technology was actually deployed – we are keen to do similar things in our own applications. Of course, the video says little about what was actually obtained from the analysis; the EEG sensor, for instance, appears to give only a very coarse frequency-domain measurement of alpha waves. Are they actually able to get meaningful interpretations from this sensor? For that matter, are they able to get genuinely useful insights from the SMI eye tracker?
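
Just to make that ‘coarse frequency-domain measurement’ concrete, here is a minimal sketch – certainly not the Red Bull team’s actual pipeline – of how one might estimate alpha-band power from a single raw EEG channel. The 256 Hz sampling rate and the synthetic test signal are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs=256.0, band=(8.0, 12.0)):
    """Average power spectral density in the alpha band of one EEG channel.

    eeg : 1-D array of raw samples
    fs  : sampling rate in Hz (assumed here; depends on the actual headset)
    """
    # Welch's method gives a smoothed power spectral density estimate.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    # Keep only the bins falling inside the alpha band and average them.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Sanity check: a noisy 10 Hz oscillation should show high alpha power.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(alpha_band_power(fake_eeg, fs))
```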

Cool stuff!


Smart watches are still pretty dumb

I have been looking into sensing technology (especially the wearable kind) with increasing interest. This is in part because of some current work I am involved with (e.g., our papers at IPSN ’13 and ACM TECS ’13, which were really initial forays), but more broadly because many of us are becoming convinced that persistent interaction between (wo)man and computational machines is a defining theme of the next decade or two of technology, and sensors are the mediating entities.

I have also felt for a while now that most sensors, even – especially? – when they are called ‘smart’, seem quite dumb. For instance, there is a wealth of papers that talk about smart this and that when what they really mean is just what an AI or robotics person would call reactive. In this context, I found this article in the MIT Technology Review interesting. As the author says,

After trying some smart watches, I’ve determined that a good one will need to be more than just reliable and simple to use—it will have to learn when and how to bother me. This means figuring out what I’m doing, and judging what bits of information among countless e-mails, app updates, and other alerts are most pressing. And, naturally, it must look good.

I think this is more than just a product-design issue. It is fairly challenging to achieve properly even from a research point of view, requiring models of human choice behaviour to be learned, and so on. There is also the issue of designing good sensors that can actually be deployed, e.g., see this article. Lots to do, but it’s the kind of thing that’ll be fun to do!
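
As a toy illustration of what “learning when and how to bother me” might involve, here is a minimal sketch of a notification gate built on a logistic-regression model of user engagement. The context features, the handful of training examples, and the helper names are all made up for illustration; a real system would need far richer models of choice behaviour.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical context features:
# [in_meeting, walking, hour_of_day / 24, sender_is_frequent_contact]
# Label: 1 if the user engaged with the alert promptly, 0 if they ignored it.
X = np.array([
    [1, 0, 10 / 24, 1],
    [1, 0, 14 / 24, 0],
    [0, 1, 18 / 24, 0],
    [0, 0, 20 / 24, 1],
    [0, 0,  9 / 24, 1],
    [1, 1, 11 / 24, 0],
])
y = np.array([0, 0, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

def should_interrupt(context, threshold=0.5):
    """Deliver the alert now only if predicted engagement is high enough."""
    return model.predict_proba(np.array([context]))[0, 1] > threshold

# An e-mail arrives while the user is idle at home in the evening,
# from a frequent contact.
print(should_interrupt([0, 0, 21 / 24, 1]))
```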

NYT article on Kinect

As someone who only uses the Kinect for research, in robotics and human-robot interaction, and doesn’t own a Kinect/Xbox at home, I found this article to be very interesting reading:

http://www.nytimes.com/2012/06/03/magazine/how-kinect-spawned-a-commercial-ecosystem.html?smid=pl-share

I personally believe this technology – not just the hardware product but the possibilities created by this paradigm, including machine learning and vision advances – has the potential to do for Microsoft what the iPod did for Apple. The question, of course, is whether the company is going to be able to realize this potential!

Geek meets chic

I liked this video showing how Radiohead’s House of Cards music video was made:

Many of us roboticists have seen these Velodyne sensors and their earlier cousins, the SICK laser scanners, which can generate dense range maps (my colleagues at UT-Austin had one of these as part of their entry in the DARPA Urban Challenge), and some of us tend to take such data for granted. So it is worth being reminded that such data, used deftly and with an artistic eye, can open whole new doors!
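
For readers who haven’t handled this kind of data, here is a minimal sketch of the geometry behind those dense range maps: converting range-and-angle measurements from a spinning scanner into a 3-D point cloud. The scan pattern is simplified and not specific to the Velodyne or SICK units.

```python
import numpy as np

def ranges_to_points(ranges, azimuths, elevations):
    """Convert spherical scanner measurements to Cartesian 3-D points.

    ranges     : (N,) distances in metres
    azimuths   : (N,) horizontal angles in radians
    elevations : (N,) vertical angles in radians
    """
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)

# Example: one simulated horizontal sweep of a flat wall 5 m ahead.
az = np.linspace(-np.pi / 4, np.pi / 4, 200)
el = np.zeros_like(az)
r = 5.0 / np.cos(az)               # range to a wall perpendicular to the x-axis
cloud = ranges_to_points(r, az, el)
print(cloud.shape)                 # (200, 3)
```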

PS: Thanks to my colleague, Mark Wright, for forwarding this.

Do really complex sensors lead to qualitatively different insights?

I just returned from an excellent Christmas dinner for international faculty, hosted by the Principal (i.e., Chancellor) of my University. The food and venue (three doors down from where R.L. Stevenson grew up and came up with the idea of Treasure Island) were great, but what I enjoyed most was meeting people from different academic backgrounds. Among others, I had an interesting conversation with a physician who specializes in skin pathology. He mentioned that human (or, for that matter, animal) skin is not just a very high-dimensional sensor but also a very dynamic one – reacting and regenerating at a variety of time scales ranging from microseconds to days. Moreover, it is equipped to sense everything from temperature to air/fluid flow and texture. This got me thinking about what would happen if a robot had access to this level of sensing – would the result be a qualitatively significant leap in skill?

To take one particular aspect of what might be different: the standard procedure for making use of a mix of high-dimensional sensors is to first process the data and generate a set of low-dimensional variables that are then fused using various probabilistic methods. For instance, if one has a camera and a microphone, one might distill the information (using low-level routines ranging from simple things like edges and blobs to more complex object-finding methods) down to “visual location”, “auditory location”, etc., followed by some form of (Bayesian?) sensor fusion that is tractable in reasonably low dimensions. Now, if these modalities were not separate modules but instead intermixed in one and the same sensor – with tight, field-like correlations across a large number of units – should the problem still be solved in the same way? To my knowledge, standard sensor fusion methods would have difficulty performing in anything close to real time in this setting. Nonetheless, what would we get with this sort of truly high-dimensional, multi-modal sensing?
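
As a concrete, deliberately low-dimensional illustration of that standard pipeline, here is a sketch of fusing a “visual location” and an “auditory location” estimate, each summarised as a 1-D Gaussian, using the usual precision-weighted Bayesian update; the numbers are invented.

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent Gaussian estimates of one quantity.

    The fused mean is the precision-weighted average of the two means;
    the fused variance is the inverse of the summed precisions.
    """
    precision = 1.0 / var1 + 1.0 / var2
    mu = (mu1 / var1 + mu2 / var2) / precision
    return mu, 1.0 / precision

# Visual estimate: 1.2 m with low uncertainty; auditory estimate: 1.8 m, noisier.
mu, var = fuse_gaussians(1.2, 0.05, 1.8, 0.4)
print(mu, var)   # the fused estimate sits closer to the more certain (visual) cue
```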

I have heard a lot about the “general” answer to this question, e.g., Anderson’s famous Science paper from the 70s, “More Is Different”. I believe that. However, I am curious about what specifically is gained – is there any cognitively useful thing that can be gleaned from this form of sensing that can’t be obtained with less?