It has been a little while since my previous post. In the interim, I have been busy with a variety of things, ranging from a conference in Alaska to moving into a new apartment, with proposal writing thrown into the mix. All of this has got me thinking about some old but important questions regarding the famous theory-practice divide, though perhaps with a new twist.
So, the world has been an interesting place over the past few months. Volcanoes have seriously disrupted a substantial fraction of air-travelling mankind. In turn, man has gotten back at nature by unleashing one of the most catastrophic oil spills of all time. In both cases, I have been astounded by the utter lack of precise information on which decision making is based. The aviation authorities in Europe were literally pulling numbers out of hats to set ‘safe ash levels’. Until recently, and perhaps even now, we have had a remarkably poor idea of how much oil has been spilled in the Gulf of Mexico, precisely where it has gone, and what the effects are.
The astonishing thing is that if one asks a researcher, deeply embedded in the ivory towers, about the technical problem of collecting such information, one hears that these are ‘solved problems’. All one has to do in order to understand the ash cloud patterns is to send a few planes with good sensors to get an idea of the spatiotemporal patterns. If one had a certain amount of data, ‘it can’t be that hard’ to begin to model the actual conditions. If one gets that far, ‘it can’t be too much of a stretch’ to make a guess about safe levels of ash, safe flying paths, and so on. Indeed, RyanAir finally announced, after a seemingly interminable stretch of uncertainty, that they were going to equip all their airplanes with lidar-like sensors. Why the delay? Why didn’t the government agencies deploy something like this the day after the eruption, especially considering how many billions were at stake?! Same story with the Gulf. Why are we still guessing about the numbers involved? Surely, by now, it makes sense to deploy a swarm of mobile sensors and connect them up with modelling software in order to get a firmer idea?
Unlike on many previous occasions, my questions above are not just rhetorical. I am genuinely curious about what (apart from bureaucratic intransigence) prevents such technologies from being deployed. Could it be that only a very tiny fraction of the people with the relevant knowhow have ever bothered with these application scenarios in practice?
As I ask these questions, I also note that they rarely appear at the forefront of many academics’ lists of big problems. As robotics researchers, do we believe that we have the theoretical and empirical support necessary to deploy our systems in such live applications – will they actually function robustly enough for policy makers to take their outputs seriously?
I for one think that people like me, especially in our professional roles as researchers, should take these challenges seriously – not just because they give rise to fantastic and fundamental scientific challenges, but also because they are great opportunities to situate our work in context, explaining why we matter to society at large. I hope others who are smarter/more influential/better connected than I am feel the same way!