One of the crucial factors underpinning wide-scale adoption of any computational procedure is the availability of easy-to-use packages containing that procedure, usable by people who have only a passing familiarity with its innards.
Topological data analysis is one such toolbox, the inner workings of which have often seemed impenetrable to anyone but the most mathematically well-trained scientists (a very small population, indeed). Yet the tool has many appealing characteristics, given how centrally relevant its core questions are to hierarchical representations, multi-scale modelling and so on.
In this context, it is nice to see this web-based interface for persistent homology calculations, a key tool in the TDA toolbox.
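For the curious, the core of a persistent homology calculation is surprisingly compact. Below is a minimal sketch of the standard left-to-right column-reduction algorithm on a boundary matrix over Z/2; the filtered triangle used as an example, and all the simplex names, are my own illustrative choices, not tied to any particular package.

```python
def low(col):
    """Row index of the lowest nonzero entry of a column (a set of row indices)."""
    return max(col) if col else None

def reduce_boundary(columns):
    """Reduce a Z/2 boundary matrix column by column, left to right.

    columns[j] is the set of row indices where column j has a 1.
    Returns (birth, death) pairs of simplex indices; unpaired birth
    simplices correspond to essential (infinite) homology classes.
    """
    reduced = [set(c) for c in columns]
    lows = {}    # low row index -> column achieving it
    pairs = []
    for j in range(len(reduced)):
        # keep adding earlier columns (Z/2 addition = symmetric difference)
        # while another column shares our lowest nonzero row
        while reduced[j] and low(reduced[j]) in lows:
            reduced[j] ^= reduced[lows[low(reduced[j])]]
        if reduced[j]:
            i = low(reduced[j])
            lows[i] = j
            pairs.append((i, j))   # simplex i's class is killed by simplex j
    return pairs

# A triangle filtered as: vertices a, b, c (indices 0-2), then edges
# ab, bc, ca (3-5), then the 2-cell abc (6).
columns = [set(), set(), set(),        # vertices have empty boundary
           {0, 1}, {1, 2}, {0, 2},     # edges
           {3, 4, 5}]                  # triangle
print(reduce_boundary(columns))  # [(1, 3), (2, 4), (5, 6)]
```

Here vertex 0 remains unpaired, giving the one essential connected component; the edge `ca` (index 5) creates a loop that the triangle (index 6) immediately fills in.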
I am off to a workshop on this topic, in (hopefully, a bit sunny) Madrid: http://webpages.ull.es/users/gmcnet/GTMCR2010/Home.html.
Over the past few years, there has been a slow build-up of interest in this topic. I am especially excited to see algorithmic versions of classical mathematical ideas slowly take shape – to the point where non-mathematicians can think about picking them up. I believe that some of these tools will play a key role in problems such as representation discovery in machine learning.
I will be giving a short communications talk on my work with Tom Larkworthy on motion planning for self-reconfigurable robots, visualized here: http://sites.google.com/site/tomlarkworthy/.
Taku Komura, Sethu Vijayakumar and I have secured an EPSRC grant on this topic. The project abstract is available here: http://gow.epsrc.ac.uk/ViewGrant.aspx?GrantRef=EP/H012338/1.
Through an internal mailing list, I came across this really interesting set of movies. Apparently, most of them are aimed at high school or early college students but I am sure more experienced folks would benefit from them as well.
Following a link from the Low Dimensional Topology blog, I found this really interesting tidbit of information about a highway interchange in the US that has nontrivial structure. This particular interchange combines the American cloverleaf pattern, resulting from the standard right turn along a loop, with the ‘British pattern’ of left turns. The two sets of ramps are woven together in an interesting braided structure!
I wonder whether this is just an accident of incremental design or whether architects actually sit down and think up such elegant structures. In any case, it sounds like this structure is going to be eliminated by an incremental change to improve the efficiency of traffic flows. That’s a shame!
Dr. Benjamin Mann, a program manager at DARPA, outlines 23 mathematical challenges for this century. Many of them are fairly well-known problems that would appear in any such list. One challenge question that may not have been so easy to guess (although I fully support its selection) is “What new scalable mathematics is needed to replace the traditional PDE approach to differential games?”. There is also a passing mention of the importance of repeated games in Steve Smale’s list, under the heading of limits of intelligence – which I think is appropriate, because whatever innovations come out of solving the generalized differential games challenge will also have a lot to do with the problem of creating an intelligent autonomous system.
Although they may sound quaint and obscure, differential games have a lot to do with many problems of current interest. One of the holy grails of robotics is an autonomous system that can reliably perform complex dynamical tasks in a hostile environment – a game against nature involving dynamics. At a conceptual level, we do understand many things about how to achieve this goal – the theory and practice of reinforcement learning (which, at its core, utilizes a stochastic and data-driven variant of the classical Hamilton-Jacobi-Bellman PDE) being a good example. However, the limitations of these general approaches have a lot to do with the limitations that Dr. Mann refers to – the computational complexity is unbearable in realistic settings. The best methods for the existing PDE-based approaches to differential games involve innovative numerical techniques – multi-resolution grids, fast marching methods, etc. Still, these are all rather prohibitive for real-time, high-dimensional applications.
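To make the connection concrete, here is a tiny minimax value iteration on a 1-D grid: the agent picks a move, an adversarial “nature” picks a disturbance, and we solve for the worst-case cost-to-go. This is a discrete cousin of the PDE formulations mentioned above; the grid size, action sets, and unit step cost are all illustrative choices of mine, not drawn from any particular paper.

```python
N = 4                         # states 0..N, with the goal at state N
ACTIONS = [-2, -1, 0, 1, 2]   # agent's moves (it can out-run nature)
DISTURBANCES = [-1, 0, 1]     # nature's adversarial moves

def clamp(s):
    """Keep the state on the grid."""
    return max(0, min(N, s))

def minimax_value_iteration(iters=50):
    """Worst-case cost-to-go: V(s) = min_a max_d [1 + V(clamp(s+a+d))]."""
    V = [float("inf")] * (N + 1)
    V[N] = 0.0                # reaching the goal costs nothing further
    for _ in range(iters):
        for s in range(N):    # goal state stays fixed at 0
            V[s] = min(
                max(1.0 + V[clamp(s + a + d)] for d in DISTURBANCES)
                for a in ACTIONS
            )
    return V

print(minimax_value_iteration())  # [4.0, 3.0, 2.0, 1.0, 0.0]
```

Even in this toy, the curse of dimensionality is visible: the sweep touches every state-action-disturbance triple, which is exactly what becomes unbearable when the grid is a fine discretization of a high-dimensional continuous state space.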
A direction that I find promising (as I have mentioned before) is that of using topological ideas to focus the algorithms, e.g., as in the SToMP project. There are some useful nuggets in recent publications by researchers who are part of this project. I am sufficiently intrigued that I am going to work on some aspects of this general problem myself. I hope to report back with more concrete and interesting results one of these days…
The 2008 Sidney Michaelson memorial lecture was delivered yesterday by Prof. Muffy Calder, as part of the Edinburgh Science Festival. I got free entrance to this event due to my BCS branch committee role. Prof. Calder did a good job of providing an accessible, yet substantial, account of some recent work that brings together ideas from theoretical computer science and experimental biological modelling. She described how model checking can be used to make statements about the behaviour of signalling pathways, and so on. As I understood it, one of the important aspects of this game is reachability (and sensitivity) analysis in specific parametric regimes dictated by the experimentalists.
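The reachability question at the heart of this kind of model checking is easy to illustrate: abstract the pathway into a finite transition system and ask whether a given state can ever be reached from the initial one. The three-species on/off abstraction below is entirely invented for illustration; real tools work symbolically over vastly larger state spaces, but the underlying question is this one.

```python
from collections import deque

def reachable(transitions, start, target):
    """Breadth-first search over an explicitly enumerated state graph."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# An invented on/off abstraction of a signalling cascade: each state
# records which species are currently active.
transitions = {
    ("ligand",): [("ligand", "receptor")],
    ("ligand", "receptor"): [("ligand", "receptor", "kinase"), ()],
    ("ligand", "receptor", "kinase"): [()],   # signal decays
    (): [],                                    # everything inactive
}

print(reachable(transitions, ("ligand",),
                ("ligand", "receptor", "kinase")))  # True
```

From the all-inactive state nothing is reachable, so `reachable(transitions, (), ("ligand",))` returns `False` – the kind of negative guarantee that makes model checking attractive to experimentalists.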
The thing that has always puzzled me about this general business is that a lot of this work is pitched in a rather dichotomous way – thou shalt pick your side: discrete or continuous modelling. I find this puzzling because, as far as I can tell, the limitations of these logic-based and state-space search methods (when applied to continuous systems with complex dynamics) appear to have quite a bit to do with the somewhat arbitrary ways in which the problem is cast in discrete form. I haven’t looked very closely at the recent systems biology literature, but a few years back I did actively participate in the hybrid systems community (an interesting mix of theoretical CS model-checking types and continuous-time control and dynamical systems folks), and a lot of the proposed techniques in that world seemed to stumble on precisely this issue. My experience more or less convinced me that while it certainly makes sense to utilize the power of discrete symbolic representations, and all the resultant benefits in terms of TCS ideas, it also makes a lot of sense to think carefully about how best to arrive at the symbolic model.
Moreover, whenever I brought this up in conversations with purist TCS folks, I encountered a sociological barrier – CS folks do things algebraically while engineers do them with continuous math, and that is all there is to say about that. Based on all the things I have learned in the past year or two, I don’t find this a terribly convincing argument. Computational (algebraic) topology is a perfect counterexample to this line of reasoning – thinking about the subtleties of dynamics doesn’t mean you have to be confined to pen-and-paper analysis.
Even as I say this, I do realize that these computational tools are even more nascent and cumbersome to use than some of the model checking tools – which are at least very well established in the truly discrete settings of VLSI, network protocols and the like. Still, I look forward to seeing a more elegant framework for seamlessly integrating the two worlds – as opposed to the sort of dichotomy that we seem to have to endure today. And, on a practical level, I see this as a reason to take a more careful look at the whole issue of modelling paradigms.
Incidentally, there was an interesting thread of research that began (and mysteriously died, or at least fell dormant) in MIT’s Project MAC – I am thinking particularly of work by Ken Yip, Liz Bradley, Feng Zhao, and collaborators – which seemed to be headed in a very good direction. Perhaps some of those ideas are worth reviving in more modern contexts!