The 2008 Sidney Michaelson memorial lecture was delivered yesterday by Prof. Muffy Calder, as part of the Edinburgh Science Festival. I got free entrance to this event thanks to my BCS branch committee role. Prof. Calder did a good job of providing an accessible, yet substantial, account of some recent work that brings together ideas from theoretical computer science and experimental biological modelling. Specifically, she described how model checking can be used to make statements about the behaviour of signalling pathways and the like. As I understood it, one of the important aspects of this game is reachability (and sensitivity) analysis in specific parametric regimes dictated by the experimentalists.
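To make the reachability idea concrete: at its simplest, one encodes the pathway as a finite transition system and asks whether some state of interest is reachable from the initial condition. Here is a toy sketch of my own (the component names and transition rules are entirely made up, not from the lecture or any real model), using explicit-state breadth-first search:

```python
# Toy illustration: reachability analysis over a hand-made discrete
# abstraction of a signalling pathway. States are tuples of Boolean
# flags (ligand bound, kinase active, gene expressed); the transition
# rules below are hypothetical, chosen only to make the example run.
from collections import deque

def successors(state):
    """Hypothetical transition rules for a three-component pathway."""
    ligand, kinase, gene = state
    nxt = set()
    if ligand and not kinase:
        nxt.add((ligand, True, gene))      # ligand activates the kinase
    if kinase and not gene:
        nxt.add((ligand, kinase, True))    # active kinase switches the gene on
    if ligand:
        nxt.add((False, kinase, gene))     # ligand may unbind at any point
    return nxt

def reachable(initial):
    """Explicit-state reachability: breadth-first search of the state graph."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

states = reachable((True, False, False))
# The biological question "can the gene ever be expressed?" becomes a
# reachability query over the abstract state graph:
gene_reachable = any(gene for (_, _, gene) in states)
```

Real model checkers, of course, answer such queries symbolically rather than by brute enumeration, and against richer temporal-logic properties, but the underlying question is this kind of graph search.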
What has always puzzled me about this general business is that a lot of this work is pitched in a rather dichotomous way – thou shalt pick your side: discrete or continuous modelling. I find this puzzling because, as far as I can tell, the limitations of these logic-based and state-space search methods (when applied to continuous systems with complex dynamics) appear to have quite a bit to do with the somewhat arbitrary ways in which the problem is cast in discrete form. I haven’t looked very closely at the recent systems biology literature, but I did actively participate in the hybrid systems community (an interesting mix of theoretical CS model-checking types and continuous-time control and dynamical systems folks) a few years back, and a lot of the techniques proposed in that world seemed to stumble on precisely this issue. My experience more or less convinced me that while it certainly makes sense to exploit the power of discrete symbolic representations, and all the resultant benefits in terms of TCS ideas, it also makes a lot of sense to think carefully about how best to arrive at the symbolic model in the first place.
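A tiny sketch of my own makes the point about arbitrary discretization (the system and the abstraction scheme are mine, not drawn from any particular paper). Take the one-dimensional flow dx/dt = -x and build a coarse sign-based grid abstraction: a cell gets an abstract transition to its left neighbour if the vector field is negative anywhere in the cell, and to its right if it is positive anywhere. This over-approximation is sound, but it admits behaviour the real system cannot exhibit:

```python
# Sign-based grid abstraction of dx/dt = -x on [-1, 1].
# The true trajectory from x0 = 1 decays toward 0 and never crosses it,
# yet the abstract reachability analysis below reports cells on the
# negative side as reachable - an artefact of the discretization, not
# of the dynamics.

h = 0.25                                    # cell width (the coarseness knob)
edges = [-1.0 + h * k for k in range(9)]    # cell boundaries on [-1, 1]
n = len(edges) - 1                          # number of grid cells

def f(x):
    return -x                               # the continuous dynamics

def abstract_successors(i):
    """Neighbouring cells the flow might enter, judged from sampled derivatives."""
    lo, hi = edges[i], edges[i + 1]
    xs = [lo + (hi - lo) * k / 10 for k in range(11)]   # samples inside the cell
    succ = set()
    if any(f(x) < 0 for x in xs) and i > 0:
        succ.add(i - 1)                     # field points left somewhere in the cell
    if any(f(x) > 0 for x in xs) and i < n - 1:
        succ.add(i + 1)                     # field points right somewhere in the cell
    return succ

def abstract_reachable(start_cell):
    """Reachable set of the abstract transition system (depth-first search)."""
    seen, stack = {start_cell}, [start_cell]
    while stack:
        i = stack.pop()
        for j in abstract_successors(i):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

cells = abstract_reachable(n - 1)           # start in the cell containing x0 = 1
spurious = any(edges[i + 1] <= 0 for i in cells)   # any cell touching x <= 0?
```

Refining the grid shrinks the spurious region but never eliminates it under this scheme; a smarter abstraction (e.g. one respecting the invariant x = 0) would. That choice of how to discretize, rather than the model checking itself, is where much of the difficulty lives.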
Moreover, whenever I brought this up in conversations with purist TCS folks, I encountered a sociological barrier – CS folks do things algebraically while engineers do them with continuous maths, and that is all there is to say about it. Based on all the things I have learned in the past year or two, I don’t find this a terribly convincing argument. Computational (algebraic) topology is a perfect counterexample to this line of reasoning – thinking about the subtleties of dynamics doesn’t mean you have to be stuck with manual pen-and-paper analysis.
Even as I say this, I do realize that these computational tools are even more nascent and cumbersome to use than some of the model checking tools – which are at least very well established in the truly discrete setting of VLSI, network protocols and the like. Still, I look forward to seeing a more elegant framework for seamlessly integrating the two worlds – as opposed to the sort of dichotomy that we seem to have to endure today. And, on a practical level, I see this as a reason to take a more careful look at the whole issue of modelling paradigms.
Incidentally, there was an interesting thread of research that began (and mysteriously died, or at least fell dormant) in MIT’s Project MAC – I am thinking particularly of work by Ken Yip, Liz Bradley, Feng Zhao, and collaborators – which seemed to be headed in a very good direction. Perhaps some of those ideas are worth reviving in more modern contexts!