I am off to a workshop on this topic, in (hopefully, a bit sunny) Madrid: http://webpages.ull.es/users/gmcnet/GTMCR2010/Home.html.
Over the past few years, there has been a slow build-up of interest in this topic. I am especially excited to see algorithmic versions of classical mathematical ideas slowly take shape – to the point where non-mathematicians can think about picking them up. I believe that some of these tools will play a key role in problems such as representation discovery in machine learning.
I will be giving a short communications talk on my work with Tom Larkworthy on motion planning for self-reconfigurable robots, visualized here: http://sites.google.com/site/tomlarkworthy/.
I am off to a Dagstuhl workshop entitled Learning paradigms in dynamic environments. The short summary of some of the main goals of this workshop:
Two problem areas which focus on the problem complex and can be fruitfully addressed within the limited time span of a seminar are the autonomous development and learning of structures and modularity, and the investigation of implicit biases, priors and problem shaping in natural problems which allow humans to efficiently tackle dynamic problems.
If you’ve followed some of my earlier technical posts, you will know that I consider the problem of robust autonomy in a truly dynamic environment to be one of the big issues within AI. So, I am quite excited to hear what folks have to say on this topic. I will also eventually post a link to the slides associated with my own talk at this meeting.
My friend and former lab-mate, Shilpa Gulati, is returning to Antarctica for a second mission involving robotic exploration. She is maintaining a blog at: http://sgulati.wordpress.com/.
I look forward to reading about her exploits…
During my recent trip to Munich, I got the chance to visit the recently opened BMW Welt and Museum. This is a spectacular and very impressive building, e.g., see this picture:
I got to admire and occasionally sit on a whole variety of vehicles ranging from the M6 roadster to several performance motorcycles. Moreover, the museum had some very cool displays – everything from James Bond’s Z4 to a concept model of the Z9. It was also really interesting to see some of the oldies – an early motorbike that completed the Paris-Dakar rally, engines as they have evolved from just after WWII to now, the BMW formula one car… lots of history, very elegantly presented and explained.
In the BMW Welt area, I saw the hydrogen car on display. The lady from BMW told me that it will still be at least a decade before something like it hits the streets. Still, they were serious enough about the idea to display it in their main showroom! From that display, and the conversation with the lady who explained it, it became quite clear that the success of such ideas depends critically on external factors: the availability of hydrogen refuelling stations in remote places, the safety and customer-friendliness of the systems needed to store such fuel at pumping stations and on the vehicle, the econo-politics of the supply chain behind this fuel versus conventional fossil fuels, and so on. This is far more than ‘just’ an engineering challenge. Compared to all this, even a company like BMW is a small player!
All in all, a rainy day well spent.
I have been busy traveling, going between the extremes of sunny Rome and chilly Minneapolis.
Last week, I attended an EUCognition meeting on ‘social cognition’ – with many talks addressing the general area of language learning and its use in communication between cognitive agents. The speakers included Stevan Harnad, Luc Steels, Michael Arbib and Jordan Pollack. So, quite apart from the location in Venice, the event was highly informative. I was particularly intrigued by Arbib’s description of construction grammars and the issue of how new content (e.g., a new counting system) makes its way into language and turns into efficient communication on that subject. I wonder if this can also help explain some questions I brought up in earlier posts, about how dynamically dexterous behaviours are acquired, stored and improved.
This week, I am attending a workshop on the topic of protein folding – at the Institute for Mathematics and its Applications, in Minneapolis. Despite appearances to the contrary, there are many common threads between the problems of modeling and reasoning about robotic and protein motions. For instance, a couple of speakers mentioned machine learning tools that would be familiar to a roboticist – the use of manifold learning algorithms to infer good ‘low-dimensional representations’ that help speed up computation, the use of databases to induce empirical energy landscapes, etc. However, the thing that strikes me most about this area is the immense complexity and intricate hierarchy of the system under study – many complex engineering systems seem relatively simple in comparison to the actual complexity of biological systems. Lots more to be done!
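The low-dimensional-representation idea is easy to illustrate. Here is a toy sketch – entirely my own construction, not anything from the talks, which used far more sophisticated nonlinear methods: a cloud of 3-D ‘configurations’ that really lives along a single direction, whose dominant coordinate a bare-bones power-iteration PCA recovers, showing why one coordinate can stand in for the full space.

```python
import math

# Toy "conformations": points in 3-D that really live near a 1-D curve
# (a straight line plus tiny wiggles), standing in for the observation that
# high-dimensional molecular or robotic configurations often have low
# intrinsic dimension. We recover the dominant direction with a tiny
# power-iteration PCA -- no external libraries.

def pca_top_component(points):
    d, n = len(points[0]), len(points)
    mean = [sum(p[i] for p in points) / n for i in range(d)]
    centred = [[p[i] - mean[i] for i in range(d)] for p in points]
    # sample covariance matrix
    cov = [[sum(c[i] * c[j] for c in centred) / n for j in range(d)]
           for i in range(d)]
    # power iteration for the leading eigenvector
    v = [1.0] * d
    for _ in range(200):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                 for i in range(d))
    total = sum(cov[i][i] for i in range(d))
    return v, eigval / total  # direction, fraction of variance explained

pts = [(t, 2 * t + 0.01 * math.sin(13 * t), -t + 0.01 * math.cos(7 * t))
       for t in [i / 50 for i in range(100)]]
direction, explained = pca_top_component(pts)
print(explained)  # nearly all the variance lies along one direction
```

Of course, PCA only finds linear structure; the appeal of manifold learning is that it plays the same game when the low-dimensional set is curved.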
I had an enjoyable visit to Goettingen (Stadt der Wissenschaft, or City of Science) last week, where I was visiting the Max Planck Institute for Dynamics and Self-Organization. I had interesting discussions with people there and I even had the chance to walk around and see the city. Among other nerdy things, I visited the memorial to Gauss (my academic ancestor).
On the technical side, I learned (or more accurately, was reminded in substantial detail) about dynamical approaches to learning and intelligence. These approaches are based on the view that much of what we attribute to intelligence may be explained in terms of phenomena that arise in nonlinear dynamics and “complex systems” theory. So, for instance, I learned about notions such as “unstable attractors”, a seemingly paradoxical phrase but one that explains how fleeting structures can arise in an otherwise deterministic process.
Most substantially, I finally understood the core ideas behind the whole business of self-organization principles. If one begins with a dynamical system, induces it to become weakly chaotic so that it can explore a subset of its phase space effectively, and then constrains the system to stick to regions where its behavior is predictable, then the system will automatically home in on a variety of attractors (stable or intermittent) that correspond to interesting behaviors. One can define processes for generating such weak chaos and for selecting actions to improve predictability without committing to any specific task. Then – if one does this right – one can extend this whole notion to high-dimensional systems, where the criticality of the behavior also yields scale-free search of the space of possible behaviors. I find this really neat because it answers one of the questions I have been pondering for a while, and provides what I consider to be a better answer than random or evolutionary search.
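To make the mechanism concrete, here is a minimal sketch – my own toy construction, not anything presented at the institute: a logistic map started in a weakly chaotic regime, whose control parameter is nudged at random and the nudge kept only when it improves predictability (measured by the map’s average local Lyapunov exponent). With no task specified, the parameter drifts into more predictable (periodic) windows – attractors the system was never explicitly told to find.

```python
import math
import random

def lyapunov(r, x0=0.4, n=400, burn=100):
    """Average log stretching rate of the logistic map x -> r*x*(1-x).
    Negative values mean nearby trajectories converge: predictable."""
    x, acc = x0, 0.0
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
    return acc / n

random.seed(0)
r = 3.70  # start in a weakly chaotic regime
for _ in range(2000):
    # random exploratory nudge, kept inside the interesting parameter range
    cand = min(3.99, max(3.0, r + random.gauss(0.0, 0.01)))
    # keep the nudge only if it improves predictability
    if lyapunov(cand) < lyapunov(r):
        r = cand

print(r, lyapunov(r))  # r has drifted toward a more predictable window
```

The two ingredients from the paragraph above map directly onto the code: the random nudges supply the weak-chaos exploration, and the acceptance rule is the task-free preference for predictability.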
Of course, while this concept is a nice approach to bottom-up emergence of behaviors, it does not quite explain how more sophisticated and hierarchical planned behaviors come about. Nonetheless, it was good (and very timely) for me to learn about these bottom-up notions – I can now go explore how this ties in with other top-down notions that I have already been studying. In the end, I am fairly non-partisan in that I believe the real answer is most likely in between the two extremes that people sometimes gravitate towards.