Geometric and topological methods in control and robotics

I am off to a workshop on this topic, in (hopefully a bit sunny) Madrid: http://webpages.ull.es/users/gmcnet/GTMCR2010/Home.html.

Over the past few years, there has been a slow build-up of interest in this topic. I am especially excited to see algorithmic versions of classical mathematical ideas slowly take shape – to the point where non-mathematicians can think about picking them up. I believe that some of these tools will play a key role in problems such as representation discovery in machine learning.

I will be giving a short communications talk on Tom Larkworthy’s and my work on motion planning for self-reconfigurable robots, visualized here: http://sites.google.com/site/tomlarkworthy/.

Hijacking brains

I came across this article and video via IEEE Spectrum. As the article notes:

Berkeley scientists appear to have demonstrated an impressive degree of control over their insect’s flight; they report being able to use an implant for neural stimulation of the beetle’s brain to start, stop, and control the insect in flight. They could even command turns by stimulating the basalar muscles.

The abstract (and some supporting material) for the corresponding scientific article is available here: http://www.frontiersin.org/integrativeneuroscience/paper/10.3389/neuro.07/024.2009/

This is impressive!

What I would be even more curious to see, when someone gets that far, is modulation of non-trivial choice behaviour (preferably in the presence of controlled adversaries). One of the benefits of such experiments, and presumably one of the reasons such work gets funded (apart from the obvious military uses that I am not so excited about), is the demonstration that we really do understand some aspects of the behaviour – well enough to carefully tweak it. So, if we could understand human decision-making behaviours in similar ways…

Uncertainty and non-determinism

What is the difference between uncertainty and non-determinism? I am beginning to find that many graduate students, and even many researchers, I come across do not really distinguish between the two. In fact, even within the first class – uncertainty – what is the difference between a probabilistic description and, say, a set-membership description?

Now, if I pose the question properly, it becomes clear that there are differences, and it is not that hard to guess what they are from the words used. However, these distinctions matter greatly for various techniques, e.g., the design of control strategies. If what you are dealing with is probabilistic uncertainty that can be modeled and statistically estimated, then one can design control strategies in a way that is not all that different from the procedure for deterministic, noise-free systems. The “right” procedure can be a lot harder to come by for set-membership uncertainty. For more general forms of non-determinism, the concerns and design procedures differ even more from the case of “noisy systems”.
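
To make the distinction concrete, here is a minimal sketch – my own illustration, not drawn from any particular paper – of how the first two descriptions change the design criterion for a one-step scalar problem with next state x + u + w and cost (x + u + w)² + u². The noise level and the bound are arbitrary choices.

```python
import numpy as np

def stochastic_cost(u, x, w_samples):
    # Probabilistic uncertainty: average the cost over samples from a noise model.
    return np.mean((x + u + w_samples) ** 2 + u ** 2)

def set_membership_cost(u, x, w_bound):
    # Set-membership uncertainty: guard against the worst w in [-w_bound, w_bound].
    # The cost is convex in w, so the maximum sits at an endpoint.
    return max((x + u + w) ** 2 + u ** 2 for w in (-w_bound, w_bound))

x = 1.0
candidates = np.linspace(-2.0, 2.0, 401)
w_samples = np.random.default_rng(0).normal(0.0, 0.5, 100_000)

u_stochastic = min(candidates, key=lambda u: stochastic_cost(u, x, w_samples))
u_robust = min(candidates, key=lambda u: set_membership_cost(u, x, 0.5))
print(u_stochastic, u_robust)  # about -0.5 vs. about -0.75
```

The stochastic answer coincides with the noise-free one (the certainty-equivalence effect alluded to above), while the worst-case criterion picks a noticeably more conservative action – and genuine non-determinism, where w is chosen by an adversary reacting to u over time, does not reduce to a one-line criterion at all.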

So what? Well, for one thing, we have very little work in the robot learning or control learning areas that is able to do anything beyond the data-driven equivalent of stochastic control in the presence of “small noise”. Currently, non-determinism is really only handled by “re-planning”, but we don’t have very efficient methods for that either, and even then the result is built as a hierarchy of non-interacting black boxes. So, this area needs more work – which begins with more awareness of the underlying issue!
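
For what it is worth, the black-box pattern I have in mind looks roughly like the following caricature; all of the function names are hypothetical stand-ins, not any particular library’s API.

```python
def replan_loop(state, goal, plan, execute_step, observe, diverged):
    # Plan, act, observe; throw the whole plan away whenever reality diverges.
    path = plan(state, goal)
    while state != goal:
        execute_step(path[0])
        state = observe()
        if diverged(state, path):
            path = plan(state, goal)  # the only concession to non-determinism
        else:
            path = path[1:]
    return state
```

The planner and the executive never exchange anything richer than a fresh plan, which is exactly the non-interaction I am complaining about.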

PS: I recently came across this interesting paper that is loosely related to what I say above but has a different focus and motivation: Ken Binmore, Rational Decisions in Large Worlds, Annales d’Économie et de Statistique, No. 86, pp. 25-41, 2007 (from the looks of it, this is a short version of a book).

Bailouts… theory and practice.

I just visited Ben Bernanke’s academic website at Princeton, after H sent me a link to an exchange between him and Ron Paul.

On the website, he has a paper entitled “Should Central Banks Respond to Movements in Asset Prices?”, which argues that “stochastic simulations show little gain from an independent response to stock prices by the central bank”. More precisely:

… inflation targeting has generally performed well in practice. However, so far this approach has not often been stress-tested by large swings in asset prices. Our earlier research employed simulations of a small, calibrated macro model to examine how an inflation-targeting policy, defined as one in which the central bank’s instrument interest rate responds primarily to changes in expected inflation, might fare in the face of a boom-and-bust cycle in asset prices. We found that an aggressive inflation-targeting policy rule (in our simulations, one in which the coefficient relating the instrument interest rate to expected inflation is 2.0) substantially stabilizes both output and inflation in scenarios in which a bubble in stock prices develops and then collapses, as well as in scenarios in which technology shocks drive stock prices.

… In the spirit of recent work on robust control, the exercises in our earlier paper analyzed the performance of policy rules in worst-case scenarios, rather than on average. However, the more conventional approach to policy evaluation is to assess the expected loss for alternative policy rules with respect to the entire probability distribution of economic shocks, not just the most unfavorable outcomes. That is the approach taken in the present article.

The past few weeks must surely count as the “most unfavorable outcome” mentioned above. I don’t mean to cast aspersions on the abilities of the people concerned, but this does underline how little we understand about the concept of robustness and what to do about it. For instance, Mandelbrot has spent a lifetime exhorting people to believe in fat tails, and I think people now do, but we still lack a firm footing for getting simulations of such things to mirror the realities that matter!
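
As a quick numeric illustration of the fat-tail point (my own toy comparison, with arbitrary parameters): the frequency of “6-sigma” events under a Gaussian model versus a heavy-tailed one, here a Student-t with 3 degrees of freedom rescaled to unit variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
gauss = rng.normal(0.0, 1.0, n)
t3 = rng.standard_t(3, n) / np.sqrt(3.0)  # t with 3 dof has variance 3; rescale to 1

print((np.abs(gauss) > 6).mean())  # ~0 in this sample; true probability is ~2e-9
print((np.abs(t3) > 6).mean())     # ~2e-3: roughly a million times more frequent
```

If the world serves t-like shocks and your simulations assume Gaussian ones, the events that matter simply never show up in the runs.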

Indeed, we live in most interesting times!

An Interesting Modelling Problem – Magnetic Equilibria

I have just returned from a visit to the Max Planck Institute for Plasma Physics. The purpose of my visit was to explore the possibility of applying some recent advances in nonlinear dimensionality reduction, and related modelling techniques, to the problem of regime identification in tokamaks.

The core issue is that of using magnetic measurements at various points on the vacuum vessel, together with some additional measurements from inside the vacuum vessel (such as from the Motional Stark Effect diagnostic), to identify the plasma’s flux profile, its boundary and, essentially, the equilibrium configuration it is close to. All of this is defined by an analytical model, the Grad-Shafranov PDE, but solving it accurately is computationally intensive and requires a distributed computing environment, i.e., a reasonably large cluster. So, what the diagnostics and control people actually do is collect a database of equilibria from off-line computations, fit a simpler statistical model, and then use that in the experimental system. Currently, this statistical model is a regression model that is something like a ‘quadratic’ PCA model (a technique called function parameterization).
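
My rough understanding of that surrogate, sketched here with synthetic stand-in data (the array shapes and the exact pipeline are my assumptions, not the actual ASDEX Upgrade implementation): compress the measurement vectors with PCA, then fit a quadratic regression from the reduced coordinates to the equilibrium parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))  # stand-in: magnetic probe signals, one row per equilibrium
y = rng.normal(size=(500, 3))   # stand-in: equilibrium parameters from off-line solves

surrogate = make_pipeline(
    PCA(n_components=10),          # compress the highly correlated probe signals
    PolynomialFeatures(degree=2),  # 'quadratic' features of the PCA scores
    LinearRegression(),
)
surrogate.fit(X, y)
print(surrogate.predict(X[:1]))    # fast enough to sit inside a control loop
```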

I got interested in this issue during my final few months at National Instruments, during a visit by one of the ASDEX Upgrade scientists involved with diagnostics. He is currently involved in setting up a closed-loop system that will use all these methods to identify the regime, locate possible instabilities (e.g., turbulence that cools down the plasma) and ‘shoot them down’ using steerable mirrors that focus a beam to locally heat the plasma.

My hypothesis is that NLDR algorithms may enable models that achieve at least the current levels of accuracy with less computation, and perhaps even pick up on ‘quirks’ that matter for real-time control. However, all this is still a bit speculative. For one thing, it may turn out that the function parameterization procedure currently in use on the ASDEX Upgrade already captures all there is to capture. Moreover, over time it will surely become possible to solve the PDEs in real time, at which point this extra step becomes redundant.
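
The NLDR variant would swap the linear PCA step for a manifold learner – Isomap in this hedged sketch, with the same synthetic stand-ins as before:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 60))  # stand-in measurement database
y = rng.normal(size=(500, 3))   # stand-in equilibrium parameters

isomap = Isomap(n_neighbors=10, n_components=5)
Z = isomap.fit_transform(X)                # nonlinear coordinates on the data manifold
surrogate = LinearRegression().fit(Z, y)
print(surrogate.predict(isomap.transform(X[:1])))  # new shots map through the same embedding
```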

However, despite all this, the thing that intrigues me about this problem is the following (somewhat orthogonal to the practical goal, but reason enough for me to spend a bit of time on it): do techniques like NLDR actually recover abstractions that correspond directly to the essence of the problem, in the way the physical models are understood? For example, if we recovered principal directions on a submanifold describing all magnetic equilibria in the database and built models using them, would the result correspond directly to analytical solutions of the Grad-Shafranov PDE? If this happened, I would be very pleased indeed, and it would say something important about the properties of these learning algorithms!

What might give rise to the cumbersome faculty we call intelligence?

Last week, I gave a talk as part of the Neuroinformatics research course, which is aimed at incoming graduate students who are figuring out what to do. As this was a two-hour talk that allowed me to introduce essential ideas slowly, I decided to spend some time on conceptual issues before jumping into the technical material. One thread that finally came together (after refinements based on two earlier talks) was the following.

Why would an animal or robot need intelligence? The standard answer is that intelligence goes with autonomous behavior. However, even some fairly dumb systems can be “autonomous” – my colleague Michael uses the example of a minimalist robot that ‘survived’ in the wild for a year or two because it was nothing more than a motor attached to a solar panel! Well, that’s pathological! So, I present the example of the fly (for concreteness, consider the fruitfly, Drosophila melanogaster, one of the most studied organisms in the world). It has a number of neural circuits, hard-wired by evolution, that allow it to behave in interesting ways. For instance, its visual sense of a predator is based on a simple calculation of expansion in its optic field. This signal travels through a neural pathway to the halteres (vestigial wing-like organs), which are very good rotation sensors. These in turn feed directly into phase shifts of the wings – which otherwise beat as a simple oscillator. So, as soon as a predator is sighted, a saccade is initiated, and the halteres are then used to correct direction and complete the turn. The combination of all this gives a very agile predator-avoidance mechanism. And, in a sense, there is not much “intelligence” here – if by that term we mean something beyond standard closed-loop control.
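
Indeed, the whole escape circuit fits in a few lines of closed-loop pseudo-logic; the thresholds and signal names below are invented purely for illustration.

```python
def fly_escape_controller(optic_expansion_rate, haltere_heading_error,
                          loom_threshold=1.0, heading_tolerance=0.1):
    # Looming detector: rapid expansion in the optic field signals a predator.
    if optic_expansion_rate > loom_threshold:
        return "initiate_saccade"      # hard-wired open-loop turn
    # Halteres report rotation; phase-shift the wing oscillator to fix heading.
    if abs(haltere_heading_error) > heading_tolerance:
        return "phase_shift_wings"
    return "cruise"                    # default: wings beat as a plain oscillator
```

Nothing here goes beyond a pair of thresholded feedback rules – which is exactly the point.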

I claim that the reason an animal would bother to adopt/evolve real “intelligence” is:

(a) to be able to go beyond hard-wired or slowly evolved morphological adaptations

(b) to achieve a combination of robustness, flexibility and efficiency. Robustness refers to the ability to function in a variety of environments. What does it mean to “function”? This is where flexibility comes in: I am using that term to mean the ability to carry out spatio-temporally extended behaviors (or, if you like, some sort of “complex dynamics” that can have many nontrivial variations). And finally, energy is a limited resource, so animals and robots need to conserve it.

Clearly, many behaviors admit simple explanations like the fly’s. However, equally clearly, I don’t have special circuitry enabling me to play football or the cello. I learn these largely artificial behaviors. I learn to perform them safely and reliably under a variety of environmental conditions. And I eventually learn numerous variations on them, some of which are highly efficient with respect to energy, attention or other aesthetic criteria.

The ability to do all this is why an animal, or a robot, needs intelligence. As a related issue (although not one I will dwell on today), this suggests a continuum from dumb feedback loops to smart problem solvers – so that, I claim, there need not be any essential magic in the ability to deal with more abstract problems. On a second related note, a student recently pointed me to an interesting hypothesis – that the cerebellum first evolved as a state estimator for computing predator/prey locations and for figuring out how to exploit the body dynamics.

The next, somewhat contentious, issue is that of robustness. What is this elusive quantity? As someone originally trained as a control theorist, I am quite used to answering this question – although the technical aspects of the answer seem to require radical improvements before they apply to situations common in robotics or AI. In a nutshell, a behavior is robust if it persists in the face of a family or set of environmental conditions. In the simplest setting: is a control loop stable for a whole family of plants? In a more complex setting: given a probability distribution over environments, can you guarantee something in the face of all of those possibilities? An interesting formulation of this question appears in a paper by Carlson and Doyle, who use the idea to model a spatial process (the spread of forest fires) via a variational problem whose solution is a configuration of trees and empty spaces that maximizes the conserved area over a distribution of ignition events. They use this to argue that most complex systems are only able to achieve a fragile sort of robustness (they call this Highly Optimized Tolerance). This is an instance of a more generic strategy – deriving control strategies as the equilibria of dynamic games against nature.
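
A one-dimensional toy version of that yield calculation – my own caricature, not the Carlson-Doyle model itself – makes the tradeoff visible: unit-width firebreaks cost planting area but cap how much a single uniformly-placed spark can burn.

```python
import numpy as np

def expected_yield(d, L=100.0):
    # Unit-width firebreaks every d units leave L - L/d planted; a spark that
    # lands on trees burns that whole segment of length d.
    planted = L - L / d
    p_hit_trees = planted / L
    return planted - p_hit_trees * d

spacings = np.linspace(2.0, 50.0, 481)
best = spacings[np.argmax(expected_yield(spacings))]
print(best)  # interior optimum at d = 10 = sqrt(L): dense breaks waste land,
             # sparse ones risk large fires
```

Even in this toy, the optimum is a compromise tuned to the assumed spark distribution – change that distribution and the “optimal” design can fail badly, which is exactly the fragility Carlson and Doyle emphasize.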

The above arguments still need some fleshing out, but they represent a skeleton for an approach to planning and control strategy design that goes beyond many standard machine-learning-based methods, by asking some under-appreciated questions about strategy design – questions that receive very little coverage in the function-approximation-dominated ML setting, but are nonetheless crucial for achieving the stated goals.