Is Artificial Intelligence Safe for Humanity?

This is the title of a feature article written by RAD alumnus Stefano Albrecht in a recent issue of EUSci magazine (see p. 22).

The question is of course much talked about, with several high-profile advocates answering both yes and no. For me, the most interesting outcome of all these debates is the observation that there is a need to focus “not only on making AI more capable, but also on maximizing the societal benefit of AI” (words taken from an Open Letter drafted by the Future of Life Institute, and signed by many AI researchers including yours truly). From a scientific perspective, even clearly distinguishing between these two objectives is something of an open issue!

Understanding diversity

A theme that is increasingly gaining traction among designers of autonomous agents of various kinds is the idea that long-term robust behavior requires a bank of diverse strategies, marshaled in any given instance to produce the desired decision or plan. So, if you are trying to build an autonomous robot, worry less about perfecting individual local behaviors and focus more on the large-scale structure of all the different tasks you expect to encounter, and on how you will exploit properties of this global structure. If you are building a trading agent, perhaps a bank of low-complexity rules, put together, can do better (in the sense of robustness) than a fancier but more delicate model.
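To make the “bank of diverse strategies” idea concrete, here is a minimal sketch in the trading-agent setting. The individual rules and the majority-vote combination are my own illustrative choices, not anything prescribed above.

```python
import numpy as np

# Hypothetical sketch: a "bank" of low-complexity trading rules combined by
# majority vote. Each rule maps a window of recent prices (a numpy array) to
# {-1, 0, +1} (sell / hold / buy). The rule definitions are placeholders; the
# point is the structure, not the rules themselves.

def momentum_rule(prices):
    # Follow the short-term trend.
    return np.sign(prices[-1] - prices[-5])

def mean_reversion_rule(prices):
    # Bet that price returns to its recent average.
    return np.sign(prices[-20:].mean() - prices[-1])

def breakout_rule(prices):
    # Buy when the current price is a 20-period high.
    return 1.0 if prices[-1] >= prices[-20:].max() else 0.0

RULES = [momentum_rule, mean_reversion_rule, breakout_rule]

def vote(prices):
    """Combine the bank of simple rules by majority vote."""
    signals = [rule(prices) for rule in RULES]
    return np.sign(sum(signals))
```

The appeal, in the robustness sense above, is that no single rule has to be right everywhere: each is cheap, easy to reason about, and the ensemble degrades gracefully when any one of them fails.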

Recently, someone pointed me to work by the economist, Scott Page, on this theme but in a much broader context. Here is an NYT interview summarizing his view: http://www.nytimes.com/2008/01/08/science/08conv.html. He has also written books on the topic, e.g., Diversity and Complexity, Princeton University Press 2010. Interesting stuff.

Extended uncertainty

I have been following Rick Bookstaber’s blog and there is a very interesting discussion there about robustness in the face of large-scale uncertainty. As a conceptual issue, this has been on my mind for a while. In fact, it was one of the motivations behind some aspects of my doctoral thesis, and I am actively trying to address this question in the context of my robotics and autonomous agents work. All this has made me realize that this is one of the big hard problems (though a somewhat under-appreciated one) within the science of decision processes.

Bookstaber’s take on this issue is well put in this paper in the Journal of Theoretical Biology. In a nutshell, he argues that if an animal has to make decisions in a world where it can observe some states, s, but not other states, t (whose evolution it simply cannot model – this is the concept of extended uncertainty), then making an optimal decision in this setting implies “averaging” over the extended uncertainty in a proper decision-theoretic sense. The main implication is that the animal will adopt a set of coarse behaviours that are sub-optimal with respect to many observable features in the restricted state space, s. There is a lot more detail in that paper, mainly focussed on theoretical-biology issues of animal behaviour.
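A toy numerical sketch of the averaging step may help. This is my own minimal construction, not Bookstaber’s model; the state and action sizes and the utilities are arbitrary.

```python
import numpy as np

# Toy illustration of "extended uncertainty": the world state is a pair (s, t);
# the animal observes s but cannot model t, so it must choose an action a(s)
# that maximizes utility averaged over t.

rng = np.random.default_rng(0)
n_s, n_t, n_a = 5, 4, 3                  # observable states, hidden states, actions
U = rng.normal(size=(n_s, n_t, n_a))     # utility of action a in full state (s, t)
p_t = np.ones(n_t) / n_t                 # implicit prior over the unmodelled states

# Fully informed policy: best action for each (s, t) pair.
informed = U.argmax(axis=2)

# Coarse policy: best action for each s after averaging over t.
coarse = (U * p_t[None, :, None]).sum(axis=1).argmax(axis=1)

print("informed policy (rows = s, cols = t):\n", informed)
print("coarse policy (one action per s):     ", coarse)
# The coarse policy ignores t entirely and responds only to s; it is
# sub-optimal for many particular (s, t) combinations, yet it is the
# decision-theoretic optimum given that t cannot be modelled.
```

This only captures the averaging step, not the richer behavioural argument in the paper, but it shows where the coarseness comes from.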

I find this particularly interesting because I too had been approaching the problem of learning robust behaviours by assuming that there are coarse behaviours (though defined differently from how Bookstaber defines them) for most tasks of real interest, such that fine behaviours, tuned to specific quantitative optimality criteria, are specializations of these coarse behaviours. Two questions arise within this program. First, how will you find concise descriptions of these coarse behaviours while learning from experience in a complex world? Many learning techniques, such as reinforcement learning, run into deep computational difficulties in this hierarchical setting. However, this is a well-posed technical question that is already being addressed by many people, and one that I am actively working on as well. A somewhat more foundational conceptual question is: why should these types of coarse behaviours exist in general? I am a control theorist who would ideally like to find solutions to these learning problems that are independent of special domains (e.g., is it clear that coarse behaviours would exist in an arbitrary decision process such as, say, autonomous trading?), so this generality question is of interest to me. Bookstaber’s paper points towards the answer in a general decision-theoretic sense.
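For the first, technical question, one common way to phrase the coarse/fine split is in terms of options-style hierarchies. The sketch below is my own framing of that structure, not a specific algorithm from this post, and all the names in it are placeholders.

```python
# Minimal sketch of the coarse/fine decomposition: a coarse behaviour is a
# policy over a small set of options, and each option carries a fine policy
# that specializes it to a particular quantitative criterion.

from dataclasses import dataclass
from typing import Callable, Dict

State = dict   # placeholder state representation

@dataclass
class Option:
    name: str
    initiation: Callable[[State], bool]    # where the option may start
    fine_policy: Callable[[State], str]    # specialization: state -> low-level action
    termination: Callable[[State], bool]   # where the option ends

def coarse_policy(state: State, options: Dict[str, Option]) -> Option:
    """Coarse behaviour: pick an applicable option by some cheap rule.

    Learning happens at two levels: which option to invoke (coarse), and how
    each option's fine_policy is tuned (fine). The hard parts are discovering
    the options and keeping the coarse description concise.
    """
    applicable = [o for o in options.values() if o.initiation(state)]
    return applicable[0]   # stand-in for a learned selection rule
```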

In an earlier post, I tried to make an argument that the point of “intelligence” is to be able to act robustly in an uncertain world. Now we see that optimal decisions in the face of extended uncertainty imply the existence of coarse behaviours for many common decision tasks. So, perhaps, an agent who is learning a complex behaviour in an uncertain world is better off structuring this process in a similar multi-level way…

Bailouts… theory and practice.

I just visited Ben Bernanke’s academic website at Princeton, after H sent me a link to an exchange between him and Ron Paul.

On the website, he has a paper entitled “Should Central Banks Respond to Movements in Asset Prices?”, which argues that “stochastic simulations show little gain from an independent response to stock prices by the central bank”. More precisely:

… inflation targeting has generally performed well in practice. However, so far this approach has not often been stress-tested by large swings in asset prices. Our earlier research employed simulations of a small, calibrated macro model to examine how an inflation-targeting policy, defined as one in which the central bank’s instrument interest rate responds primarily to changes in expected inflation, might fare in the face of a boom-and-bust cycle in asset prices. We found that an aggressive inflation-targeting policy rule (in our simulations, one in which the coefficient relating the instrument interest rate to expected inflation is 2.0) substantially stabilizes both output and inflation in scenarios in which a bubble in stock prices develops and then collapses, as well as in scenarios in which technology shocks drive stock prices.

… In the spirit of recent work on robust control, the exercises in our earlier paper analyzed the performance of policy rules in worst-case scenarios, rather than on average.  However, the more conventional approach to policy evaluation is to assess the expected loss for alternative policy rules with respect to the entire probability distribution of economic shocks, not just the most unfavorable outcomes.  That is the approach taken in the present article.


The past few weeks must surely count as the most unfavorable outcome mentioned above. I don’t mean to cast aspersions on the ability of the people concerned, but this just underlines how little we understand about the concept of robustness and what to do about it. For instance, Mandelbrot has spent a lifetime exhorting people to believe in fat tails, and I think people do, but we still lack a firm footing for getting simulations of such things to mirror the realities that matter!
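As a back-of-the-envelope illustration of the fat-tails point, one can compare tail probabilities under a Gaussian model and a heavy-tailed Student-t model with the same standard deviation. The parameters below are arbitrary, chosen only to show the order-of-magnitude gap.

```python
import numpy as np

# Tail probability of a "five-sigma" daily loss under a Gaussian model versus
# a fat-tailed Student-t model rescaled to the same standard deviation.

rng = np.random.default_rng(1)
n = 2_000_000
sigma = 0.02                         # roughly a 2% daily move

gauss = rng.normal(0.0, sigma, size=n)
nu = 3.0                             # heavy-tailed degrees of freedom
student = rng.standard_t(nu, size=n) * sigma / np.sqrt(nu / (nu - 2))

threshold = 5 * sigma
print("P(loss > 5 sigma), Gaussian :", np.mean(gauss < -threshold))
print("P(loss > 5 sigma), Student-t:", np.mean(student < -threshold))
# A one-sided 5-sigma event has probability ~3e-7 under the Gaussian model,
# so a simulation of this size rarely sees one; the fat-tailed model assigns
# it a probability that is orders of magnitude larger.
```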

Indeed, we live in most interesting times!

Scalable versions of differential game theory

Dr. Benjamin Mann, a program manager at DARPA, outlines 23 mathematical challenges for this century. Many of them are fairly well known problems that would appear in any such list. One challenge question that may not have been so easy to guess (although I fully support its selection) is “What new scalable mathematics is needed to replace the traditional PDE approach to differential games?”. There is also a passing mention of the importance of repeated games in Steve Smale’s list, under the heading of limits of intelligence – which I think is appropriate because whatever innovations come out of solving the generalized differential games challenge will also have a lot to do with the problem of creating an intelligent autonomous system.

Although they may sound quaint and obscure, differential games have a lot to do with many problems of current interest. One of the holy grails of robotics is an autonomous system that can reliably perform complex dynamical tasks in a hostile environment – a game against nature involving dynamics. At a conceptual level, we do understand many things about how to achieve this goal – the theory and practice of reinforcement learning (which, at its core, utilizes a stochastic and data-driven variant of the classical Hamilton-Jacobi-Bellman PDE) being a good example. However, the limitations of these general approaches have a lot to do with the limitations that Dr. Mann refers to – the computational complexity is unbearable in realistic settings. The best methods for the existing PDE-based approaches to differential games involve innovative numerical techniques – multi-resolution grids, fast-marching methods, etc. Still, these are all rather prohibitive for real-time, high-dimensional applications.
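To see where the blow-up comes from, here is a minimal tabular value-iteration sketch – the discrete cousin of the HJB equation – on a toy one-dimensional problem, followed by a note on how the same sweep scales with dimension. All numbers are illustrative.

```python
import numpy as np

# Tabular value iteration on a 1-D chain: reach the goal cell at minimum
# discounted cost. Every Bellman sweep touches every discretized state.

n = 50                               # cells per dimension
goal = n - 1
gamma = 0.95
V = np.zeros(n)

for _ in range(500):                 # Bellman sweeps
    V_new = V.copy()
    for s in range(n):
        if s == goal:
            continue                 # goal is absorbing, value 0
        candidates = []
        for s_next in (max(s - 1, 0), min(s + 1, n - 1)):   # actions: left/right
            candidates.append(1.0 + gamma * V[s_next])       # unit step cost
        V_new[s] = min(candidates)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print("converged value at start state:", V[0])
# In d dimensions the same sweep touches n**d cells: at n = 50, d = 6 already
# means ~1.6e10 states, which is why the grid/PDE approach to differential
# games needs something fundamentally more scalable, not just better grids.
```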

A direction that I find promising (as I have mentioned before) is that of using topological ideas to focus the algorithms, e.g., as in the SToMP project. There are some useful nuggets in recent publications by researchers who are part of this project. I am sufficiently intrigued that I am going to work on some aspects of this general problem myself. I hope to report back with more concrete and interesting results one of these days…

What might give rise to the cumbersome faculty we call intelligence?

Last week, I gave a talk as part of the Neuroinformatics research course, which is aimed at incoming graduate students who are figuring out what to do. As this was a 2-hour talk that allowed me to introduce essential ideas slowly, I decided to spend some time on conceptual issues before jumping into the technical material. One such thread that finally came together (after refinements based on two earlier talks) was the following.

Why would an animal or robot need intelligence? The standard answer is that intelligence goes with autonomous behavior. However, even some fairly dumb systems can be “autonomous” – my colleague Michael uses the example of a minimalist robot that ‘survived’ in the wild for a year or two because it was nothing more than a motor attached to a solar panel! Well, that’s pathological! So, I present the example of the fly (for concreteness, consider the fruit fly, Drosophila melanogaster, one of the most studied organisms in the world). It has a number of neural circuits, hard-wired by evolution, that allow it to behave in interesting ways. For instance, its visual sense of a predator is based on a simple calculation of expansion in its optic field. This signal travels through a neural pathway to the halteres (vestigial wing-like organs), which are very good direction sensors. This then feeds directly into phase shifts in the wing, which otherwise behaves as a simple oscillator. So, as soon as a predator is sighted, a saccade is initiated, and then the halteres are used to correct direction and complete the turn. The combination of all this is a very agile predator-avoidance mechanism. And, in a sense, there is not much “intelligence” here – if by that term we mean something beyond standard closed-loop control.
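To underline the closed-loop-control reading, here is a deliberately crude sketch of that escape reflex as a feedback loop. The thresholds, gains and variable names are all made up; only the structure matters.

```python
# Caricature of the fly's predator-avoidance loop as plain closed-loop control.

LOOM_THRESHOLD = 0.5     # optic-field expansion rate that signals a predator
TURN_GAIN = 0.8          # proportional correction driven by the halteres

def fly_step(expansion_rate, heading_error, wing_phase):
    """One control tick: loom detection -> saccade -> haltere-based correction.

    expansion_rate: visual expansion of the optic field (looming stimulus)
    heading_error:  direction error sensed by the halteres during the turn
    wing_phase:     phase of the wing oscillator being modulated
    """
    if expansion_rate > LOOM_THRESHOLD:
        wing_phase += 1.0                    # initiate the saccade (large phase shift)
    wing_phase -= TURN_GAIN * heading_error  # halteres close the loop on direction
    return wing_phase
```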

I claim that the reason an animal would bother to adopt/evolve real “intelligence” is:

(a) to be able to go beyond hard-wired or slowly evolved morphological adaptations

(b) to achieve a combination of robustness, flexibility and efficiency. Robustness refers to the ability to function in a variety of environments. What does it mean to “function”? This is where flexibility comes in. I am using that term to mean the ability to carry out spatio-temporally extended behaviors (or, if you like, some sort of “complex dynamics” that can have many nontrivial variations). And finally, efficiency: energy is a limited resource, so animals and robots need to conserve it.

Clearly, many behaviors admit simple explanations like the one for the fly. However, equally clearly, I don’t have special circuitry to enable me to play football or the cello. I learn these largely artificial behaviors. And I learn to perform them safely and reliably under a variety of environmental conditions. And I eventually learn numerous variations on these behaviors, some of which are highly efficient with respect to energy, attention or other aesthetic criteria.

The ability to do this is why an animal, or a robot, needs intelligence. As a related issue (although one I will not dwell on today), this suggests a continuum from dumb feedback loops to smart problem solvers – so that, I claim, there need not be any essential magic in the ability to deal with more abstract problems. On a second related note, a student recently pointed me to an interesting hypothesis – that the cerebellum first evolved as a state estimator for computing predator/prey locations and figuring out how to use the body dynamics.

The next, somewhat contentious, issue is that of robustness. What is this elusive quantity? As someone originally trained as a control theorist, I am quite used to answering this question – although the technical aspects of the answer seem to require radical improvements in order to apply to situations common in robotics or AI. In a nutshell, a behavior is robust if it persists in the face of a family or set of environmental conditions. In the simplest setting: is a control loop stable for a whole family of plants? In a more complex setting: if you have a probability distribution over environments, can you guarantee something in the face of all of those possibilities? An interesting formulation of this question is in a paper by Carlson and Doyle, where they use this idea to model a spatial process (the spread of forest fires) via a variational equation that yields a configuration of trees and empty spaces maximizing the conserved area over a space of ignition events. They use this to argue that most complex systems are only able to achieve a “fragile sort of robustness” (they call this Highly Optimized Tolerance). This is an instance of a more generic strategy – deriving control strategies as the equilibria of dynamic games against nature.
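The two simplest framings of robustness above can be phrased very compactly. The sketch below uses a hypothetical scalar plant dx/dt = a·x + u with a fixed proportional controller u = -k·x; nothing in it comes from the papers cited, and it only illustrates the worst-case versus probabilistic readings.

```python
import numpy as np

# Does a single fixed controller stabilize an entire family of plants?
# Plant: dx/dt = a*x + u, with uncertain pole a; controller: u = -k*x,
# so the closed-loop pole is (a - k) and stability means (a - k) < 0.

k = 2.0                                    # fixed feedback gain
a_family = np.linspace(-1.0, 1.5, 200)     # family of plants (uncertain pole)

worst_case_pole = np.max(a_family - k)
print("robustly stable for the whole family:", worst_case_pole < 0)

# Probabilistic variant: given a distribution over environments, certify the
# property with high probability rather than for every member of the family.
rng = np.random.default_rng(0)
a_samples = rng.normal(loc=0.5, scale=0.5, size=10_000)
print("fraction of sampled plants stabilized:", np.mean(a_samples - k < 0))
```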

The above arguments still need some more fleshing out, but they represent a skeleton for an approach to planning and control strategy design that goes beyond many standard machine-learning-based methods by asking some under-appreciated questions – questions that receive very little coverage in the function-approximation-dominated ML setting, but are nonetheless crucial for achieving the stated goals.