Project COGLE, within the DARPA XAI Programme

We have been awarded one of the projects under the DARPA Explainable AI (XAI) programme, to be kicked off next week. Our project, entitled COGLE (Common Ground Learning and Explanation), will be coordinated by Xerox's Palo Alto Research Center (PARC), and I will be a PI leading technical efforts on the machine learning side of the architecture.

COGLE will be a highly interactive sensemaking system for explaining the learned performance capabilities of an autonomous system and the history that produced that learning. COGLE will initially be developed using an autonomous Unmanned Aircraft System (UAS) test bed that uses reinforcement learning (RL) to improve its performance. COGLE will support user sensemaking of autonomous system decisions, enable users to understand autonomous system strengths and weaknesses, convey an understanding of how the system will behave in the future, and provide ways for the user to improve the UAS’s performance.

To do this, COGLE will:

  1. Provide specific interactions in sensemaking user interfaces that directly support modes of human explanation known to be effective and efficient in human learning and understanding.
  2. Support mapping (grounding) of human conceptualizations onto the RL representations and processes.
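
To make point 2 a little more concrete, here is a minimal sketch, in Python, of what grounding human conceptualizations onto an RL agent's representations might look like in the simplest possible setting. Everything in it (the state features, the concept predicates, the toy Q-table policy) is invented for illustration and is not part of COGLE's actual design.

    # Purely illustrative: a toy "common ground" layer that maps human-level
    # concepts onto features of an RL agent's state, and uses that mapping to
    # phrase the agent's chosen action in those terms. None of the names below
    # come from COGLE; they are hypothetical placeholders.

    import numpy as np

    # Hypothetical UAS state features: [altitude_m, distance_to_target_m, battery_frac]
    state = np.array([35.0, 120.0, 0.2])

    # Human conceptualizations expressed as predicates over those features.
    concepts = {
        "flying low":       lambda s: s[0] < 50.0,
        "far from target":  lambda s: s[1] > 100.0,
        "battery critical": lambda s: s[2] < 0.25,
    }

    # A stand-in for the learned policy: a tiny table of action values.
    q_values = {"climb": 0.4, "proceed": 0.1, "return_to_base": 0.9}

    def explain(state, q_values):
        """Ground the greedy action in whichever human concepts currently hold."""
        chosen = max(q_values, key=q_values.get)
        active = [name for name, pred in concepts.items() if pred(state)]
        return f"Chose '{chosen}' while the situation is: {', '.join(active) or 'nominal'}."

    print(explain(state, q_values))

The real research question, of course, is learning such a mapping rather than hand-coding it, and doing so against RL representations that are far less tidy than a three-element feature vector.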

This area is increasingly being discussed in the public sphere, in the context of the growing adoption of AI into daily life; see, e.g., this article in the MIT Technology Review and this one in Nautilus, both of which refer directly to this DARPA programme. I look forward to contributing to this theme!


The program induction route to explainability and safety in autonomous systems

I will be giving a talk as part of the IPAB seminar series, in which I will try to further develop this framing of a problem that I expect our group to work on in the medium term. In a certain sense, my case will hark back to fairly well-established techniques familiar to engineers but slowly lost with the arrival of statistical methods in robotics. Many others are also picking up on the underlying needs; e.g., this recent article in the MIT Technology Review gives a popular account of current sentiment among some in the AI community.

Title:

The program induction route to explainability and safety in autonomous systems

Abstract:

The confluence of advances in diverse areas, including machine learning, large-scale computing and reliable commoditised hardware, has brought autonomous robots to the point where they are poised to be genuinely a part of our daily lives. Some of the application areas where this seems most imminent, e.g., autonomous vehicles, also bring with them stringent requirements regarding safety, explainability and trustworthiness. These needs seem to be at odds with the ways in which recent successes have been achieved, e.g., with end-to-end learning. In this talk, I will try to make a case for an approach to bridging this gap, through the use of programmatic representations that intermediate between opaque but efficient learning methods and other techniques for reasoning that benefit from ‘symbolic’ representations.

I will begin by framing the overall problem, drawing on some of the motivations of the DARPA Explainable AI programme (under the auspices of which we will be starting a new project shortly) and on extant ideas regarding safety and dynamical properties in the control theorists’ toolbox – also noting where new techniques have given rise to new demands.

Then, I will shift focus to results from one specific project, Grounding and Learning Instances through Demonstration and Eye tracking (GLIDE), which serves as an illustration of the starting point from which we will proceed within the DARPA project. The problem here is to learn the mapping between abstract plan symbols and their physical instances in the environment, i.e., physical symbol grounding, starting from cross-modal input that provides a combination of high-level task descriptions (e.g., from a natural language instruction) and a detailed video or joint-angle signal. This problem is formulated in terms of a probabilistic generative model and addressed using an algorithm for computationally feasible inference that associates traces of task demonstration with sequences of fixations, which we call fixation programs.

I will conclude with some remarks regarding ongoing work that explicitly addresses the task of learning structured programs, and using them for reasoning about risk analysis, exploration and other forms of introspection.
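
To give a flavour of the grounding problem GLIDE tackles, the sketch below poses a drastically simplified version of it: latent plan symbols generate observed fixation targets, and Viterbi decoding recovers the most likely symbol behind each fixation. The real model and inference algorithm are considerably richer; the symbols, objects and probabilities here are all made up for illustration.

    # A deliberately tiny, hypothetical stand-in for the kind of inference GLIDE
    # performs: latent plan symbols (e.g. "pick", "place") generate observed
    # fixation targets, and we recover the most probable symbol for each
    # fixation by Viterbi decoding.

    import numpy as np

    plan_symbols = ["pick", "place"]
    fixation_targets = ["red_block", "blue_bin"]

    # P(next symbol | current symbol): a task tends to stay in one phase for a while.
    transition = np.array([[0.8, 0.2],
                           [0.1, 0.9]])
    # P(fixated object | symbol): "pick" looks mostly at the block, "place" at the bin.
    emission = np.array([[0.9, 0.1],
                         [0.2, 0.8]])
    initial = np.array([0.7, 0.3])

    def viterbi(obs_indices):
        """Most likely plan-symbol sequence for an observed fixation trace."""
        T, K = len(obs_indices), len(plan_symbols)
        logp = np.full((T, K), -np.inf)
        back = np.zeros((T, K), dtype=int)
        logp[0] = np.log(initial) + np.log(emission[:, obs_indices[0]])
        for t in range(1, T):
            for k in range(K):
                scores = logp[t - 1] + np.log(transition[:, k])
                back[t, k] = int(np.argmax(scores))
                logp[t, k] = scores[back[t, k]] + np.log(emission[k, obs_indices[t]])
        path = [int(np.argmax(logp[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(back[t, path[-1]])
        return [plan_symbols[k] for k in reversed(path)]

    # A short fixation trace: block, block, bin, bin, bin.
    trace = [0, 0, 1, 1, 1]
    print(viterbi(trace))  # ['pick', 'pick', 'place', 'place', 'place']

The point of the toy is only that, once the generative direction (symbols explain fixations) is fixed, associating a demonstration trace with a fixation program becomes a well-posed inference problem.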

Team Edina selected to Compete for Alexa Prize

My student, Emmanuel Kahembwe, is part of Team Edina – consisting of students and postdoctoral researchers from the School of Informatics at the University of Edinburgh – who are one of 12 teams competing for The Alexa Prize. The grand challenge is to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.

Let us wish them all the best and I am very curious to see what comes out of this competition!

“Beauty” as a Search Heuristic?

Through my colleague, Prof. Andrew Ranicki, I came upon this interesting interview with another distinguished colleague, Sir Michael Atiyah: https://www.quantamagazine.org/20160303-michael-atiyahs-mathematical-dreams/. The interview contains reflections on many things, including the notion of beauty in mathematics. Indeed, Atiyah has co-authored a paper based on a fascinating neuroscience study of the neural correlates of beauty: http://journal.frontiersin.org/article/10.3389/fnhum.2014.00068.

The key conclusion of this paper is that,

… the experience of mathematical beauty correlates parametrically with activity in the same part of the emotional brain, namely field A1 of the medial orbito-frontal cortex (mOFC), as the experience of beauty derived from other sources.

This in itself is cool, but it made me wonder: if we have a parametric signal in our brains, acquired only after a certain level of expertise, that correlates with how ‘beautiful’ something is, then is this signal useful as a heuristic in the search for new such objects? One of the limitations of most computational intelligent systems is precisely that they perform so poorly at exploratory search in open-ended domains. Does this signal then help us prune the search in our heads, a bit along the lines of José Raúl Capablanca’s famous, “I see only one move ahead, but it is always the correct one”?
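
Purely as a thought experiment, here is a toy best-first search in which a scalar "appeal" score plays the role speculated about above: it decides which candidates get explored first and which get cut off. The search space, target and scoring function are all invented and have nothing to do with the neuroscience result; the sketch only illustrates how such a signal could act as a pruning heuristic.

    # A toy best-first search driven by an "appeal" score that stands in for an
    # internal aesthetic signal. The search space (short strings over a tiny
    # alphabet), the target test, and the score itself are made up.

    import heapq

    ALPHABET = "ab"
    TARGET = "abba"  # the "discovery" we hope the search stumbles upon

    def appeal(candidate: str) -> float:
        """Stand-in for a learned aesthetic signal: reward symmetry, penalise length."""
        symmetry = sum(1 for i in range(len(candidate) // 2)
                       if candidate[i] == candidate[-(i + 1)])
        return symmetry - 0.1 * len(candidate)

    def best_first_search(max_expansions=200):
        frontier = [(-appeal(""), "")]   # max-heap via negated scores
        expanded = 0
        while frontier and expanded < max_expansions:
            _, cand = heapq.heappop(frontier)
            expanded += 1
            if cand == TARGET:
                return cand, expanded
            if len(cand) >= len(TARGET):
                continue                  # prune: do not grow this branch further
            for ch in ALPHABET:
                nxt = cand + ch
                heapq.heappush(frontier, (-appeal(nxt), nxt))
        return None, expanded

    found, n = best_first_search()
    print(f"found {found!r} after expanding {n} candidates")

The interesting question, of course, is whether anything like the mOFC signal could supply such a score in genuinely open-ended domains, rather than in a toy space where exhaustive search would do just as well.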

New JAIR paper: Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks

Our paper proposes a new way to account for “passivity” structure in Dynamic Bayesian Networks, which enables more efficient belief computation and, through that, improvements for systems modelled as POMDPs and the like. It was surprising to me, when we started this project, that despite significant earlier attention to exploiting conditional independence structure, there had been no work on using constraints of this kind (often imposed by physics or other background regularities) to make belief updates more efficient.

Please read the paper in JAIR: http://dx.doi.org/10.1613/jair.5044, abstract reproduced below:

Stefano V. Albrecht and Subramanian Ramamoorthy (2016) “Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks”, Journal of Artificial Intelligence Research, Volume 55, pages 1135-1178.

Dynamic Bayesian networks (DBNs) are a general model for stochastic processes with partially observed states. Belief filtering in DBNs is the task of inferring the belief state (i.e. the probability distribution over process states) based on incomplete and noisy observations. This can be a hard problem in complex processes with large state spaces. In this article, we explore the idea of accelerating the filtering task by automatically exploiting causality in the process. We consider a specific type of causal relation, called passivity, which pertains to how state variables cause changes in other variables. We present the Passivity-based Selective Belief Filtering (PSBF) method, which maintains a factored belief representation and exploits passivity to perform selective updates over the belief factors. PSBF produces exact belief states under certain assumptions and approximate belief states otherwise, where the approximation error is bounded by the degree of uncertainty in the process. We show empirically, in synthetic processes with varying sizes and degrees of passivity, that PSBF is faster than several alternative methods while achieving competitive accuracy. Furthermore, we demonstrate how passivity occurs naturally in a complex system such as a multi-robot warehouse, and how PSBF can exploit this to accelerate the filtering task.
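
For readers who want the gist without the formalism, here is a caricature in Python of the underlying idea: maintain a factored belief and simply skip the update for factors of passive variables that nothing has acted upon. This is emphatically not the PSBF algorithm itself (no DBN machinery, no error bounds); the warehouse-flavoured variables and numbers are invented for illustration.

    # Caricature of selective belief filtering: factors for "passive" variables
    # are carried over unchanged unless something explicitly acted on them.

    import numpy as np

    def normalise(p):
        return p / p.sum()

    # Two binary state variables, each with an independent belief factor [P(x=0), P(x=1)].
    belief = {
        "robot_in_aisle_3": np.array([0.5, 0.5]),   # active: can change on every step
        "shelf_7_stocked":  np.array([0.1, 0.9]),   # passive: changes only when acted upon
    }
    PASSIVE = {"shelf_7_stocked"}

    # Transition models, rows = previous value, columns = next value.
    DRIFT = np.array([[0.8, 0.2],
                      [0.3, 0.7]])                   # the robot wanders between aisles
    RESTOCK = np.array([[0.0, 1.0],
                        [0.0, 1.0]])                 # a restocking action forces "stocked"

    def filter_step(belief, acted_on=frozenset(), observed=None, likelihood=None):
        """One prediction/update step that skips factors that cannot have changed."""
        for var, b in belief.items():
            if var in PASSIVE and var not in acted_on:
                continue                              # selective update: carry factor over
            T = RESTOCK if var in PASSIVE else DRIFT
            belief[var] = normalise(b @ T)            # prediction step
        if observed is not None:
            belief[observed] = normalise(belief[observed] * likelihood)  # measurement update
        return belief

    # No one touched the shelf this step, so only the robot-position factor is updated.
    belief = filter_step(belief, observed="robot_in_aisle_3",
                         likelihood=np.array([0.3, 0.7]))
    print({k: np.round(v, 3) for k, v in belief.items()})

The saving comes from the skipped factors: in a large warehouse model, most shelves are untouched on most steps, so most of the belief can simply be carried over.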

New AIJ paper: Belief and Truth in Hypothesised Behaviours

Our work on a new model and algorithm for ad hoc coordination, based on Stefano Albrecht’s PhD thesis, has now appeared in the Artificial Intelligence Journal. As part of the publication process, we have made an easily digestible AudioSlides summary that goes with the paper:
http://audioslides.elsevier.com//ViewerLarge.aspx?source=1&doi=10.1016/j.artint.2016.02.004
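
The core idea, stripped of everything that makes the paper interesting, is to keep a posterior over a small set of hypothesised behaviours for the other agent and update it from observed actions. The sketch below shows only that Bayesian bookkeeping; the hypotheses and probabilities are invented, and the paper's actual contribution concerns what happens when none of the hypothesised behaviours is exactly right.

    # Minimal sketch: Bayesian belief over hypothesised behaviours of another agent.

    import numpy as np

    ACTIONS = ["cooperate", "defect"]

    # Each hypothesis assigns probabilities to the other agent's actions.
    hypotheses = {
        "always_cooperates": np.array([0.95, 0.05]),
        "tit_for_tat_ish":   np.array([0.70, 0.30]),
        "adversarial":       np.array([0.10, 0.90]),
    }
    posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}  # uniform prior

    def update(posterior, observed_action):
        """Bayesian update of the belief over hypothesised behaviours."""
        idx = ACTIONS.index(observed_action)
        weighted = {name: p * hypotheses[name][idx] for name, p in posterior.items()}
        total = sum(weighted.values())
        return {name: w / total for name, w in weighted.items()}

    for obs in ["cooperate", "cooperate", "defect", "cooperate"]:
        posterior = update(posterior, obs)

    # After mostly cooperative observations, mass concentrates on the cooperative hypotheses.
    print({name: round(p, 3) for name, p in posterior.items()})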

Yet Another Autonomous Fender Bender

So, the Google car has now been in an accident that was clearly its fault. For those who have not yet heard about this, see, e.g., http://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-car-may-actually-have-helped-cause-an-accident. Now, it is unfair to criticise the car on its numerical record – it has clearly covered enormous ground with no serious faults so far. Kudos!!

However, the record has been achieved by being careful and conservative. The real issue arises in a scenario which, in the words of the manager of the project, “is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.” So, the real question is when and how exactly these kinds of predictions will get integrated.

By the time the Google car really does leave the sunny confines of California for places where the driving is much more adventurous and the very rules of the road are more of a negotiated truce, near-misses will go from a single-digit tally to a routine occurrence, unless the car gains the capacity to reason explicitly about other drivers and their strategies.

This is a question we worry about quite a bit in my group, ranging from lab-scale human subject experiments, e.g.,

https://www.youtube.com/watch?v=breBAyXkVhc

to theoretical ideas for how best to achieve such interaction, e.g., http://arxiv.org/abs/1507.07688 (official version: doi:10.1016/j.artint.2016.02.004). We do not directly have the opportunity to contribute to what is running in the car, but I do hope such ideas make their way into it, to fully realise the potential of this fascinating new technology!