New JAIR paper: Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks

Our paper proposes a new way to account for “passivity” structure in Dynamic Bayesian Networks, which enables more efficient belief computations and, through that, improvements for systems modelled as POMDPs and the like. When we started this project, I was surprised to find that, despite significant earlier attention to exploiting conditional independence structure, there had been no work on using constraints of this kind (often imposed by physics or other background regularities) to make belief updates more efficient.

Please read the paper in JAIR: http://dx.doi.org/10.1613/jair.5044; the abstract is reproduced below:

Stefano V. Albrecht and Subramanian Ramamoorthy (2016) “Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian Networks”, Journal of Artificial Intelligence Research, Volume 55, pages 1135–1178.

Dynamic Bayesian networks (DBNs) are a general model for stochastic processes with partially observed states. Belief filtering in DBNs is the task of inferring the belief state (i.e. the probability distribution over process states) based on incomplete and noisy observations. This can be a hard problem in complex processes with large state spaces. In this article, we explore the idea of accelerating the filtering task by automatically exploiting causality in the process. We consider a specific type of causal relation, called passivity, which pertains to how state variables cause changes in other variables. We present the Passivity-based Selective Belief Filtering (PSBF) method, which maintains a factored belief representation and exploits passivity to perform selective updates over the belief factors. PSBF produces exact belief states under certain assumptions and approximate belief states otherwise, where the approximation error is bounded by the degree of uncertainty in the process. We show empirically, in synthetic processes with varying sizes and degrees of passivity, that PSBF is faster than several alternative methods while achieving competitive accuracy. Furthermore, we demonstrate how passivity occurs naturally in a complex system such as a multi-robot warehouse, and how PSBF can exploit this to accelerate the filtering task.
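To make the notion of selective updates a little more concrete, here is a minimal sketch of one filtering step over a factored belief, in which factors whose passivity condition is not triggered skip the (costly) prediction step. This illustrates only the general principle, not the paper’s actual PSBF algorithm; the is_active, transition, and likelihood interfaces below are hypothetical placeholders.

    # Minimal sketch of selective filtering over a factored belief.
    # Hypothetical interfaces (not the paper's code):
    #   factors[i]         : dict {value: probability} for state variable i
    #   transition[i][x]   : dict {next_value: probability}, i.e. P(x' | x)
    #   is_active(i, e)    : True if variable i's passivity condition is
    #                        triggered by the current evidence e
    #   likelihood(i, x, e): observation model P(e | x) for variable i

    def filter_step(factors, transition, evidence, is_active, likelihood):
        """One filtering step: predict only the active factors, then
        reweight all factors by the observation likelihood."""
        new_factors = []
        for i, factor in enumerate(factors):
            if is_active(i, evidence):
                # Prediction step: P(x') = sum_x P(x' | x) P(x).
                pred = {}
                for x, p in factor.items():
                    for x2, t in transition[i][x].items():
                        pred[x2] = pred.get(x2, 0.0) + t * p
            else:
                # Passive variable: its marginal cannot have changed,
                # so the prediction step is skipped entirely.
                pred = dict(factor)
            # Correction step: weight by observation likelihood, renormalise.
            weighted = {x: p * likelihood(i, x, evidence)
                        for x, p in pred.items()}
            z = sum(weighted.values())
            new_factors.append({x: p / z for x, p in weighted.items()}
                               if z > 0 else pred)
        return new_factors

The saving comes from the passive branch: when a variable’s passivity condition is not triggered, its factor is carried over unchanged rather than pushed through the transition model.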

New AIJ paper: Belief and Truth in Hypothesised Behaviours

Our work on a new model and algorithm for ad hoc coordination, based on Stefano Albrecht’s PhD thesis, has now appeared in the Artificial Intelligence Journal. As part of the publication process, we have made an easily digestible AudioSlides summary that goes with the paper:
http://audioslides.elsevier.com//ViewerLarge.aspx?source=1&doi=10.1016/j.artint.2016.02.004
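For a rough feel of the approach: the paper concerns an agent that maintains a posterior belief over a set of hypothesised behaviours (“types”) for another agent, and then acts on that belief. The sketch below shows this general scheme under simplifying assumptions; the names and interfaces are illustrative, not the paper’s implementation.

    # Sketch of type-based reasoning: belief over hypothesised behaviours.
    # Each hypothesised type is a function type(state, action) -> probability
    # that the other agent takes that action in that state (illustrative API).

    def update_posterior(prior, types, history):
        """Posterior over hypothesised types, given the other agent's
        observed (state, action) history."""
        posterior = {}
        for name, behaviour in types.items():
            like = 1.0
            for state, action in history:
                like *= behaviour(state, action)
            posterior[name] = prior[name] * like
        z = sum(posterior.values())
        return ({name: p / z for name, p in posterior.items()}
                if z > 0 else dict(prior))

    def best_response(actions, posterior, types, state, payoff):
        """Pick the action maximising expected payoff under the belief
        over the other agent's hypothesised behaviours."""
        def expected(a):
            return sum(posterior[name] *
                       sum(behaviour(state, b) * payoff(state, a, b)
                           for b in actions)
                       for name, behaviour in types.items())
        return max(actions, key=expected)

The “belief and truth” question in the title concerns what happens when none of the hypothesised types exactly matches the other agent’s true behaviour; the paper analyses this gap.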

Yet Another Autonomous Fender Bender

So, the Google car has now been in an accident that was clearly its fault. For those who have not yet heard about this, see, e.g., http://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-car-may-actually-have-helped-cause-an-accident. Now, it would be unfair to criticise the car on its numerical record: it has clearly covered enormous ground with no serious faults so far. Kudos!

However, the record has been achieved by being careful and conservative. The real issue arises in a scenario which, in the words of the manager of the project, “is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.” So, the real question is when, and how exactly, these kinds of predictions will be integrated into the system.

By the time the Google car really does leave the sunny confines of California for places where the driving is much more adventurous and the very rules of the road are more of a negotiated truce, near-misses will rise from a single-digit count to a routine occurrence unless the car gains the capacity to reason explicitly about other drivers and their strategies.

This is a question we worry about quite a bit in my group, ranging from lab-scale human subject experiments, e.g.,

https://www.youtube.com/watch?v=breBAyXkVhc

to theoretical ideas for how best to achieve such interaction, e.g., http://arxiv.org/abs/1507.07688 (official version: doi:10.1016/j.artint.2016.02.004). We do not have the opportunity to contribute directly to what is running in the car, but I do hope such ideas make their way into it, to fully realise the potential of this fascinating new technology!

Is Artificial Intelligence Safe for Humanity?

This is the title of a feature article written by RAD alumnus Stefano Albrecht in the recent issue of EUSci magazine (see p. 22).

The question is of course much talked about, with several high-profile advocates answering both yes and no. For me, the most interesting outcome of all these debates is the observation that we need to focus “not only on making AI more capable, but also on maximizing the societal benefit of AI” (words taken from an Open Letter drafted by the Future of Life Institute, and signed by many AI researchers, including yours truly). From a scientific perspective, even clearly distinguishing between these two objectives is something of an open issue!

Two talks

I will be giving two talks, to broad audiences, on Thursday, 19th November.

The first of these is at an interdisciplinary workshop on Embodied Mind, Embodied Design, aimed at connecting researchers within the university who share an interest in this general direction. The partial lineup of speakers looks interesting, ranging from psychology and health science to music. Those at the University of Edinburgh might look into this event, happening in Room G32, 7 George Square.

The second is to the Edinburgh U3A Science group. I will be speaking to this group about Interactively Intelligent Robots: Prospects and Challenges.


AimBrain

My former student, Alesis Novik, is the co-founder of the start-up company AimBrain, whose product is a biometric security layer that can be used with any data-sensitive mobile application. They are actively fundraising and winning targeted funding competitions, e.g., http://www.ed.ac.uk/informatics/news-events/recentnews/alumni-in-ubs-finals.

These are still early days, but I am quite curious about the potential of this general line of research: using personalised attributes (the ‘biometrics’) as a stable source of identity. It would certainly begin to solve the annoying problem of passwords, but does it also have the potential, in the longer term, to genuinely act as the security side of ‘connected things’?

Rule-bound robots and reckless humans

I found this article, and the associated discussion about what exactly is needed for a useful level of autonomy, really interesting: http://nyti.ms/1LRy9MF.

A point that immediately stands out is this: “Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.” Roboticists should realise that this is the real and complete problem. We cannot simply complain about humans who do not ‘behave by the book’; that is exactly the wrong way to approach the design of a usable product. Instead, we need to focus on making the autonomous system capable enough to learn and reason about the world, including other agents, despite their idiosyncrasies and irrationality. This really is the difference between the rote precision of old and the genuinely robust autonomy of the future.

In our own small way, we have been approaching such issues through ongoing projects in my group.

If you are a UK student looking to work on a PhD project in this area, look into this studentship opening: http://www.edinburgh-robotics.org/vacancy/studentship/571.