Team Edina Selected to Compete for Alexa Prize

My student, Emmanuel Kahembwe, is part of Team Edina – made up of students and postdoctoral researchers from the School of Informatics at the University of Edinburgh – one of 12 teams competing for the Alexa Prize. The grand challenge is to build a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.

Let us wish them all the best – I am very curious to see what comes out of this competition!

New AIJ paper: Belief and Truth in Hypothesised Behaviours

Our work on a new model and algorithm for ad hoc coordination, based on Stefano Albrecht’s PhD thesis, has now appeared in the Artificial Intelligence Journal. As part of the publication process, we have made an easily digestible AudioSlides summary that goes with the paper:
http://audioslides.elsevier.com//ViewerLarge.aspx?source=1&doi=10.1016/j.artint.2016.02.004

Yet Another Autonomous Fender Bender

So, the Google car has now been in an accident that was clearly its fault. For those who have not yet heard about this, see, e.g., http://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-car-may-actually-have-helped-cause-an-accident. Now, it is unfair to criticise the car given its numerical record – it has clearly covered enormous ground with no serious faults so far. Kudos!

However, the record has been achieved by being careful and conservative. The real issue arises in a scenario which, in the words of the manager of the project, “is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.” So, the real question is when and how exactly these kinds of predictions will get integrated.

By the time the Google car really does leave the sunny confines of California for places where the driving is much more adventurous and the very rules of the road are more of a negotiated truce, near-misses will rise from single digits to a routine occurrence, unless the car gains the capacity to reason explicitly about other drivers and their strategies.

This is a question we worry about quite a bit in my group, ranging from lab-scale human subject experiments, e.g.,

https://www.youtube.com/watch?v=breBAyXkVhc

to theoretical ideas for how best to achieve such interaction, e.g., http://arxiv.org/abs/1507.07688 (official version: doi:10.1016/j.artint.2016.02.004). We do not have the opportunity to contribute directly to what is running in the car, but I do hope such ideas make their way into it, to fully realise the potential of this fascinating new technology!

Are you doing what I think you are doing?

This is the title of a paper by my student, Stefano Albrecht, which we have recently submitted to a conference. The core idea is to address model criticism, as opposed to the better-studied concept of model selection, within the multi-agent learning domain.

For Informatics folks, he is giving a short talk on this paper on Friday at noon, to the Agents group (in IF 2.33). The abstract is below.

 The key to effective interaction in many multi-agent applications is to reason explicitly about the behaviour of other agents, in the form of a hypothesised behaviour. While there exist several methods for the construction of a behavioural hypothesis, there is currently no universal theory which would allow an agent to contemplate the correctness of a hypothesis. In this work, we present a novel algorithm which decides this question in the form of a frequentist hypothesis test. The algorithm allows for multiple metrics in the construction of the test statistic and learns its distribution during the interaction process, with asymptotic correctness guarantees. We present results from a comprehensive set of experiments, demonstrating that the algorithm achieves high accuracy and scalability at low computational costs.
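
To give a flavour of the idea, here is a minimal sketch in Python of testing a hypothesised behaviour against observed actions. To be clear, this is my own illustration and not the paper's algorithm: the function, the choice of average log-likelihood as the test statistic, and the way the null distribution is learned by sampling from the hypothesis itself are all simplifying assumptions.

    import numpy as np

    def test_behavioural_hypothesis(observed_actions, states, policy,
                                    n_samples=1000, alpha=0.05, rng=None):
        """Frequentist test of a hypothesised behaviour (illustrative sketch).

        policy(state) -> probability vector over actions (the hypothesis).
        """
        rng = rng or np.random.default_rng()
        probs = np.array([policy(s) for s in states])  # shape (T, num_actions)
        idx = np.arange(len(states))
        # Test statistic: mean log-likelihood of the actions actually observed.
        stat = np.mean(np.log(probs[idx, observed_actions]))
        # Learn the statistic's null distribution by sampling actions
        # from the hypothesis itself at the same states.
        null = np.empty(n_samples)
        for i in range(n_samples):
            sampled = np.array([rng.choice(len(p), p=p) for p in probs])
            null[i] = np.mean(np.log(probs[idx, sampled]))
        # One-sided p-value: how often the hypothesis scores itself this low.
        p_value = np.mean(null <= stat)
        return p_value, p_value >= alpha  # retain hypothesis if not rejected

The paper's actual contribution is considerably more general – multiple score metrics combined into one statistic, and asymptotic correctness guarantees – but the skeleton above conveys the frequentist flavour of the test.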

Intention prediction among goalkeepers

How do goalkeepers ever manage to save penalty shots, beating the striker in the incredibly short time available?

This brief video in the online version of The Economist outlines several quite different aspects of their thought process, not to mention reflexes that are not fully conscious. Even at this cursory level, it is interesting how many different modalities are involved – learning from the striker’s historical kicks, randomised strategies in the spirit of game theory, face processing to extract subtle cues, psychological intimidation, etc.
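
The game-theoretic piece, at least, is easy to make concrete. Below is a small sketch of the penalty kick as a 2x2 zero-sum game, solved for the mixed-strategy equilibrium at which both players must randomise; the scoring probabilities in the matrix are made up purely for illustration.

    import numpy as np

    # Kicker's scoring probabilities (made-up numbers for illustration):
    # rows = kicker aims Left/Right, cols = keeper dives Left/Right.
    M = np.array([[0.55, 0.95],
                  [0.90, 0.60]])

    # In a 2x2 zero-sum game, each player mixes so as to make the
    # opponent indifferent between their two actions.
    denom = M[0, 0] - M[0, 1] - M[1, 0] + M[1, 1]
    p_kick_left = (M[1, 1] - M[1, 0]) / denom  # kicker's equilibrium mix
    q_dive_left = (M[1, 1] - M[0, 1]) / denom  # keeper's equilibrium mix
    value = q_dive_left * M[0, 0] + (1 - q_dive_left) * M[0, 1]

    print(f"kicker aims left with probability {p_kick_left:.2f}")   # 0.43
    print(f"keeper dives left with probability {q_dive_left:.2f}")  # 0.50
    print(f"equilibrium scoring probability {value:.2f}")           # 0.75

With these numbers, neither player can improve by deviating: whatever the keeper does, the kicker scores 75% of the time, and vice versa – which is exactly why the observed randomisation is rational.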

My student, Aris Valtazanos, and I wondered about this problem in one of our papers associated with our football-playing robots, but clearly we are unable to capture this whole variety of interactive intelligence. It would be cool to one day have agents that can actually function at this level!