Two talks

I will be giving two talks, both aimed at broad audiences, on Thursday, 19th November.

The first of these is in an interdisciplinary workshop on Embodied Mind, Embodied Design, aimed at connecting researchers within the university who share an interest in this general direction. The partial lineup of speakers looks interesting, ranging from psychology and health science to music. Those at the University of Edinburgh might look into this event, which takes place in Room G32, 7 George Square.

The second is to the Edinburgh U3A Science group, where I will be speaking about Interactively Intelligent Robots: Prospects and Challenges.

AimBrain

My former student Alesis Novik is the co-founder of a start-up, AimBrain, whose product is a biometric security layer that can be used with any data-sensitive mobile application. They are actively fundraising, and winning targeted funding competitions, e.g.:

These are still early days, but I am quite curious about the potential of this general line of research – using personalised attributes (the ‘biometrics’) as a stable source of identity. It would certainly begin to solve the annoying problem of passwords, but does it also have the potential, in the longer term, to genuinely act as the security layer for ‘connected things’?

Rule-bound robots and reckless humans

I found this article, and the associated discussions about what exactly is needed for a useful level of autonomy, to be really interesting:

A point that immediately stands out is this: “Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.” Roboticists should, of course, realise that this is the real and complete problem – we cannot simply complain about humans who do not ‘behave by the book’, as that is exactly the wrong way to approach the design of a usable product. Instead, we need to focus on making the autonomous system capable enough to learn and reason about the world – including other agents – despite their idiosyncrasies and irrationality. This really is the difference between the rote precision of old and the genuinely robust autonomy of the future.

In our own small way, we have been addressing such issues through projects such as the following:

If you are a UK student looking to work on a PhD project in this area, look into this studentship opening:

Belief and Truth in Hypothesised Behaviours

My PhD student, Stefano Albrecht, will have his viva voce examination this Wednesday. As is the convention in some parts of our School, he will give a pre-viva talk in IF 2.33, between 10 and 11 am, on Wednesday, 19th August.

His talk abstract: This thesis is concerned with a specific class of multiagent interaction problems, called ‘ad hoc coordination problems’, wherein the goal is to design an autonomous agent which can achieve flexible and efficient interaction with other agents whose behaviours are unknown. This problem is relevant for a number of applications, such as adaptive user interfaces, electronic trading markets, and robotic elderly care. A useful method of interaction in such problems is to hypothesise a set of possible behaviours, or ‘types’, and to plan our own actions with respect to those types we believe are most likely, given the observed actions. We investigate the potential and limitations of this method in the context of ad hoc coordination by addressing a spectrum of questions pertaining to the evolution and impact of beliefs, as well as the implications and detection of incorrect hypothesised types. Specifically, how can evidence (i.e., observed actions) be incorporated into beliefs, and under what conditions will the resulting beliefs be correct? What impact do prior beliefs (before observing any actions) have on our ability to maximise payoffs in the long term, and can they be computed automatically? Furthermore, what relation must the hypothesised types have to the true types in order for us to solve our task optimally, despite inaccuracies in the hypothesised types? Finally, how can we ascertain the correctness of hypothesised types during the interaction, without knowledge of the true types? The talk will conclude with interesting open questions and future work.
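To give a flavour of the core idea, here is a minimal sketch of how observed actions can be folded into a posterior belief over hypothesised types via Bayes’ rule. This is not taken from the thesis; the type set, action probabilities, and prior below are invented purely for illustration.

```python
# Minimal sketch of belief updating over hypothesised types.
# The types, action probabilities, and prior are illustrative assumptions,
# not the thesis's actual models.

def update_belief(belief, observed_action, type_models):
    """Bayes update: P(type | action) is proportional to P(action | type) * P(type)."""
    posterior = {t: belief[t] * type_models[t].get(observed_action, 0.0)
                 for t in belief}
    total = sum(posterior.values())
    if total == 0.0:
        # No hypothesised type explains the action -- a hint that the
        # true type lies outside our hypothesis set.
        return belief
    return {t: p / total for t, p in posterior.items()}

# Hypothesised types: each maps the other agent's actions to probabilities.
type_models = {
    "cooperative": {"share": 0.8, "hoard": 0.2},
    "selfish":     {"share": 0.1, "hoard": 0.9},
}

belief = {"cooperative": 0.5, "selfish": 0.5}   # uniform prior
for action in ["share", "share", "hoard"]:      # observed actions
    belief = update_belief(belief, action, type_models)
    print(action, {t: round(p, 3) for t, p in belief.items()})
```

The sketch shows how quickly beliefs concentrate on the type that best explains the observations, and also where the method can fail: if the true behaviour is not in the hypothesis set, the posterior has nowhere sensible to go, which is precisely the correctness question the abstract raises.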

While his thesis will become available in due course, you can get an idea of the main argument from this submission to the AI Journal, entitled Belief and Truth in Hypothesised Behaviours:

Learning to be thick-skinned!

The following anecdote came in a posting to one of the mailing lists I subscribe to, on decision theory. The message, of course, is quite domain-independent, and in many ways transcends time too!

On Christmas Eve 1874, Tchaikovsky brought the score of his Piano Concerto No. 1 to Nikolai Rubinstein, the renowned pianist and conductor and founder of the Moscow Conservatory, for advice on how to make the solo part more effective. This is how Tchaikovsky remembered it:

“I played the first movement. Not a single word, not a single comment! … I summoned all my patience and played through to the end. Still silence. I stood up and asked, ‘Well?’”

“Then a torrent poured forth from Nikolai Gregorievich’s mouth… My concerto, it turned out, was worthless and unplayable – passages so fragmented, so clumsy, so badly written as to be beyond rescue – the music itself was bad, vulgar – here and there I had stolen from other composers – only two or three pages were worth preserving – the rest must be thrown out or completely rewritten…”

“‘I shall not alter a single note,’ I replied. ‘I shall publish the work exactly as it stands!’ And this I did.”

The moral of the story: if you believe in the merits of your work, don’t let a bad referee report get you down. Listen to Tchaikovsky’s Piano Concerto No. 1 to lift your spirits, and move on.