I found this article, and the associated discussion about what exactly is needed for a useful level of autonomy, really interesting: http://nyti.ms/1LRy9MF.
A point that immediately stands out is this: “Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.” Roboticists should of course recognise that this is the real and complete problem. We cannot simply complain about problem humans who do not ‘behave by the book’; that is exactly the wrong way to approach the design of a usable product. Instead, we need to focus on making the autonomous system capable enough to learn and reason about the world, including other agents, despite their idiosyncrasies and irrationality. This really is the difference between the rote precision of old and the genuinely robust autonomy of the future.
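To make the contrast concrete, here is a toy sketch of the idea. Instead of assuming other agents always follow the rules, a planner can model them as noisily rational, e.g. with a Boltzmann (softmax) choice model, and act conservatively when the modelled chance of a rule violation is non-trivial. Everything here (the scenario, the utilities, the `beta` and `risk_tolerance` values) is purely illustrative, not taken from any of the projects mentioned.

```python
import math

def noisy_rational_probs(utilities, beta=2.0):
    """Boltzmann (softmax) choice model: the agent usually picks the
    higher-utility action, but not always -- it doesn't 'behave by
    the book'. Returns a probability per action."""
    exps = [math.exp(beta * u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical scenario: an oncoming driver at a junction can 'yield'
# (the rule-book action, utility 1.0) or 'push through' (utility 0.4).
p_yield, p_push = noisy_rational_probs([1.0, 0.4])

def robust_decision(p_violation, risk_tolerance=0.1):
    """A rote planner would assume p_violation == 0; a robust planner
    proceeds only when the modelled chance of a violation is small."""
    return "proceed" if p_violation < risk_tolerance else "wait"

decision = robust_decision(p_push)
```

Under these illustrative numbers the model still assigns the other driver a roughly one-in-four chance of pushing through, so the robust planner waits, whereas a planner that assumed rule-following behaviour would proceed.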
In our own small way, we have been tackling such issues in projects like the following:
If you are a UK student looking to work on a PhD project in this area, look into this studentship opening: http://www.edinburgh-robotics.org/vacancy/studentship/571.