So, the Google car has now been in an accident that was clearly its fault. For those who have not yet heard about this, see, e.g., http://spectrum.ieee.org/cars-that-think/transportation/self-driving/google-car-may-actually-have-helped-cause-an-accident. Now, it would be unfair to criticise the car on its numerical record – it has clearly covered enormous ground with no serious faults so far. Kudos!
However, that record has been achieved by being careful and conservative. The real issue arises in a scenario which, in the words of the manager of the project, “is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision.” So, the real question is when and how exactly these kinds of predictions will get integrated into the car’s decision making.
By the time the Google car really does leave the sunny confines of California for places where the driving is much more adventurous and the very rules of the road are largely a negotiated truce, such incidents will rise from a single-digit number of near-misses to a much more routine occurrence, unless the car gains the capacity to reason explicitly about other drivers and their strategies.
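To make concrete what “reasoning explicitly about other drivers” could mean at its very simplest, here is a deliberately toy sketch of my own (not anything from the Google project, and with made-up likelihood numbers): maintain a belief over the other driver’s intent – will the bus yield or keep going? – and update it by Bayes’ rule as behaviour is observed, rather than baking in a fixed assumption that the other vehicle will give way.

```python
# Toy illustration of explicit intent inference about another driver.
# The intents ("yield" / "go"), observations ("slowing" / "steady"),
# and all probabilities below are invented for illustration only.

# Observation model: P(observation | intent). A driver intending to
# yield is likely to be seen slowing; one intending to go is not.
LIKELIHOOD = {
    "yield": {"slowing": 0.8, "steady": 0.2},
    "go":    {"slowing": 0.3, "steady": 0.7},
}

def update_belief(belief, observation):
    """One Bayesian update of P(intent) given an observed behaviour."""
    posterior = {
        intent: belief[intent] * LIKELIHOOD[intent][observation]
        for intent in belief
    }
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

# Start undecided, then watch the other vehicle hold a steady speed twice.
belief = {"yield": 0.5, "go": 0.5}
for obs in ["steady", "steady"]:
    belief = update_belief(belief, obs)

print(belief)  # belief in "go" now dominates (about 0.92 vs 0.08)
```

A car running even this crude inference would have downgraded its confidence that the bus was yielding as the evidence came in, instead of committing to the merge; the genuinely hard part, of course, is the strategic layer on top, where each party’s prediction of the other feeds back into its own action.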
This is a question we worry about quite a bit in my group, ranging from lab-scale human subject experiments, e.g.,
to theoretical ideas for how best to achieve such interaction, e.g., http://arxiv.org/abs/1507.07688 (official version: doi:10.1016/j.artint.2016.02.004). We do not have a direct opportunity to contribute to what is running in the car, but I do hope such ideas make their way into it, to fully realise the potential of this fascinating new technology!