Intention prediction among goalkeepers

How do goalkeepers ever manage to save penalty shots, beating the striker in the incredibly short time available?

This brief video in the online version of The Economist outlines several quite different aspects of their thought process, not to mention the reflexes that are not explicitly conscious. Even at this cursory level, it is interesting how many different modalities are involved – learning from the striker’s historical kicks, randomised strategies in the spirit of game theory, face processing to extract subtle cues, psychological intimidation, etc.
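The game-theoretic piece of this is easy to make concrete. Treating the penalty as a tiny zero-sum game – kicker aims left or right, keeper dives left or right – both players should randomise so the opponent cannot profit from predicting them. A minimal sketch, with entirely hypothetical scoring probabilities (these numbers are illustrative, not from our paper or the video):

```python
# Hypothetical 2x2 zero-sum penalty game.
# Rows = kicker's aim (left, right); columns = keeper's dive (left, right);
# entries = probability the shot scores. All numbers are invented.
A = [[0.55, 0.95],   # kicker aims left
     [0.93, 0.60]]   # kicker aims right

den = A[0][0] - A[0][1] - A[1][0] + A[1][1]

# Keeper's equilibrium mix: choose q = P(dive left) so the kicker is
# indifferent between aiming left and right:
#   q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
q = (A[1][1] - A[0][1]) / den

# Kicker's equilibrium mix p = P(aim left), by the symmetric condition.
p = (A[1][1] - A[1][0]) / den

# Value of the game: scoring probability when both randomise optimally.
v = q * A[0][0] + (1 - q) * A[0][1]

print(f"keeper dives left with prob {q:.3f}")   # ~0.479
print(f"kicker aims left with prob {p:.3f}")    # ~0.452
print(f"equilibrium scoring probability {v:.3f}")
```

The point of the equilibrium is precisely the one the video makes: any deterministic policy – always dive to the kicker’s stronger side, say – is exploitable, so the keeper’s best option is a calibrated coin flip.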

My student, Aris Valtazanos, and I explored this problem in one of the papers associated with our football-playing robots, but clearly we were unable to capture this whole variety of interactive intelligence. It would be cool if one day we had agents that could actually function at this level!

 

Chess Metaphors

This is a nice book review written by Garry Kasparov, insightful in part because of the major role he played in the area. Some interesting snippets below.

On human-computer team play:

The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

On the methodology of chess agents:

Like so much else in our technology-rich and innovation-poor modern world, chess computing has fallen prey to incrementalism and the demands of the market. Brute-force programs play the best chess, so why bother with anything else? Why waste time and money experimenting with new and innovative ideas when we already know what works? Such thinking should horrify anyone worthy of the name of scientist, but it seems, tragically, to be the norm.

 

Eugene Goostman and the Turing Test

I just heard, through this article by Gary Marcus in The New Yorker, about a program that participated in one of the conversational Turing Test competitions.

Nobody who seriously works on any aspect of AI would be genuinely surprised by this. However, it is a useful reminder of exactly ‘how much’ AI one needs in any practical application. There remain many hard problems, e.g., natural language understanding at a human-competitive level. However, there are many applications where the bar is actually really low, e.g., as Gary Marcus notes in the above article,

If Goostman can fool a third of its judges, the creation of convincing computer-based characters in interactive games—the next generation of Choose Your Own Adventure storytelling—may be a lot easier than anyone realized.

Over the years, I have been surprised, and a tad disappointed, at how many applications that could be so stimulating for AI research are actually cracked by simplistic and naive methods with just a bit of clever wrapping (like the game AI comment above). I wonder if there are any genuinely intermediate-level problems – something not trivially solvable by naive methods (e.g., the game AI in many apps today), yet not as steep as ‘full’ NLU?
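To see how little machinery the low end of that spectrum needs, here is a deliberately naive, ELIZA-style responder: a handful of regex rules plus canned deflection, the sort of “clever wrapping” that carries many chatbots. Everything here – the rules, the replies – is invented for illustration, and is of course nothing like how Goostman itself was built:

```python
import random
import re

# A few shallow pattern -> reply rules; {0} is filled from the regex group.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI (?:like|love) (.+)", re.I),
     ["What do you enjoy most about {0}?"]),
    (re.compile(r"\?$"),
     ["Why do you ask?", "What do you think?"]),
]

# When nothing matches, deflect -- the oldest trick in the chatbot book.
DEFLECTIONS = ["I see.", "Tell me more.", "Interesting, go on."]

def respond(utterance, rng=random):
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            return rng.choice(replies).format(*match.groups())
    return rng.choice(DEFLECTIONS)

print(respond("I am worried about the Turing Test"))
```

No parsing, no semantics, no memory – and yet, wrapped in a plausible persona (a thirteen-year-old non-native speaker, say), this kind of thing is evidently enough to fool a third of a judging panel.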

The Longitude Prize

The Longitude Prize is an interesting concept. Way back in 1714, it was an award for the person(s) who best solved the difficult problem of determining a ship’s longitude – in that day and age a real grand challenge, one that literally determined the safety and lives of many people!

Today, the problems of clocks and chronometers may seem quaint, but the notion of defining some big problems that need solving remains interesting. All the more so, in my local context, given the short-termism of so many sources of funding that are realistically available to researchers.
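The chronometer solution itself is quaint precisely because the underlying arithmetic is so simple: the Earth rotates 15 degrees per hour, so the gap between local solar time (read off the sun) and the reference time kept by the ship’s clock gives longitude directly. The hard part was building a clock that kept that reference time at sea. A sketch, with illustrative times:

```python
# Longitude from a chronometer: the Earth turns 360 degrees in 24 hours.
DEGREES_PER_HOUR = 360 / 24  # 15

def longitude(greenwich_hours, local_solar_hours):
    """Longitude in degrees: positive = east of Greenwich, negative = west."""
    return (local_solar_hours - greenwich_hours) * DEGREES_PER_HOUR

# Local noon is observed while the chronometer (set at Greenwich) reads 14:30,
# so the ship is 2.5 hours "behind" the sun at Greenwich, i.e. west of it.
print(longitude(greenwich_hours=14.5, local_solar_hours=12.0))  # → -37.5
```

That a national grand challenge reduced, in the end, to one subtraction and one superb piece of engineering is perhaps part of the prize model’s appeal.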

So, today’s announcement of a new version of this prize is interesting indeed. The candidate challenges are all big societal issues, but one of them stands out in particular as having something to do with problems I am professionally interested in: a challenge associated with dementia – how can we help people with dementia live independently for longer?

Assisted-living advocates, including some within the robotics and AI communities, have built systems addressing this problem before, but can all that be lifted to the standards of the Longitude Prize? What substantial questions remain unanswered in this area? Useful things to ponder…

Lighthill on AI, and some questions that remain open!

As an AI researcher, and a faculty member at Edinburgh, where some of the most interesting episodes in the history of AI were enacted, I find the Lighthill report quite fascinating, and I occasionally come back to it when thinking about the big questions.

One interesting point is that many of the things he criticised so strongly, e.g., machine translation, have turned out to be the areas where AI has made the most progress in subsequent decades. Some of his other critiques, such as his scepticism about data-driven approaches (spoken like the applied mathematician he was), have turned out exactly the other way – the approaches he doubted have become hallmarks of a methodology that has come to define our age.

There is one observation, however, that he rightly makes, that continues to remain a blind spot for the research community:

We must remember, rather, that the intelligent problem solving and eye-hand co-ordination and scene analysis capabilities that are much studied in category B represent only a small part of the features of the human CNS that give the human race its uniqueness. It is a truism that human beings who are very strong intellectually but weak in emotional drives and emotional relationships are singularly ineffective in the world at large. Valuable results flow from the integration of intellectual ability with the capacity to feel and to relate to other people; until this integration happens problem solving is no good because there is no way of seeing which are the right problems. These remarks have been included to make clear that the over-optimistic category-B-centred view of AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.

He is right, and we still don’t pay nearly enough attention to this. Perhaps it is time we did, especially given the remarkable opportunities created by advances in allied areas ranging from the experimental neurosciences to cognitive psychology?