(Next) 100 Years of AI

There is a very nice initiative hosted at Stanford University, launching a broad-based study of the progress and impact of AI: http://cacm.acm.org/news/181386-stanford-to-host-100-year-study-on-artificial-intelligence/fulltext

A somewhat fuller description of the aims and scope of this study can be found here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar

I like the fact that this list of topics is suitably expansive and forward-looking. Some of these themes are quite close to home, e.g., "Collaborations with machines" and "Psychology of humans". There is also appropriate recognition of philosophically basic issues around AI, something that is constantly being pushed into a corner by the more short-sighted views of people who seem to be driving policy in funding agencies and so on.

Sensors on surfers

This video from Red Bull Sports describes a very interesting array of technology, sensing modalities to be precise, applied to the problem of better understanding the athletes’ physiology and performance:

http://www.redbull.com/us/en/surfing/stories/1331686821979/this-is-what-happens-when-scientist-go-surfing

[Photo: Jake Marshall in action, Surf Science. © Seth de Roulet/Red Bull Content Pool]

Many of these sensors are becoming increasingly common. Certainly, in robotics labs like mine, we routinely use motion tracking technology of various kinds, connecting tracked traces to motion analysis and so on. The way the Red Bull team have used pressure-sensing footwear, and the way they have set up the UAV to do personal tracking, are handled very nicely.

Beyond that, although we are also beginning to play with eye tracking and EEG, I have not yet seen anyone apply them in such a physically demanding environment. I'd love to know more about how the technology was actually deployed, as we are keen to do similar things in our own applications. Of course, the video says little about what was actually obtained from the analysis; the EEG sensor, it appears, is only able to get a very coarse frequency-domain measurement of alpha waves. Are they actually able to get meaningful interpretations from this sensor? For that matter, are they able to get genuinely useful insights from the SMI eye tracker?
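To illustrate the kind of coarse measure I mean, here is a minimal Python sketch, with an entirely synthetic signal and made-up sampling parameters (nothing to do with the actual rig in the video), that estimates relative alpha-band power from a single EEG channel:

```python
import numpy as np
from scipy.signal import welch

# Synthetic single-channel "EEG" sampled at 256 Hz: a 10 Hz alpha-like
# oscillation buried in noise. Purely illustrative.
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Welch power spectral density, then integrate the 8-12 Hz alpha band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
alpha = (freqs >= 8) & (freqs <= 12)
relative_alpha = np.trapz(psd[alpha], freqs[alpha]) / np.trapz(psd, freqs)

print(f"relative alpha-band power: {relative_alpha:.2f}")
```

This is only a sketch of the band-power idea, and it says nothing about the harder part, namely how one would deal with motion artefacts out on the water.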

Cool stuff!

The deal with the devil

In many of my papers, we have been inspired by and found use for geometric methods, as have many other roboticists. One reason I like geometry is the naturalness of that language for describing properties of dynamics – again something physicists have long known and roboticists have likewise come to understand. However, a constant frustration has been the difficulty of efficiently computing with geometric objects, especially when the focus is on abstraction (e.g., global properties of manifolds) rather than, say, efficient location of points in a planar region. In our most recent paper, at R:SS 2014, we found that the computational barriers begin to come down when you bring in more tools from algebra – we used matrix computations based on an algebraic topological formulation of what we were after, and that became quite efficient.
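To give a flavour of what I mean by matrix computations coming out of an algebraic-topological formulation (this is only a toy sketch of the general idea, in Python, and not the construction from our paper), here are a few lines that recover the Betti numbers of a hollow triangle from the ranks of its boundary matrices:

```python
import numpy as np

# Toy complex: a hollow triangle (three vertices, three edges, no filled face).
# Columns of d1 are the edges (0,1), (1,2), (0,2); rows are the vertices.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0  # no 2-simplices, so the next boundary map is trivial

betti_0 = d1.shape[0] - rank_d1            # number of connected components
betti_1 = d1.shape[1] - rank_d1 - rank_d2  # number of independent 1-cycles

print(betti_0, betti_1)  # -> 1 1: one component, one "hole"
```

The point is simply that global, qualitative properties of the space reduce to ranks of sparse matrices, which is exactly the sort of thing computers handle efficiently.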

In this context, I found the following quote very interesting:

Algebra is the offer made by the devil to the mathematician. The devil says: “I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.”

- M. Atiyah, “Mathematics in the 20th century,” in Mathematical Evolutions. Providence, RI: Mathematical Association of America, 2002.

Lessons from a robotics entrepreneur

Due to a variety of activities at work, I have been following some of the alumni of Willow Garage. In part, I am curious about what they are doing in terms of products and technologies, in light of what they learnt from building and popularising robots like the PR2.

Recently, I came across this post by Steve Cousins, who is now the head of Savioke, in which he takes on the question of lessons learnt:

http://www.savioke.com/blog/2014/7/23/top-ten-things-i-learned-at-willow-garage

It is clearly written from the industry perspective, but I think it is highly relevant to academics too. After all, much of the recent attention we are getting as a field is not so much because we have created something new that simply did not exist 20 years back. Instead, it is because some of the technologies in our field are finally looking close to being commercially relevant.

Winograd schemas

Through some conversations at AAAI last week, I came to know about Winograd schemas, and about a challenge based on their interpretation that many consider a more fitting test than the chatbot versions of the Turing Test, which have repeatedly been shown to be easy to crack without really solving the main problem. A Winograd schema is based on a pair of sentences that differ in only one or two words, where that small change radically alters the meaning, so that resolving the reference requires a deeper 'commonsense' understanding.
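To make the format concrete, here is a small sketch, in Python, of what one schema pair could look like as data; the sentences are the well-known trophy/suitcase example from Levesque's formulation of the challenge, and the field names are just my own illustration:

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    """One half of a schema pair: sentence, pronoun, candidate referents, answer."""
    sentence: str
    pronoun: str
    candidates: tuple
    answer: str

# The classic pair: changing a single word ("big" -> "small") flips the referent.
pair = (
    WinogradSchema(
        sentence="The trophy doesn't fit into the brown suitcase because it is too big.",
        pronoun="it", candidates=("the trophy", "the suitcase"), answer="the trophy"),
    WinogradSchema(
        sentence="The trophy doesn't fit into the brown suitcase because it is too small.",
        pronoun="it", candidates=("the trophy", "the suitcase"), answer="the suitcase"),
)

for schema in pair:
    print(schema.sentence, "->", schema.answer)
```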

This challenge has now been picked up by Nuance, who see substantial benefits to their own products from this level of AI. This is a positive thing.

I wonder if we can also define equally crisp and well-defined challenges that bring out the need for intelligence in the area of robotics. The nice thing about the Nuance Winograd challenge is that it gets at the heart of the issue in a bite-sized fashion, allowing anyone with a keen mind and some bandwidth to potentially contribute. In contrast, most robotics challenges out there today are as much a test of people's political and management skills in assembling the resources to compete as they are a test of scientific ideas, and even then, they often seem to be set up in ways that do not really address these kinds of core AI issues. It'd be a lot of fun to find more bite-sized yet deep and significant open problems within robotics.

Intention prediction among goalkeepers

How is it that goalkeepers ever manage to save any penalty shots, beating the striker in the incredibly little time available?

This brief video in the online version of The Economist outlines a few quite different attributes of their thought process, not to mention reflexes that are not so explicitly conscious. Even at this cursory level, it is interesting how many different modalities are involved: learning from the striker's historical kicks, randomised strategies in the spirit of game theory, face processing to extract subtle cues, psychological intimidation, etc.
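To make the game-theoretic bit concrete, here is a toy Python sketch of the standard mixed-strategy view of the penalty kick; the scoring probabilities are invented for illustration and are not taken from the video or any real dataset:

```python
import numpy as np

# Rows: kicker aims left / right.  Columns: keeper dives left / right.
# Entries: probability the kick scores (illustrative numbers only).
A = np.array([[0.58, 0.95],
              [0.93, 0.70]])

# With no pure-strategy saddle point, each player randomises so as to make the
# other indifferent between their two options (the 2x2 zero-sum equilibrium).
denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
p_kick_left = (A[1, 1] - A[1, 0]) / denom   # kicker's probability of aiming left
q_dive_left = (A[1, 1] - A[0, 1]) / denom   # keeper's probability of diving left
value = p_kick_left * A[0, 0] + (1 - p_kick_left) * A[1, 0]  # scoring prob. at equilibrium

print(f"kick left {p_kick_left:.2f}, dive left {q_dive_left:.2f}, P(goal) {value:.2f}")
```

Even this crude model already says that the keeper's best play is itself randomised, which hints at why the split-second decision is so hard to reason about.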

My student, Aris Valtazanos, and I wondered about this problem in one of our papers on our football-playing robots, but clearly we are unable to capture this whole variety of interactive intelligence. It will be cool when, one day, we have agents that can actually function at this level!