Can we take games seriously?

It has been a little while since I posted here. I've been busy with a whole variety of things; most recently, I was at RoboCup in Graz, Austria. My student Ioannis Havoutis was presenting a paper at the symposium, but that was partly a pretext for us to visit the event and get a sense of whether we could enter a team in the next year or so.

I was quite impressed by what I saw at RoboCup. This is a many-hued event in terms of the kinds of skills displayed by robots. The competitions range from the 2D simulation league, which is all about the strategic issues involved in the interaction of teams of agents, to the Standard Platform League (SPL), which involves real physical humanoid robots that must come to terms with all aspects of perception, action and behaviour. It is safe to say that the SPL is still quite far from exploring team-level strategic issues in the way the 'simpler' leagues do.

People often ask why one should expend this much effort on what seems like a fairly impractical affair. In practice, a good RoboCup team involves a minimum of half a dozen bright students and a faculty member or two, who together spend many man-months putting together the autonomous systems. Why? Some of my more theoretically inclined colleagues pooh-pooh such efforts and wonder whether the time couldn't be better spent.

Of course, like many other AI researchers, I disagree. I see real value in building autonomous agents targeted at goals so ambitious that they can sometimes seem overly so. For instance, the process of building an autonomous agent that can perform the complete stack of tasks, from low-level perception through world and opponent modelling to strategy synthesis, fully autonomously, is intricately bound up with the goal of understanding what it means to build a human-competitive intelligent system.

I often hear a counterargument to this latter point, again from fairly theoretically minded colleagues, that these kinds of games do not really represent intelligence in the way some other 'hard' problems do. I have a number of counterpoints of my own, but it is perhaps not worth rehashing them here. Suffice it to say (for the record) that I highly value theoretical ideas and conceptual innovation but simply fail to see the disconnect between them and the challenging practical problems under discussion.

In this context, I was pleasantly surprised to see this excellent post by the eminent mathematician Tim Gowers about our difficulty in modelling such games and the strategic choices therein. If you read it carefully, you notice that many of the thorny assumptions and simplifications turn on the question of what precisely is the game being played. Coming to terms with that, in a principled and general way, is at the heart of what separates AI from other 'traditional' problem-solving methodologies; it is also the reason why learning (albeit not in the narrow sense in which the term is sometimes used these days) plays a central role in these problems. Gowers ends his post with the comment, “I do not for one moment think that anything in this post, even when fully developed, would be of the slightest use to a tennis player.” He may be right about human players, but answering these questions in a principled way, and developing algorithmic procedures for learning such strategies from experience, could be of great value to robotic players, who may someday go on to help us understand what the humans are doing. That would be a scientific breakthrough of the highest calibre!
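To make the strategy-learning part of this concrete, here is a minimal sketch in Python. It assumes a toy 2x2 zero-sum version of the serving problem Gowers discusses: the server aims wide or down the tee, the receiver anticipates one or the other, and each side adapts to the other's observed frequencies via fictitious play. The payoff numbers and action names are entirely invented for illustration; none of this comes from his post.

```python
# A toy, fully made-up model of the serve: entries give the server's
# probability of winning the point for each (serve, guess) pair.
PAYOFF = {
    ("wide", "wide"): 0.4,  # receiver guessed correctly
    ("wide", "tee"): 0.8,   # receiver wrong-footed
    ("tee", "wide"): 0.7,
    ("tee", "tee"): 0.3,
}
ACTIONS = ["wide", "tee"]

def best_response(opponent_counts, as_server):
    """Best-respond to the opponent's empirical action frequencies."""
    total = sum(opponent_counts.values())

    def expected_win(my_action):
        # The receiver's win probability is one minus the server's.
        return sum(
            (PAYOFF[(my_action, a)] if as_server else 1.0 - PAYOFF[(a, my_action)])
            * opponent_counts[a] / total
            for a in ACTIONS
        )

    return max(ACTIONS, key=expected_win)

# Fictitious play: each side tracks the other's empirical mix and plays
# a best response to it; in a two-player zero-sum game, the empirical
# frequencies converge to a mixed-strategy equilibrium.
server_counts = {a: 1 for a in ACTIONS}
receiver_counts = {a: 1 for a in ACTIONS}

for _ in range(20000):
    serve = best_response(receiver_counts, as_server=True)
    guess = best_response(server_counts, as_server=False)
    server_counts[serve] += 1
    receiver_counts[guess] += 1

n = sum(server_counts.values())
m = sum(receiver_counts.values())
print("server mix:  ", {a: round(server_counts[a] / n, 3) for a in ACTIONS})
print("receiver mix:", {a: round(receiver_counts[a] / m, 3) for a in ACTIONS})
```

For these invented payoffs, the analytic mixed equilibrium has the server mixing 50/50 and the receiver covering the wide serve about 62% of the time, and the empirical frequencies printed above approach those values. The real difficulty, of course, is that a robot player must also learn the payoff matrix, and indeed the action set itself, from raw experience; that is precisely the 'what game is being played' question.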
