This is a nice book review written by Garry Kasparov, insightful in part because of the major role he played in the area. Some interesting snippets below.
On human-computer team play:
The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.
The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
On the methodology of chess agents:
Like so much else in our technology-rich and innovation-poor modern world, chess computing has fallen prey to incrementalism and the demands of the market. Brute-force programs play the best chess, so why bother with anything else? Why waste time and money experimenting with new and innovative ideas when we already know what works? Such thinking should horrify anyone worthy of the name of scientist, but it seems, tragically, to be the norm.
The Netflix Tech Blog has this very insightful piece: http://techblog.netflix.com/2011/01/how-we-determine-product-success.html.
In particular, this point is very important, although easily, and often quite deliberately, ignored by many researchers:
There is a big lesson we’ve learned here, which is that the ideal execution of an idea can be twice as effective as a prototype, or maybe even more. But the ideal implementation is never ten times better than an artful prototype. Polish won’t turn a negative signal into a positive one. Often, the ideal execution is barely better than a good prototype, from a measurement perspective.
I came across this while looking for some other information today. It is a humbling list, in that consistently escaping all of these traps really does require discipline (I interpret ‘proof’ not just as mathematical reasoning but as any scientific argument, including experimental work):
This article in the latest issue of The Economist is an interesting summary of what has become possible using computational tools based on game theory. Although I was already aware of some of the famous examples, such as the FCC spectrum auctions, I am impressed by the suggestion that these tools are finally breaking out into the real world and that consulting companies are deploying them more broadly. I wouldn’t be surprised if I were to encounter some version of the ‘negotiation assistant’ online in the near future.
Meanwhile, for a more visual description of some of these ideas, see this video clip (thanks to my student, Behzad Tabibian, for pointing me to it):
This article in the New York Times points to some interesting research on how a problem could be perceived as a whole, i.e., perception for problem solving. I like this viewpoint of trying to classify the “type” of problem even before one starts to work out the details. This clearly must play a role in any bounded rational approach!
A theme that is gaining increasing traction among designers of autonomous agents of various kinds is the idea that long-term robust behaviour requires a bank of diverse strategies, marshalled in any given instance to produce the desired decision or plan. So, if you are trying to build an autonomous robot, worry less about perfecting individual local behaviours and focus more on the large-scale structure of all the different tasks you expect to encounter, and on how you will exploit properties of this global structure. If you are building a trading agent, a bank of low-complexity rules put together can perhaps do better, in the sense of robustness, than a fancier but more delicate model.
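To make the trading-agent example concrete, here is a minimal sketch of the idea: a bank of three low-complexity rules combined by simple majority vote. The rules, thresholds, and price data are entirely hypothetical illustrations, not actual strategies; the point is only the architecture, where robustness is sought in the ensemble rather than in perfecting any single rule.

```python
# A bank of diverse, low-complexity rules combined by majority vote.
# Each rule maps a window of recent prices to a signal: +1 (buy) or -1 (sell).
# All rules and data below are hypothetical illustrations.

def momentum_rule(prices):
    """+1 if the last price rose relative to the previous one, else -1."""
    return 1 if prices[-1] > prices[-2] else -1

def mean_reversion_rule(prices):
    """-1 if the last price is above the window mean (expect a fall), else +1."""
    mean = sum(prices) / len(prices)
    return -1 if prices[-1] > mean else 1

def breakout_rule(prices):
    """+1 if the last price is a new high for the window, else -1."""
    return 1 if prices[-1] >= max(prices) else -1

RULES = [momentum_rule, mean_reversion_rule, breakout_rule]

def vote(prices):
    """Majority vote over the bank of rules: +1 buy, -1 sell.

    No single rule is trusted on its own; the hope is that the ensemble
    degrades gracefully when any one rule's assumptions break down.
    """
    total = sum(rule(prices) for rule in RULES)
    return 1 if total > 0 else -1

if __name__ == "__main__":
    rising = [100, 101, 102, 104]
    # momentum +1, breakout +1, mean-reversion -1 -> majority says buy
    print(vote(rising))  # prints 1
```

Each rule here is trivially simple and individually fallible; the design question is how the global structure (the voting scheme, and which rules to include) exploits their diversity.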
Recently, someone pointed me to work by the economist Scott Page on this theme, but in a much broader context. Here is an NYT interview summarizing his view: http://www.nytimes.com/2008/01/08/science/08conv.html. He has also written books on the topic, e.g., Diversity and Complexity, Princeton University Press, 2010. Interesting stuff.
I just read this review, by Kasparov, of a book on computer chess by Rasskin-Gutman (thanks to my friend, Vikram, for pointing me to this). It is a nice critical review and makes me want to read the book.
Two things that stood out from the article:
- Kasparov mentions an interesting experiment in which a pair of amateurs prevailed against far more experienced and seasoned grandmasters by using what he describes as “weak human + machine + better process” (which beats “strong human + machine + inferior process”). I absolutely agree, but would also note that this is not at all surprising to people actively engaged in AI research today. The notion of ‘brute force calculation’ that Kasparov discusses earlier in the article, and that most lay people associate with AI, is actually a thing of the past (or, at least, not the only or even the main game in town). Most people in AI today are quite focused on eliminating the boundary between machine and process as Kasparov imagines it.
- He concludes his article by asking why people no longer focus on the seemingly old-fashioned questions that initiated computer chess. I entirely agree with the sentiment: I too am sometimes frustrated by the level of incrementalism forced upon us by the constraints of the time, and (perversely, though I don’t think intentionally) by the successes of the time, e.g., the massive-data approach of Google, which on the face of it seems to argue for the futility of looking for deeper explanations/mechanisms of intelligence. He mentions poker as the alternative game of the times. I would, of course, add that a game like football captures everything he wants to say, and more, once we get past the naive objection that the game isn’t intellectual enough. Taken far enough, good football involves as much strategy as a good game of poker or chess, and arguably more, given the levels of incomplete and misleading information; e.g., my students and I are currently trying to develop a crisp model of the process of ‘marking’, an extremely simple behaviour but far from trivial. Football also addresses all the questions associated with Moravec’s paradox, and it disarms the sceptics who object to games played “entirely inside the computer”.