This is the primary question shaping much of the public debate and discourse around the development of autonomous systems technologies, with robots the most visible and eye-catching example – occasionally quite a scary one when you see them used, e.g., as carriers of weapons and bombs. As a roboticist who is often deeply ensconced in the technical side of developing capabilities, I find most articles in the popular media to be ill-informed and hyperbolic. So, I was pleasantly surprised to read this McKinsey report. This is not quite popular media, but in the past I have found even some of these consulting-company reports to be formulaic. The key point the authors make is that just because something can be automated does not mean it should be, or that it will be. In reality, a variety of factors, economic and social among them, will shape the path of these technologies – something all of us would do well to consider pragmatically.
This key point is summarised in the following excerpt:
Technical feasibility is a necessary precondition for automation, but not a complete predictor that an activity will be automated. A second factor to consider is the cost of developing and deploying both the hardware and the software for automation. The cost of labor and related supply-and-demand dynamics represent a third factor: if workers are in abundant supply and significantly less expensive than automation, this could be a decisive argument against it. A fourth factor to consider is the benefits beyond labor substitution, including higher levels of output, better quality, and fewer errors. These are often larger than those of reducing labor costs. Regulatory and social-acceptance issues, such as the degree to which machines are acceptable in any particular setting, must also be weighed. A robot may, in theory, be able to replace some of the functions of a nurse, for example. But for now, the prospect that this might actually happen in a highly visible way could prove unpalatable for many patients, who expect human contact. The potential for automation to take hold in a sector or occupation reflects a subtle interplay between these factors and the trade-offs among them.
My student, Stavros Gerakaris, has been working on applying our multi-agent learning ideas to the domain of Ad Exchanges. He is participating in the Sixteenth Annual Trading Agent Competition (TAC-15), conducted as part of AAMAS-15. His entry, entitled Edart, finished 2nd among this year's participants and 5th overall; last year's winners still did better than all of us. This earns us a spot in the finals. If you'd like to know more about the background and setup of this competition, see this paper by Mariano Schain and Yishay Mansour.
People familiar with AMEC/TADA will realise that the main objective of these competitions is to try out our original ideas in a demanding, open-ended domain. In this sense, I am especially pleased that this agent has begun to validate our more theoretical work in the form of the Harsanyi-Bellman Ad Hoc coordination algorithm, originally developed by Stefano Albrecht, which Stavros is using in a partially observable and censored-observation setting. In due course, this work will appear as a publication, so watch our publications list.
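To give a flavour of the core idea behind Harsanyi-Bellman Ad Hoc coordination, here is a minimal toy sketch, not the actual Edart agent: maintain a Bayesian posterior over a set of hypothesised opponent types, then choose the action with the highest expected payoff under that posterior. The type names, action names, and payoff numbers below are all invented for illustration.

```python
# Toy sketch of type-based ad hoc coordination: Bayes update over
# hypothesised opponent types, then expected-payoff maximisation.
# All names and numbers here are illustrative, not from the real agent.

OPPONENT_TYPES = {
    # each hypothesised type is a fixed policy: action -> probability
    "aggressive": {"raise": 0.8, "hold": 0.2},
    "cautious":   {"raise": 0.2, "hold": 0.8},
}

# our payoff for (our action, opponent action) -- toy numbers
PAYOFF = {
    ("match", "raise"):  1.0, ("match", "hold"): -0.5,
    ("wait",  "raise"): -1.0, ("wait",  "hold"):  0.5,
}

def update_belief(belief, observed_action):
    """Bayes rule: P(type | action) is proportional to P(action | type) P(type)."""
    post = {t: belief[t] * OPPONENT_TYPES[t][observed_action] for t in belief}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def best_response(belief):
    """Action maximising expected payoff under the current belief over types."""
    def expected(our_action):
        return sum(
            belief[t] * OPPONENT_TYPES[t][opp] * PAYOFF[(our_action, opp)]
            for t in belief for opp in OPPONENT_TYPES[t]
        )
    return max(["match", "wait"], key=expected)

belief = {"aggressive": 0.5, "cautious": 0.5}
belief = update_belief(belief, "raise")  # posterior now favours "aggressive"
print(best_response(belief))             # -> match
```

The full algorithm additionally performs Bellman-style lookahead over future interactions rather than the one-step expectation shown here, and the censored-observation setting Stavros works in means the observed actions themselves may be only partially visible.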
I came across this in my twitter feed today, and find it really interesting. Gartner, Inc., a leading research and advisory company, routinely puts together a report evaluating the relative readiness and ‘hype level’ of various technologies. The 2013 report includes the following nice visualisation.
As is perhaps the case for most academic researchers, many of the things I am working on fall in the innovation trigger category, inching into the peak of inflated expectations. It is worth keeping in mind the cautionary note that this only means we are a few years from the ‘trough of disillusionment’ – so, unless we have things to show by then, the reception could be rather cold!
In the eloquent words of Nietzsche:
“Oh heaven over me, pure and high! That is what your purity is to me now…that to me you are a dance floor for divine accidents, that you are to me a divine table for divine dice and dice players. But you blush? Did I speak the unspeakable?”
– Also Sprach Zarathustra, Third Part, Before Sunrise.
I quite liked this list, which was Thomas Sargent’s graduation speech at Berkeley (thanks to Rahul Savani for pointing me to it). The most important point, of course, is that “Economics is organized common sense”.
- Many things that are desirable are not feasible.
- Individuals and communities face trade-offs.
- Other people have more information about their abilities, their efforts and their preferences than you do.
- Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.
- There are trade-offs between equality and efficiency.
- In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well meaning outsiders to change things for better or worse.
- In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.
- Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.
- It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).
- When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.
- Most people want other people to pay for public goods and government transfers (especially transfers to themselves).
- Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.
This article in the latest issue of The Economist is an interesting summary of what has become possible using computational tools based on game theory. Although I was already aware of some of the famous examples, such as the FCC spectrum auctions, I am impressed by the suggestion that these tools are finally breaking out into the real world and that consulting companies are deploying them more broadly. I wouldn’t be surprised if I were to encounter some version of the ‘negotiation assistant’ online in the near future.
Meanwhile, for a more visual description of some of these ideas, see this video clip (thanks to my student, Behzad Tabibian, for pointing me to it):
A theme that is increasingly gaining traction among designers of autonomous agents of various kinds is the idea that long-term robust behavior requires a bank of diverse strategies, marshaled in any given instance to produce the desired decision or plan. So, if you are trying to build an autonomous robot, worry less about perfecting individual local behaviors and focus more on the large-scale structure of all the different tasks you expect to encounter, and on how you will exploit properties of this global structure. If you are a trading agent, perhaps a bank of low-complexity rules put together can do better (in the sense of robustness) than a fancier but more delicate model.
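One standard way to make this concrete, offered here only as an illustrative sketch rather than anything from the work above, is the multiplicative-weights (Hedge) scheme: keep a weight on each simple rule, and after each round shrink the weights of rules that performed badly. The rules and the loss sequence below are made-up toy values.

```python
# Illustrative Hedge sketch: combine a bank of simple rules by weighting
# each one, then exponentially down-weighting rules that incur loss.
# The loss sequence below is an invented toy example.
import math

def hedge(rule_losses, eta=0.5):
    """rule_losses: per-round lists of losses in [0, 1], one entry per rule.
    Returns the final normalised weights over the rules."""
    n = len(rule_losses[0])
    w = [1.0] * n
    for losses in rule_losses:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        z = sum(w)
        w = [wi / z for wi in w]
    return w

# Two toy rules: rule 0 keeps incurring low loss, rule 1 high loss.
weights = hedge([[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]])
print(weights)  # the weight mass shifts toward rule 0
```

The appeal, in the spirit of the paragraph above, is robustness: the combination tracks whichever simple rule is currently doing well, without committing to a single delicate model.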
Recently, someone pointed me to work by the economist Scott Page on this theme, in a much broader context. Here is an NYT interview summarizing his view: http://www.nytimes.com/2008/01/08/science/08conv.html. He has also written books on the topic, e.g., Diversity and Complexity, Princeton University Press, 2010. Interesting stuff.