This is the primary question shaping much of the public debate and discourse around the development of autonomous systems technologies, robots being the most visible and eye-catching example – and occasionally quite a scary one, e.g., when you see them being used as carriers of weapons and bombs. As a roboticist who is often deeply ensconced in the technical side of developing capabilities, I find most articles in the popular media to be ill-informed and hyperbolic. So, I was pleasantly surprised to read this McKinsey report. It is not quite popular media, but in the past I have found even some of these consulting-company reports to be formulaic. The key point being made by the authors is that just because something can be automated does not mean it should be, or that it will be. In reality, a variety of factors, including economic and social ones, will shape the path of these technologies – something all of us should pragmatically consider.
This key point is summarised in the following excerpt:
Technical feasibility is a necessary precondition for automation, but not a complete predictor that an activity will be automated. A second factor to consider is the cost of developing and deploying both the hardware and the software for automation. The cost of labor and related supply-and-demand dynamics represent a third factor: if workers are in abundant supply and significantly less expensive than automation, this could be a decisive argument against it. A fourth factor to consider is the benefits beyond labor substitution, including higher levels of output, better quality, and fewer errors. These are often larger than those of reducing labor costs. Regulatory and social-acceptance issues, such as the degree to which machines are acceptable in any particular setting, must also be weighed. A robot may, in theory, be able to replace some of the functions of a nurse, for example. But for now, the prospect that this might actually happen in a highly visible way could prove unpalatable for many patients, who expect human contact. The potential for automation to take hold in a sector or occupation reflects a subtle interplay between these factors and the trade-offs among them.
If you are in the UK, you know about the intense ongoing debate regarding the referendum on whether to leave the EU. Scientists for EU has organised a letter that represents the view of many in the UK scientific community, which you might consider adding your support to: http://scientistsforeu.uk/sign-save-science/.
The core argument being put forth is as follows:
Scientific advance and innovation are critically dependent on collaboration. To remain a world-leading science nation, we must be team players.
The EU leads the world in science output, is beating the US in science growth – and is rapidly increasing investment in research. The EU is a science superpower. Our place in this team has boosted our science networking, access to talent, shared infrastructure and UK science policy impact. The economy of scale streamlines bureaucracy and brings huge added value for all. International collaborations have 40% more impact than domestic-only research.
Strong science is key for our economy and quality of life. It creates a virtuous cycle, leveraging investment from industry, raising productivity and creating high-value jobs for our future. In fact, 20% of UK jobs currently rely on some science knowledge. Science brings better medicines, cleaner energy, public health protections, a safer environment, new technologies and solutions to global challenges.
If we leave the EU, the UK will lose its driving seat in this world-leading team. Free-flow of talent and easy collaboration would likely be replaced by uncertainty, capital flight, market barriers and costly domestic red-tape. This would stifle our science, innovation and jobs.
It is no surprise that a recent survey showed 93% of research scientists and engineers saying the EU is a “major benefit” to UK research. The surprise is that many voters are still unaware that UK science and its benefits would be demoted by a vote to leave.
We, the undersigned, urge you to seriously consider the implications for UK science when you vote in the referendum on UK membership of the EU.
This is the title of a feature article written by RAD alumnus Stefano Albrecht in the recent issue of EUSci magazine (see p. 22):
The question is of course much talked about, with several high-profile advocates answering yes and no. For me, the most interesting outcome of all these debates is the observation that there is a need to focus “not only on making AI more capable, but also on maximizing the societal benefit of AI” (words taken from an Open Letter drafted by the Future of Life Institute, and signed by many AI researchers, including yours truly). From a scientific perspective, even clearly distinguishing between these two objectives is something of an open issue!
The following anecdote came in a posting to one of the mailing lists I subscribe to, on decision theory. The message of course is quite domain independent, and in many ways transcends time too!
On Christmas Eve 1874, Tchaikovsky brought the score of his Piano Concerto no. 1 to Nikolai Rubinstein – the renowned pianist and conductor, and founder of the Moscow Conservatory – for advice on how to make the solo part more effective. This is how Tchaikovsky remembers it.
“I played the first movement. Not a single word, not a single comment! … I summoned all my patience and played through to the end. Still silence. I stood up and asked, ‘Well?’”
“Then a torrent poured forth from Nikolai Gregorievich’s mouth… My concerto, it turned out, was worthless and unplayable – passages so fragmented, so clumsy, so badly written as to be beyond rescue – the music itself was bad, vulgar – here and there I had stolen from other composers – only two or three pages were worth preserving – the rest must be thrown out or completely rewritten…”
“‘I shall not alter a single note,’ I replied, ‘I shall publish the work exactly as it stands!’ And this I did.”
The moral of the story: If you believe in the merits of your work, don’t let a bad referee report get you down. Listen to Tchaikovsky’s Piano Concerto no. 1 to lift your spirits and move on.
There is a very nice initiative hosted at Stanford University, launching a broad based study of the progress and impact of AI: http://cacm.acm.org/news/181386-stanford-to-host-100-year-study-on-artificial-intelligence/fulltext
A somewhat fuller description of the aims and scope of this study can be found here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar
I like the fact that this list of topics is suitably expansive and forward-looking. Some of these themes are quite close to home, e.g., ‘Collaborations with machines’ and ‘Psychology of humans’. There is also appropriate recognition of philosophically basic issues around AI – something that is constantly being pushed into the corner by the more short-sighted views of the people who seem to be driving policy in funding agencies and so on.
The Longitude Prize is an interesting concept. Way back in 1714, it was an award for the person(s) who best solved the difficult problem of determining a ship’s longitude at sea – in that day and age, a real grand challenge, one that literally determined the safety and lives of many people!
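As an aside, the chronometer-based solution that eventually won out rests on simple arithmetic: the Earth rotates 15° per hour, so comparing the observed local time of solar noon against a clock keeping Greenwich time gives the ship’s longitude. A minimal sketch of that calculation (the function name and the idealised, correction-free setting are my own, not part of any historical method):

```python
def longitude_from_chronometer(local_noon_gmt_hours: float) -> float:
    """Estimate longitude in degrees (negative = west of Greenwich) from
    the GMT time, in hours, at which local apparent noon is observed.
    The Earth rotates 15 degrees per hour, so each hour of difference
    from 12:00 GMT corresponds to 15 degrees of longitude."""
    return (12.0 - local_noon_gmt_hours) * 15.0

# Local noon observed when the chronometer (set to GMT) reads 14:00:
# the ship's sun is two hours behind Greenwich, i.e. 30 degrees west.
print(longitude_from_chronometer(14.0))  # → -30.0
```

Of course, the real historical difficulty was not this arithmetic but building a clock that kept Greenwich time accurately through months at sea – which is exactly why it made such a good prize challenge.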
Today, the problems of clocks and chronometers may seem quaint, but the notion of defining some big problems that need solving remains interesting. All the more so, in my local context, given the short-termism of so many sources of funding that are realistically available to researchers.
So, the announcement today that there is a new version of this prize is interesting indeed. The new challenges all address big societal issues, but one of them stands out in particular as having something to do with problems I am professionally interested in: the challenge associated with dementia – how can we help people with dementia live independently for longer?
Assisted-living advocates, including those within the robotics and AI communities, have built systems addressing this problem before, but can all that work be lifted up to the standards of the Longitude Prize? What are the substantial questions that still remain unanswered in this area? Useful things to ponder…
I came across this in my Twitter feed today, and find it really interesting. Gartner, Inc., a leading research and advisory company, routinely puts together a report evaluating the relative readiness and ‘hype level’ of various technologies. The 2013 report includes the following nice visualisation.
As is perhaps the case for most academic researchers, many of the things I am working on fall in the innovation trigger category, inching into the peak of inflated expectations. It is worth keeping in mind the cautionary note that this only means we are a few years from the ‘trough of disillusionment’ – so, unless we have things to show by then, the reception could be rather cold!