What do engineers assume about users’ ability to specify what they want?

I came across the following wonderful gems of tech history in one of my recent readings:

The relational model is a particularly suitable structure for the truly casual user (i.e. a non-technical person who merely wishes to interrogate the database, for example a housewife who wants to make enquiries about this week’s best buys at the supermarket).

In the not too distant future the majority of computer users will probably be at this level.

(Date and Codd 1975, p. 95)

Casual users, especially if they were managers, might want to ask a database questions that had never been asked before and had not been foreseen by any programmer.

(Gugerli 2007)

As I was reading these, it occurred to me that many in my own area of robotics often think in the same way. What would the proper abstractions look like in emerging areas such as robotics, mirroring the advances that have happened in the software space (e.g., contrast the above vision with Apple’s offerings today)? Our typical iPad-toting end user is going to speak neither SQL nor ROS, yet the goal of robotics is to let such users flexibly and naturally program the machine – how can that actually be achieved?

References:

  1. C.J. Date and E.F. Codd, The relational and network approaches: Comparison of the application programming interfaces, Proc. ACM SIGFIDET (now SIGMOD) Workshop, 1974.
  2. D. Gugerli, Die Welt als Datenbank: Zur Relation von Softwareentwicklung, Abfragetechnik und Deutungsautonomie [The world as a database: on the relation between software development, query techniques and interpretive autonomy], in: Daten, Zurich/Berlin: Diaphanes, 2007.

Smart watches are still pretty dumb

I have been looking into sensing technology (especially the wearable kind) with increasing interest. This is in part because of some current work I am involved with (e.g., our papers at IPSN ’13, and ACM TECS ’13, which were really initial forays), but more broadly because many of us are becoming convinced that persistent interaction between (wo)man and computational machines is a defining theme of the next decade or two of technology, and sensors are the mediating entities.

I have also felt for a while now that most sensors, even – especially? – when they are called ‘smart’, actually seem quite dumb. For instance, there is a wealth of papers that talk about smart this and that, when what they really mean is what an AI or robotics person would simply call reactive. In this context, I found this article in the MIT Technology Review interesting. As the author says,

After trying some smart watches, I’ve determined that a good one will need to be more than just reliable and simple to use—it will have to learn when and how to bother me. This means figuring out what I’m doing, and judging what bits of information among countless e-mails, app updates, and other alerts are most pressing. And, naturally, it must look good.

I think this is more than just a product design issue. It is challenging to achieve properly even from a research point of view, requiring the learning of models of human choice behaviour, among other things. There is also the issue of designing good sensors that can actually be deployed, e.g., see this article. Lots to do, but it’s the kind of work that’ll be fun!
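As a hedged sketch of what ‘learning when to bother me’ could mean at its very simplest (all features, data and numbers here are invented for illustration, not a real product design): a tiny logistic-regression classifier trained on a handful of labelled notifications, deciding whether an alert clears the interruption threshold.

```python
import math

# Hedged sketch (features and data invented): learning when a watch should
# interrupt the wearer, as a tiny logistic-regression classifier.
# Feature vector: (from_known_contact, wearer_is_busy, during_work_hours).
TRAINING_DATA = [
    ((1, 0, 1), 1),  # contact message while the wearer is idle -> buzz
    ((1, 1, 1), 0),  # same sender, but wearer busy -> hold
    ((0, 0, 1), 0),  # app promo -> hold
    ((0, 1, 0), 0),
    ((1, 0, 0), 1),
    ((0, 0, 0), 0),
]

def predict(weights, bias, features):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability the alert is worth it

def train(data, epochs=2000, lr=0.5):
    weights, bias = [0.0] * 3, 0.0
    for _ in range(epochs):
        for features, label in data:
            error = predict(weights, bias, features) - label  # log-loss gradient
            bias -= lr * error
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights, bias

WEIGHTS, BIAS = train(TRAINING_DATA)

def should_buzz(features):
    return predict(WEIGHTS, BIAS, features) > 0.5

print(should_buzz((1, 0, 1)), should_buzz((1, 1, 1)))
```

Of course, a real system would need far richer features and genuine models of human choice behaviour, which is exactly why this is a research problem and not just a product one.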

Optimal vs Good enough – how far apart?

The Netflix Tech Blog has a very insightful piece: http://techblog.netflix.com/2011/01/how-we-determine-product-success.html.

In particular, this point is very important, although easily – and often quite willingly – ignored by many researchers:

There is a big lesson we’ve learned here, which is that the ideal execution of an idea can be twice as effective as a prototype, or maybe even more. But the ideal implementation is never ten times better than an artful prototype. Polish won’t turn a negative signal into a positive one. Often, the ideal execution is barely better than a good prototype, from a measurement perspective.

Why do simple techniques work?

My past few posts have been driven by an underlying question that was pointedly raised by someone in a discussion group I follow on LinkedIn (if you’re curious, this is a Quant Finance group that I follow due to my interest in autonomous agent design, and the question was posed by a hedge fund person with a Caltech PhD and a Wharton MBA):

I read Ernest Chan’s book on quantitative trading. He said that he tried a lot of complicated advanced quantitative tools, and it turned out that he kept on losing money. He eventually found that the simplest things often generated the best returns. From your experience, what do you think about the value of advanced econometric or statistical tools in developing quantitative strategies? Are these advanced tools (say wavelet analysis, frequency domain analysis, state space models, stochastic volatility, GMM, GARCH and its variations, advanced time series modelling and so on) more like alchemy in scientific camouflage, or do they really have some value? Stochastic differential equations might have some value in trading vol. But I am talking about quantitative trading of futures, equities and currencies here. Now, technical indicators, the Kalman filter, cointegration, regression, PCA or factor analysis have been proven to be valuable in quantitative trading. I am not so sure about anything beyond these simple techniques.

This is not just a question about trading. The exact same question comes up over and over in the domain of robotics and I have tried to address it in my published work.
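For concreteness, here is a minimal sketch of one of the ‘simple techniques’ the questioner lists – a scalar Kalman filter estimating a nearly constant signal from noisy readings. All numbers are invented for illustration; the point is just how little machinery such a workhorse needs.

```python
import random

random.seed(0)
# Minimal scalar Kalman filter: estimate a (nearly) constant signal from
# noisy sensor readings. All numbers are illustrative.
TRUE_VALUE = 5.0
MEAS_NOISE_VAR = 1.0   # R: variance of the measurement noise
PROCESS_VAR = 1e-5     # Q: tiny process noise (the signal barely drifts)

estimate, variance = 0.0, 1000.0   # start from a vague prior
for _ in range(200):
    z = TRUE_VALUE + random.gauss(0.0, MEAS_NOISE_VAR ** 0.5)  # noisy reading
    variance += PROCESS_VAR                        # predict: uncertainty grows
    gain = variance / (variance + MEAS_NOISE_VAR)  # Kalman gain
    estimate += gain * (z - estimate)              # update toward the measurement
    variance *= (1.0 - gain)                       # uncertainty shrinks

print(round(estimate, 2))
```

Twenty-odd lines, no exotic mathematics, and the estimate settles close to the true value – which is roughly what ‘the simplest things often generated the best returns’ looks like in code.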

My take on this issue is that before one invokes a sophisticated inference algorithm, one has to have a sensible way to describe the essence of the problem – you can only learn what you can succinctly describe and represent! All too often, when advanced methods do not work, it is because they are being used with very little understanding of what makes the problem hard. Often, there is a fundamental disconnect in that the only people who truly understand the sophisticated tools are tool developers, who are more interested in applying their favourite tool(s) to any given problem than in really understanding the problem and asking what is the simplest tool for it.

Moreover, how many people out there have a genuine feel for Hilbert spaces and infinite-dimensional estimation while also having the practical skills to solve problems in constrained ‘real world’ settings? Anyone who has this rare combination would be ideally placed to solve the complex problems we are all interested in, whether using simple methods or more sophisticated ones (i.e., it is not just about tools but about knowing when to use what, and why). But such people are rare indeed.
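A toy illustration of the point about succinct description (all numbers invented): when the underlying structure of a problem really is simple, a model that matches that structure – here, a fitted line – beats a ‘richer’ model that merely memorizes the data, here caricatured as nearest-neighbour replay.

```python
import random

# Toy illustration (all numbers invented): data generated as y = 2x + noise.
# A model matching that succinct description (a least-squares line) generalizes
# better than one that just memorizes the training set (1-nearest-neighbour).
random.seed(42)

def make_data(xs):
    return [(x, 2.0 * x + random.gauss(0.0, 1.0)) for x in xs]

train = make_data([i / 10.0 for i in range(50)])              # x in [0, 4.9]
test = make_data([random.uniform(0.0, 5.0) for _ in range(200)])

# Simple model: least-squares line y ~ a*x + b (closed form).
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def line(x):
    return a * x + b

def nearest(x):
    # 'Richer' alternative: memorize and replay the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(model):
    return sum((model(x) - y) ** 2 for x, y in test) / len(test)

print('line MSE:', mse(line), ' 1-NN MSE:', mse(nearest))
```

The memorizer fits the training data perfectly yet roughly doubles the held-out error, because it has learned the noise rather than the two-parameter description that actually generates the data.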