Code Yourself!

My friend, Areti Manataki, is one of the co-organisers of this excellent MOOC on Coursera, entitled “Code Yourself! An Introduction to Programming”. As the blurb on Coursera says, “Have you ever wished you knew how to program, but had no idea where to start from? This course will teach you how to program in Scratch, an easy to use visual programming language. More importantly, it will introduce you to the fundamental principles of computing and it will help you think like a software engineer.”

I like the emphasis on basics, and the desire to reach the broad audience of pre-college children. Many MOOCs I encounter are just college courses recycled. Instead, if MOOCs are to matter, and if they are to matter in the ways MOOCs are ambitiously advertised (i.e., in the developing world, and in reaching new students who would not otherwise be served by existing formal programmes), then this is the kind of entry point from which I’d expect to see progress.

I made a small contribution to this course by giving a guest interview about our work with the RoboCup project, as a case study. If you take this course, you’ll find this under Unit 3 as “(Optional Video) Interview on football-playing robots [08:41]”.

On blue sky work…

Useful perspective to keep in mind for the next time one receives unfairly critical comments about speculative work:

Successful research enables problems which once seemed hopelessly complicated to be expressed so simply that we soon forget that they ever were problems. Thus the more successful a research, the more difficult does it become for those who use the result to appreciate the labour which has been put into it. This perhaps is why the very people who live on the results of past researches are so often the most critical of the labour and effort which, in their time, is being expended to simplify the problems of the future.

Sir Bennett Melvill Jones, British aerodynamicist.

Are you doing what I think you are doing?

This is the title of a paper by my student, Stefano Albrecht, which we have recently submitted to a conference. The core idea is to address model criticism, as opposed to the better-studied concept of model selection, within the multi-agent learning domain.

For Informatics folks, he is giving a short talk on this paper on Friday at noon, to the Agents group (in IF 2.33). The abstract is below.

The key to effective interaction in many multi-agent applications is to reason explicitly about the behaviour of other agents, in the form of a hypothesised behaviour. While there exist several methods for the construction of a behavioural hypothesis, there is currently no universal theory which would allow an agent to contemplate the correctness of a hypothesis. In this work, we present a novel algorithm which decides this question in the form of a frequentist hypothesis test. The algorithm allows for multiple metrics in the construction of the test statistic and learns its distribution during the interaction process, with asymptotic correctness guarantees. We present results from a comprehensive set of experiments, demonstrating that the algorithm achieves high accuracy and scalability at low computational costs.
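To give a flavour of what such a test might look like, here is my own illustrative sketch in Python; the interfaces are made up and this is not the algorithm from the paper. The idea it shows is simply to score the observed actions under the hypothesised behaviour, simulate the same score under the hypothesis itself, and read off a p-value:

```python
import numpy as np

# Illustrative sketch only (hypothetical interfaces, not the algorithm in the
# paper). A "hypothesis" here is any function mapping a state to a dict of
# action probabilities for the other agent.

def behaviour_score(hypothesis, states, actions):
    """Average log-probability of the observed actions under the hypothesis."""
    eps = 1e-12
    return np.mean([np.log(hypothesis(s).get(a, 0.0) + eps)
                    for s, a in zip(states, actions)])

def consistency_test(hypothesis, states, actions, n_samples=1000, rng=None):
    """Frequentist-style check: simulate the score distribution under the
    hypothesis itself and report how extreme the observed score is."""
    rng = rng or np.random.default_rng(0)
    observed = behaviour_score(hypothesis, states, actions)

    simulated = []
    for _ in range(n_samples):
        sampled = []
        for s in states:
            acts, probs = zip(*hypothesis(s).items())
            sampled.append(rng.choice(acts, p=probs))
        simulated.append(behaviour_score(hypothesis, states, sampled))

    # One-sided p-value: how often does behaviour generated by the hypothesis
    # itself score as low as (or lower than) what we actually observed?
    return float(np.mean(np.array(simulated) <= observed))
```

With a hypothesis such as lambda state: {"pass": 0.7, "shoot": 0.3}, a very small returned value would suggest that the observed play is unlikely to have come from the hypothesised behaviour, which is roughly the flavour of model criticism alluded to above.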

Decision Making when there are Unknown Unknowns

Unknown unknowns are everywhere, a bit like dark matter in the universe. Yet, everything we seem to do in terms of algorithms for learning and inference either assumes a simplified setting that is closed in terms of the hypothesis space (hence not even allowing for these unknown unknowns), or depends on our being able to set up such generally expressive priors that computation is far from tractable. How do real people bridge this gap? We don’t know, of course, but we have started to take a stab at it from a different direction. With my colleague, Alex Lascarides, and my former student, Benji Rosman, we have been looking into this issue in a specific setting – that of asking how an agent incrementally grows its model to reach the level of knowledge of a more experienced teacher, while dealing with a world that requires our agent to expand its hypothesis space during the process of learning and inference.

This is very much ongoing work, of the kind wherein we have an idea of where we might like to end up (a lighthouse on the horizon) but only a very limited idea of the way there, and of the nature of the rocky shores we’ll need to navigate to get there. A status report on the current state of this work, for local Informatics folks, will be an upcoming talk as part of the DReaM talks (10th March, 11:30 am in room IF 2.33) – abstract below.

—————————————————————————————-

Decision Making when there are Unknown Unknowns

Alex Lascarides

Joint work with Ram Ramamoorthy and Benji Rosman

Existing approaches to learning how to solve a decision problem all assume that the hypothesis space is known in advance of the learning process. That is, the agent knows all possible states, all possible actions, and also has complete knowledge of his or her own intrinsic preferences (typically represented as a function from the set of possible states to numeric reward). In most cases, the models for learning how to behave optimally also assume that the probabilistic dependencies among the factors that influence behaviour are known as well.

But there are many decision problems where these high informational demands on learning aren’t met. An agent may have to act in the domain without knowing all possible states or actions, or with only partial and uncertain information about his or her own preferences. And yet if one changes the random variables one uses to represent a decision problem, or one changes the reward function, then this is viewed as a different and unrelated decision problem. Intuitively, one needs a logic of change to one’s decision problem, where change is informed by evidence.

I will present here some relatively half-baked ideas about how to learn optimal behaviour when the agent starts out with incomplete and uncertain information about the hypothesis space: that is, the agent knows there are ‘unknown unknowns’. The model is one where the agent adapts the representation of the decision problem, and so revises calculations of optimal behaviour, by drawing on two sources of evidence: their own exploration of the domain, by repeatedly performing actions and observing their consequences and rewards; and dialogues with an oracle who knows the true representation of the decision problem.

Our hypothesis is that an agent that abides by certain defeasible principles for adapting the representation of the decision problem to the evidence learns to converge on optimal behaviour faster than an agent who ignores evidence that his current representation entails the wrong hypothesis space or intrinsic rewards, or an agent who adapts the representation of the decision problem in a way that does not make the defeasible assumptions we’ll argue for here.
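To make the contrast concrete, here is a small illustrative sketch of my own (in Python; every class and name is a hypothetical stand-in, and this is not the model from the talk). A conventional, closed decision problem fixes its states, actions and rewards before learning starts; the toy agent below instead grows that representation whenever experience surfaces a state it cannot yet express, consulting an oracle about how the novelty bears on rewards:

```python
import random
from dataclasses import dataclass
from typing import Dict, Set

# Illustrative sketch only: hypothetical stand-ins, not the model from the talk.

@dataclass
class DecisionProblem:
    """The 'closed' representation standard approaches assume is known up front."""
    states: Set[str]
    actions: Set[str]
    reward: Dict[str, float]   # intrinsic preferences as a numeric reward

class ToyWorld:
    """A toy domain containing a state ('s2') the agent does not initially know about."""
    def reset(self):
        self.state = "s0"
        return self.state

    def step(self, action):
        nxt = {"s0": "s1", "s1": "s2", "s2": "s2"}
        self.state = nxt[self.state] if action == "move" else self.state
        return self.state, {"s0": 0.0, "s1": 0.0, "s2": 1.0}[self.state]

class Oracle:
    """Knows the true representation and answers queries about novel states."""
    def reward_of(self, state):
        return 1.0 if state == "s2" else 0.0

def run_episode(problem, world, oracle, n_steps=20):
    """Expand the representation whenever experience reveals an 'unknown unknown',
    consulting the oracle about how the new state bears on rewards."""
    state = world.reset()
    for _ in range(n_steps):
        if state not in problem.states:
            problem.states.add(state)                        # grow the hypothesis space
            problem.reward[state] = oracle.reward_of(state)  # dialogue with the oracle
        action = random.choice(sorted(problem.actions))      # crude exploration stand-in
        state, _ = world.step(action)
    return problem

initial = DecisionProblem(states={"s0", "s1"}, actions={"stay", "move"},
                          reward={"s0": 0.0, "s1": 0.0})
print(run_episode(initial, ToyWorld(), Oracle()))
```

An agent that only ever reasoned over its original two states would never represent the rewarding state at all; the deliberately naive expansion step above is merely a placeholder for the defeasible revision principles the talk is actually about.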

Dagstuhl talk – Learning action-oriented symbols

A few months back, I attended a Dagstuhl workshop on Neural-Symbolic Learning and Reasoning – a meeting that tried to bring together people looking at sub-symbolic (primarily but not exclusively neural network) and symbolic (i.e., logic-based) learning and reasoning.

I gave a talk on “Learning Action-oriented Symbols: Abstractions over Decision Processes”. This was an attempt at synthesising the ideas behind a couple of different papers we have worked on over the past two years.

The report associated with this workshop has now been released: http://drops.dagstuhl.de/opus/volltexte/2015/4884/. It is an interesting collection of ideas, especially if you follow the links to the primary publications and associated background materials.

(Next) 100 Years of AI

There is a very nice initiative hosted at Stanford University, launching a broad-based study of the progress and impact of AI: http://cacm.acm.org/news/181386-stanford-to-host-100-year-study-on-artificial-intelligence/fulltext

A somewhat fuller description of the aims and scope of this study can be found here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar

I like the fact that this list of topics is suitably expansive and forward-looking. Some of these themes are quite close to home, e.g., “Collaborations with machines” and “Psychology of humans”. There is also appropriate recognition of philosophically basic issues around AI – something that is constantly being pushed into the corner by the more short-sighted views of people who seem to be driving policy in funding agencies and so on.

Sensors on surfers

This video from Red Bull Sports describes a very interesting array of technology (sensing modalities, to be precise) applied to the problem of better understanding the athletes’ physiology and performance:

http://www.redbull.com/us/en/surfing/stories/1331686821979/this-is-what-happens-when-scientist-go-surfing

Jake Marshall - Action - Surf Science

© Seth de Roulet/Red Bull Content Pool

Many of these sensors are becoming increasingly common. Certainly, in robotics labs like mine, we are routinely using motion tracking technology of various kinds, connecting tracked traces to motion analysis and so on. The way the Red Bull team have used pressure-sensing footwear, and the way they have set up the UAV to do personal tracking, are handled very nicely.

Beyond that, although we are also beginning to play with eye tracking and EEG, I have not yet seen anyone apply them in such a physically demanding environment. I’d love to know more about how the technology was actually deployed – we are keen to try similar things in our own applications. Of course, the video says little about what was actually obtained from the analysis; the EEG sensor, it appears, only provides a very coarse frequency-domain measurement of the alpha waves. Are they actually able to get meaningful interpretations from this sensor? For that matter, are they able to get genuinely useful insights from the SMI eye tracker?
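For reference, the kind of coarse frequency-domain measurement I have in mind is easy to sketch (in Python, with made-up parameters; I have no idea what the sensor’s actual processing pipeline looks like): take a window of EEG samples from one channel and compute the relative power in the alpha band.

```python
import numpy as np

# Rough sketch with assumed parameters: relative alpha-band (8-12 Hz) power
# for a window of EEG samples from a single channel.

def alpha_band_power(samples, sampling_rate=256.0, band=(8.0, 12.0)):
    """Fraction of spectral power falling in the alpha band."""
    x = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2      # power spectrum, DC removed
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sampling_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()
```

Whether a single number like this supports any meaningful interpretation while someone is paddling into a wave is exactly the question above.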

Cool stuff!