Postdoctoral Research Associate position available in Robotics and Machine Learning

Applications are invited for the position of Postdoctoral Research Associate in an EPSRC-funded Robotics and Artificial Intelligence hub: Offshore Robotics for Certification of Assets (ORCA). This is a large project with numerous open positions, including one within my group on Intelligent Human-Robot Interaction with Explainable Artificial Intelligence. The position will be based in the School of Informatics at the University of Edinburgh. It is a fixed-term post, available from 1 May 2018 until 31 March 2021.
The ORCA Hub is an ambitious initiative that brings together internationally leading experts from five UK universities with over 30 industry partners. Led by the Edinburgh Centre for Robotics (University of Edinburgh and Heriot-Watt University), in collaboration with Imperial College London and the Universities of Oxford and Liverpool, this multi-disciplinary consortium brings unique expertise in subsea, ground and aerial robotics, as well as in human-machine interaction, innovative sensors for non-destructive evaluation, low-cost sensor networks, and asset management and certification.
The particular focus of the Postdoctoral Research Associate on this project will be on:
  1. Techniques to enable understanding of the causal structure of specific decisions, towards improved planning and policy learning, diagnosis, and model/policy repair, including:
    • Causal interpretation of opaque (machine learnt) models, reasoning about “why”, “but for”, etc.
    • Tools for (statistical) model criticism and repair
  2. Application of these techniques in robotic system testbeds, demonstrating this capability within an integrated system with live data.
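To make the "but-for" style of query concrete, here is a minimal illustrative sketch (not project code; the toy decision rule and all names are hypothetical) of asking whether an opaque decision would have differed but for one input taking a different value:

```python
# Illustrative sketch only: a minimal "but-for" (counterfactual) probe of an
# opaque decision function. The toy policy and feature names are hypothetical.

def policy(obs):
    """A stand-in for an opaque, machine-learnt decision rule."""
    # e.g. "inspect" the asset if the sensor reading is high or battery is ample
    return "inspect" if obs["sensor"] > 0.7 or obs["battery"] > 0.5 else "skip"

def but_for(obs, feature, alternative, decide=policy):
    """Would the decision have differed but for `feature` taking `alternative`?

    Returns (original_decision, counterfactual_decision, changed).
    """
    original = decide(obs)
    counterfactual_obs = dict(obs, **{feature: alternative})
    counterfactual = decide(counterfactual_obs)
    return original, counterfactual, original != counterfactual

obs = {"sensor": 0.9, "battery": 0.2}
print(but_for(obs, "sensor", 0.1))  # ('inspect', 'skip', True)
```

Of course, real model criticism has to account for dependencies between inputs rather than intervening on one feature in isolation; the sketch only illustrates the shape of the query.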
The Research Associate will have opportunities for collaboration, both with a cohort of similar Research Associates within the School and with world-class faculty covering all areas of machine learning, AI and robotics.
Further details and a link to the online application portal are available at https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=043260

(see description for Project D).

Informal enquiries may be directed to me (Dr Subramanian Ramamoorthy) at s.ramamoorthy@ed.ac.uk.

RAD alumnus on Forbes 30 Under 30 list

It is great to see former RAD student Alesis Novik and his colleague (also a former Informatics student) Andrius Sutas feature on the Forbes 30 Under 30 list for Europe, for the work they are doing in their startup company, AimBrain. Incidentally, their lead scientist, Stathis Vafeias, is also a RAD alumnus, so all the more reason we are rooting for their success!

As the Forbes description says, their “tech combines voice, facial and behavioral biometrics, combined with a deep learning engine, all designed to create an accurate profile of a user over time. That helps it determine when a fraudulent transaction is taking place. The firm works with large financial enterprises with hundreds of thousands to millions of users each.”

Discussion at the Disruptive Innovation Festival

I recently participated in this event, organised by the Ellen MacArthur Foundation, on the provocatively titled topic of “Maybe the Robot Invasion is a Good Thing”. In fact, the event was more of a discussion on all things AI and robotics, with host Jules Hayward, with an emphasis in the second half on the social impact of these technologies. The Festival organisers have recorded the session and made it available (I am told, until shortly after Christmas). If you are interested in listening, here is the link:

https://www.thinkdif.co/sessions/maybe-the-robot-invasion-is-a-good-thing

Postdoctoral Research Positions within Offshore Energy Hub

Following the earlier announcement regarding the DARPA XAI project, we have further positions open within my group: https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=041917.

“Project F” pertains to Intelligent HRI with Explainable AI, including work on:

  1. Techniques to enable understanding of the causal structure of specific decisions, towards diagnosis and model/policy repair, including:
    • Causal interpretation of opaque (machine learnt) models, reasoning about “why”, “but for”, etc.
    • Tools for (statistical) model criticism and repair
  2. Application of these techniques in robotic system testbeds, demonstrating this capability within an integrated system with live data.

Postdoctoral position in Reinforcement Learning and Explainable AI

Applications are invited for the position of Postdoctoral Research Associate in a project funded by DARPA within the Explainable Artificial Intelligence programme. The position will be based in the School of Informatics at the University of Edinburgh.

The project, titled COGLE: Common Ground Learning and Explanation, is in collaboration with the Palo Alto Research Center Inc. (PARC), Carnegie Mellon University, the University of Michigan, the Florida Institute for Human & Machine Cognition and the US Military Academy at West Point. The aim of the project is to develop a highly interactive sense-making system that is able to explain the learned performance capabilities of autonomous systems to human users. Towards this end, our group will focus on machine learning methods to construct decision policies for an autonomous system to perform a variety of missions, as well as methods to induce representations of the models and policies at varying levels of abstraction.

The particular focus of the Postdoctoral Research Associate on this project will be to develop (hierarchical) methods for task and motion planning, to develop methods for program induction, and to connect the two in order to devise novel methods for reinforcement learning of policies that are also amenable to explanation and causal reasoning. Experiments will be carried out in domains including flight mission planning with Unmanned Aerial Vehicles (UAVs). The work in this project is synergistic with other similarly funded projects within our research group. So, if the Research Associate were so inclined, there is the opportunity to collaborate with other researchers who are applying similar methods to a variety of other robotics problems, ranging from laboratory-scale mobile manipulation robots, such as the PR2, to underwater, ground and aerial robots operating in hazardous environments such as offshore energy installations.

The post is available from 1 December 2017 for 3 years. The closing date for applications is 27 November 2017.
Further details and a link to the online application portal are available at

Interview with RAD student, Daniel Angelov

One of our students, Daniel Angelov, has spent the past few months at Xerox PARC as an intern. He has been doing interesting things associated with the COGLE project I mentioned earlier, part of the DARPA XAI programme.

The folks at PARC have put up this interview on their blog, in which Daniel talks about his work and time there:

http://blogs.parc.com/2017/10/meet-the-parc-intern-daniel-angelov-talks-robots-and-the-future/

Project COGLE, within the DARPA XAI Programme

We have been awarded one of the projects under the DARPA Explainable AI programme, to be kicked off next week. Our project, entitled COGLE (Common Ground Learning and Explanation), will be coordinated by the Xerox Palo Alto Research Center (PARC), and I will be a PI leading technical efforts on the machine learning side of the architecture.

COGLE will be a highly interactive sense-making system for explaining the learned performance capabilities of an autonomous system and the history that produced that learning. COGLE will be initially developed using an autonomous Unmanned Aircraft System (UAS) test bed that uses reinforcement learning (RL) to improve its performance. COGLE will support user sensemaking of autonomous system decisions, enable users to understand autonomous system strengths and weaknesses, convey an understanding of how the system will behave in the future, and provide ways for the user to improve the UAS’s performance.

To do this, COGLE will:

  1. Provide specific interactions in sensemaking user interfaces that directly support modes of human explanation known to be effective and efficient in human learning and understanding.
  2. Support mapping (grounding) of human conceptualizations onto the RL representations and processes.
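As a toy illustration of the kind of learnt behaviour such a system must make sense of (a hypothetical sketch, not COGLE code), here is tabular Q-learning on a five-state corridor; the resulting greedy policy is exactly the sort of opaque artefact an explanation interface has to ground in human conceptualisations:

```python
# Hypothetical sketch, not COGLE code: tabular Q-learning on a 5-state
# corridor. The learnt greedy policy is the kind of opaque artefact an
# explanation system must then make intelligible to a human user.
import random

N_STATES, GOAL = 5, 4            # states 0..4; reward on reaching state 4
ACTIONS = (-1, +1)               # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3

random.seed(0)
for _ in range(500):
    s = random.randrange(N_STATES - 1)          # random non-goal start state
    while s != GOAL:
        if random.random() < eps:               # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy extracted from the learnt Q-values
greedy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(greedy)   # every non-goal state ends up preferring to step right (+1)
```

Even in this trivially small example, the Q-table itself says nothing about *why* the agent steps right; an interface like COGLE's would need to surface the reward structure and the history of experience that produced those values.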

This area is increasingly being discussed in the public sphere, in the context of the growing adoption of AI into daily life; see, e.g., this article in the MIT Technology Review and this one in Nautilus, both referring directly to this DARPA programme. I look forward to contributing to this theme!