
23 June 2022

Helping robots to understand people - the work of the Social AI & Robotics Lab


The SAIR team working in the Social AI & Robotics Lab

Dr Oya Celiktutan from the Department of Engineering is Head of the Social AI & Robotics Lab (SAIR), which is working to develop robots that can understand and adapt to human behaviour. She updated us on the current work of the lab.

What are you setting out to do?

The vision of SAIR is to transform human daily life with robots that assist people at home, at work, and in public spaces. To achieve this ambitious goal, my team focuses on building cutting-edge machine learning algorithms that enable robots to perceive and interact with humans and their environment.

Who are you working with?

My lab has been successful in securing funding from the EPSRC, the Royal Society, and industrial partners, as well as in establishing strong collaborations across the academic community.

What are you working on?

Recent developments include:

Learning to generate human-inspired behaviours for robots: For robots to be successful in human environments, they need to engage in interactions in a human-like manner and with higher levels of autonomy. Despite the exponential growth of the fields of human-robot interaction and social robotics, the capabilities of current social robots are still limited. Most robots rely on labour-intensive and impractical techniques such as teleoperation, whereby a human operator controls the robot remotely. Moreover, designing interaction logic by manually programming each behaviour is notoriously difficult, given the complexity of social interaction.

I think that modelling human behaviour and interactions is the most natural guide to designing human-robot interaction interfaces, and my EPSRC New Investigator Award project aims to lay the foundations for the next generation of robots that can learn simply by watching humans. Together with my Post-Doctoral Research Associate, Dr Tan Viet Tuyen Nguyen, we developed a novel approach for forecasting human behaviours during dyadic interactions, which received the Honourable Mention Award at the ICCV Understanding Social Behaviour in Dyadic and Small Interactions Challenge 2021. We are now focusing on novel methods to transfer the learned interactive behavioural models to robot perception and control, and we are partnering with one of the leading robotics companies, SoftBank Robotics Europe.
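To make the idea of behaviour forecasting concrete, here is a minimal sketch of how a forecaster for dyadic interaction could be structured: a recurrent model encodes one partner's observed motion features and rolls out a prediction of the other partner's next frames. The architecture, feature dimensions, and names below are assumptions for illustration only and do not describe the lab's actual model.

```python
# Illustrative sketch only: a minimal sequence model that forecasts one
# interaction partner's behaviour features from the other's recent motion.
# All shapes, dimensions, and names are assumptions made for this example.
import torch
import torch.nn as nn


class DyadicForecaster(nn.Module):
    """Predict the next few frames of partner B's features from partner A's history."""

    def __init__(self, feat_dim=32, hidden_dim=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)

    def forward(self, partner_a_history, partner_b_last):
        # Encode partner A's observed behaviour (batch, time, feat_dim).
        _, h = self.encoder(partner_a_history)
        # Autoregressively roll out partner B's predicted behaviour.
        step = partner_b_last.unsqueeze(1)          # (batch, 1, feat_dim)
        outputs = []
        for _ in range(self.horizon):
            out, h = self.decoder(step, h)
            step = self.head(out)                   # next predicted frame
            outputs.append(step)
        return torch.cat(outputs, dim=1)            # (batch, horizon, feat_dim)


# Toy usage with random tensors standing in for pose/behaviour features.
model = DyadicForecaster()
a_hist = torch.randn(4, 50, 32)   # 4 dyads, 50 observed frames of partner A
b_last = torch.randn(4, 32)       # partner B's most recent frame
prediction = model(a_hist, b_last)
print(prediction.shape)           # torch.Size([4, 10, 32])
```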

Deep reinforcement learning: Equipping robots with adaptive behaviours is very challenging; as a result, many potential applications of robotics in open, dynamic environments are currently impractical. Reinforcement learning offers a framework for automatically acquiring behaviour from experience, enabling the automation of a wide range of complex tasks. However, one limitation is that most research in the field has relied on hand-designed reward functions.
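As a concrete illustration of that limitation, the sketch below shows what a hand-designed reward for a simple 2-D reaching task might look like. Every weight and threshold is an arbitrary choice made for this example, and tuning such terms for each new task is exactly the kind of manual effort described above.

```python
# Illustrative sketch only: a hand-designed reward for a toy 2-D reaching task.
# The weights and thresholds are assumptions chosen for this example.
import numpy as np


def hand_designed_reward(end_effector_xy, goal_xy, action):
    """Reward = progress toward the goal, minus an effort penalty, plus a success bonus."""
    distance = np.linalg.norm(goal_xy - end_effector_xy)
    reward = -distance                      # shaping term: closer is better
    reward -= 0.01 * np.sum(action ** 2)    # penalise large motions
    if distance < 0.05:                     # success threshold (hand-tuned)
        reward += 10.0
    return reward


# Inside a generic RL loop the agent only ever sees this scalar signal,
# so every behaviour it learns is shaped by these hand-chosen terms.
goal = np.array([1.0, 0.5])
state = np.array([0.2, 0.1])
action = np.array([0.3, 0.2])
print(hand_designed_reward(state, goal, action))
```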

To address these limitations, my PhD student, Edoardo Cetin, and I are developing novel reinforcement learning algorithms that can incorporate a diverse range of learning signals, enabling an agent to build a robust understanding of the world with few assumptions about environment instrumentation and user knowledge. This work has resulted in two core publications, at the International Conference on Learning Representations (ICLR) and the International Conference on Machine Learning (ICML). We introduced a novel algorithm, DisentanGAIL, to solve the observational (visual) imitation learning problem, and a novel framework, Routines, which improves the performance and computational efficiency of two widely used reinforcement learning algorithms. Both ICLR and ICML are ranked A* in the CORE (Computing Research and Education) 2021 conference rankings, and they are among the top 20 publication venues in the category of Engineering and Computer Science, ranked 3rd and 7th respectively.
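For intuition, here is a minimal, hedged sketch of the general adversarial imitation-from-observation idea that work like DisentanGAIL builds on, not the algorithm itself: a discriminator learns to tell expert observation transitions from the agent's, and its output is turned into a surrogate reward. All dimensions and training details are assumptions for the example.

```python
# Illustrative sketch only (not DisentanGAIL): adversarial imitation from
# observation with a discriminator over (observation, next observation) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim = 16  # assumed observation size for this toy example

# Discriminator over observation transitions (no expert actions needed).
discriminator = nn.Sequential(
    nn.Linear(2 * obs_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(discriminator.parameters(), lr=3e-4)

def discriminator_step(expert_obs, expert_next, agent_obs, agent_next):
    """One binary-classification update: expert transitions vs agent transitions."""
    expert_logits = discriminator(torch.cat([expert_obs, expert_next], dim=-1))
    agent_logits = discriminator(torch.cat([agent_obs, agent_next], dim=-1))
    loss = (F.binary_cross_entropy_with_logits(expert_logits, torch.ones_like(expert_logits))
            + F.binary_cross_entropy_with_logits(agent_logits, torch.zeros_like(agent_logits)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def imitation_reward(obs, next_obs):
    """Surrogate reward for the RL agent: high when the transition looks expert-like."""
    with torch.no_grad():
        logits = discriminator(torch.cat([obs, next_obs], dim=-1))
        return F.logsigmoid(logits)   # log D(s, s')

# Toy usage with random tensors standing in for batches of observations.
batch = lambda: torch.randn(32, obs_dim)
print(discriminator_step(batch(), batch(), batch(), batch()))
print(imitation_reward(batch(), batch()).shape)   # torch.Size([32, 1])
```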

In addition to top-quality papers, our research in deep reinforcement learning has been supported by Toyota Motor Europe, including the contribution of a mobile robot, the HSR (Human Support Robot), for our work.

We have also collaborated with the University of Oxford on a paper titled “Stabilizing Off-Policy Deep Reinforcement Learning from Pixels”, which was recently accepted to ICML 2022.

Social robot navigation: In my research lab, we aim to make robots learn autonomously through interactions with humans in the real world. Together with my PhD student, Viktor Schmuck, we introduced a first-of-its-kind dataset for robot perception and navigation, the Robocentric Indoor Crowd Analysis (RICA) dataset. RICA was collected during the Engineering Launch Celebration in November 2019 using only the robot’s onboard sensors. So far, our work with RICA has resulted in three publications and received the NVIDIA CCS Best Student Paper Award Runner-Up at the IEEE International Conference on Automatic Face and Gesture Recognition 2021.
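As a loose illustration of what "social" navigation can mean in practice, the sketch below scores candidate waypoints by progress towards a goal plus a penalty for intruding on people's personal space. The weights and the personal-space radius are assumptions invented for this example; they are not taken from RICA or from our navigation work.

```python
# Illustrative sketch only: a proxemics-aware cost for choosing waypoints.
# All weights and the 1.2 m "personal space" radius are assumed values.
import numpy as np


def social_cost(waypoint, goal, people, personal_space=1.2, w_goal=1.0, w_social=3.0):
    """Lower is better: distance to goal plus penalties for entering personal space."""
    cost = w_goal * np.linalg.norm(goal - waypoint)
    for person in people:
        d = np.linalg.norm(person - waypoint)
        if d < personal_space:
            cost += w_social * (personal_space - d)   # linear penalty inside the bubble
    return cost


# Pick the best of a few candidate waypoints around the robot.
goal = np.array([5.0, 0.0])
people = [np.array([2.0, 0.2]), np.array([3.5, -0.4])]
candidates = [np.array([2.0, 1.5]), np.array([2.0, 0.0]), np.array([2.0, -1.5])]
best = min(candidates, key=lambda w: social_cost(w, goal, people))
print(best)   # the candidate that makes progress while keeping clear of the crowd
```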

Continual robot learning: There is a growing need for personalised systems that can adapt to dynamic environments and continually learn new tasks. Modern deep neural networks are not adequate for this because they suffer from catastrophic forgetting: when a network is continuously updated with novel incoming data, the updates can override knowledge acquired from previous data. Continual learning aims to design systems that keep acquiring new knowledge while maintaining performance on previously learned tasks; however, approaches that consider a robotics context are still scarce. Together with my PhD student, Jian Jiang, I focus on developing novel continual learning algorithms that take into account the limitations of robotics platforms (e.g., space and computational power). This work recently received the NVIDIA Hardware Grant Award.
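To illustrate one simple mitigation of catastrophic forgetting, the sketch below uses rehearsal: a small memory of past-task examples is mixed into every update on the new task, keeping storage bounded, which matters on resource-limited robot hardware. The buffer size, network, and sampling scheme are arbitrary choices for the example and are not our algorithm.

```python
# Illustrative sketch only: rehearsal against catastrophic forgetting.
# Buffer size, network, and sampling scheme are assumptions for this example.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

memory = []          # small rehearsal buffer of (x, y) pairs from earlier tasks
MEMORY_SIZE = 200    # deliberately tiny: robots have limited storage

def train_step(x_new, y_new):
    """Update on new-task data mixed with a batch replayed from memory."""
    xs, ys = [x_new], [y_new]
    if memory:
        replay = random.sample(memory, min(len(memory), len(x_new)))
        xs.append(torch.stack([x for x, _ in replay]))
        ys.append(torch.stack([y for _, y in replay]))
    x, y = torch.cat(xs), torch.cat(ys)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Reservoir-style: remember a few of the new examples for future tasks.
    for xi, yi in zip(x_new, y_new):
        if len(memory) < MEMORY_SIZE:
            memory.append((xi, yi))
        elif random.random() < 0.1:
            memory[random.randrange(MEMORY_SIZE)] = (xi, yi)
    return loss.item()

# Toy usage: one batch from the "current task".
print(train_step(torch.randn(16, 20), torch.randint(0, 5, (16,))))
```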

 

In this story

Oya Celiktutan

Senior Lecturer in Engineering (Robotics)
