
Meet: Dr Oya Çeliktutan

Dr Oya Çeliktutan builds the kind of robots we want to be around: robots that can read social cues, respond appropriately, and stay calm under pressure.

"They are certainly noisy and busy spaces," Oya Çeliktutan says of hospital reception desks and corridors. They are also not places where you want a clumsy robot blocking the way, speaking in the wrong tone, or misunderstanding those who need help the most.

For Oya, Reader at the Centre for Robotics Research in the Department of Engineering at King’s College London and Head of the Social AI & Robotics (SAIR) Lab, a noisy hospital hallway filled with stressed people is but one of the important spaces where responsible AI fulfils a clear purpose.

Oya designs robots that interact with the real world: robots and virtual agents that can see, listen, and move in ways that make sense to the humans around them. Her research aims to develop robots that can read the room, adapt to their surroundings, and move through spaces without getting in the way or causing frustration. She achieves this through work on robot navigation in crowds, continual learning, social signal processing, and user modelling.

How do we design robots that are socially responsible and responsive across the full spectrum of everyday life? – Dr Oya Çeliktutan

One strand of that question is algorithmic fairness. When a robot is tasked with understanding how anxious or distressed a patient is by reading their facial expressions and voice, it matters enormously whose faces and voices it learned from during training. If the training data lacks diversity, the robot may misread people who don’t fit that profile. Oya’s recent work addresses this problem in two ways: first, by developing methods to detect bias in AI systems even when we don't have complete demographic information about the training data; and second, by finding ways to maintain fairness when a robot trained in one setting is moved to another with different populations.

When robots "look at" us, it is vital they don’t misread some of us more than others. Currently, AI systems are often more accurate at reading emotions and expressions from faces that are well-represented in their training data – typically lighter-skinned, younger faces – and less accurate with others. This is a critical concern if those robots are to be involved in triage, mental health screening, or allocating attention in busy public spaces.
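The disparity described above can be made concrete with a small sketch. This is illustrative only: the data, group labels, and helper function are invented for this example and are not Oya's method; real fairness auditing works on far richer data and metrics.

```python
# Illustrative sketch: measuring whether a classifier's accuracy differs
# across demographic groups -- the kind of disparity fairness audits look for.
# All data below is invented for the example.

from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Return accuracy per demographic group, plus the largest accuracy gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    accuracies = {g: correct[g] / total[g] for g in total}
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Toy example: a classifier reading "distressed" vs "calm" expressions.
preds  = ["calm", "distressed", "calm", "calm", "distressed", "calm"]
truth  = ["calm", "distressed", "calm", "distressed", "distressed", "distressed"]
groups = ["A", "A", "A", "B", "B", "B"]

accs, gap = per_group_accuracy(preds, truth, groups)
print(accs)  # group A is read correctly far more often than group B
print(gap)   # a large gap flags a fairness problem worth investigating
```

A large gap on data like this would flag exactly the failure mode the article warns about: a robot that reads some faces reliably and others poorly. The harder research problems Oya tackles start where this sketch ends, for example, when the group labels themselves are incomplete or unavailable.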

Illustrated classroom scene showing young children interacting warmly with friendly cartoon robots and plush animal robots, seated at small desks and tables under a chalkboard labeled 'Care Bots Pilot Study'

Much of Oya’s current work is inspired by her project on care robots for children undergoing cancer treatment. Through a King’s Together Research Award, she has explored how socially assistive robots can support children’s mental health and quality of life in collaboration with clinicians and families. The Care Bots pilot, supported by Oya’s PhD student Shruti Shreya, is not about replacing human caregivers but about offering comforting interactions, keeping a child engaged during long procedures and times of social isolation so that human staff and parents can better focus on the uniquely human parts of care. Here, Oya's work on fairness becomes directly relevant to ethical design. A robot that misreads a child's distress is not just "less accurate"; its failure is a matter of psychosocial safety.

That same spirit underpins Miroka, a humanoid robot developed by Enchanted Tools, which Oya and her collaborators are helping to equip with social and physical artificial intelligence. They will pilot the robot at Guy's and St Thomas' NHS Foundation Trust, with a preview planned at Digital Health Rewired 2026.

Manufacturing and sustainability present another area where Oya's research is making an impact. Here, "responsible robotics" means planning for remanufacturing and recycling, and designing data-driven methods to support a circular economy for robot hardware. Instead of treating robots as disposable assets in highly automated factories, Oya explores ways to treat them as long-lived infrastructure, with environmental and societal obligations attached. Whether a robot is navigating a crowded corridor or a production line, it has to adapt to changing conditions, learn over time, and respect constraints it perhaps cannot see directly, ranging from worker safety to material scarcity. The techniques Oya develops for one context – teaching robots to learn continuously, predict human behaviour, and navigate social spaces – turn out to work well in the other.

Underneath these flagship projects lies a dense mesh of technical contributions. Oya's group has produced award-winning work on conversational group detection and socially-aware navigation. Other projects tackle how robots can keep learning from previously unseen tasks without forgetting what they already know, and how they might "watch" human-human interactions to understand cooperative timing and nonverbal cues. In parallel to this foundational work, Oya and her team advance important research in explainable robotics, tackling the question of how robots can explain their actions to build trust.

Two people looking at a giant soft robot fern

And then there is the playful, public-facing side. In collaboration with Air Giants, Oya's team helped create Sprout, a giant soft robot that lived for six months at Science Gallery London and later formed part of King's outdoor Glowbot Garden in the Strand Aldwych pedestrianised area. The residency used the theory of proxemics – the study of how humans use space as a form of non-verbal communication – to explore how motion, distance and space might facilitate effective communication between humans and robots. Sprout responded to visitors' movements and proximity, creating playful interactions that helped people experience firsthand what it feels like to communicate non-verbally with a robot. It serves as a reminder that robots are not just tools but also cultural artefacts that people touch, photograph, and tell stories about.

Across all of this, for Oya, a philosophy emerges: robots should be powerful in what they can sense and learn, but modest in how they appear in our lives. They should help clinicians in overstretched hospitals, children navigating frightening treatments, and workers on factory floors, without demanding centre stage. Designing such robots turns out to require a very specific mix of skills: deep expertise in how machines understand human behaviour, multi-modal perception models, continual learning algorithms, and a track record of long-term collaborations with hospitals, artists, and industrial partners.

It also requires researchers who are willing to wear many different hats at any one time while holding onto a clear sense of social purpose. Responsible robotics is a team sport and Oya is quick to redirect credit to students and partners, showcasing another kind of responsibility and fairness.

Group of students and staff standing in a robotics lab behind three humanoid and service robots
The Social AI & Robotics Lab
Initiatives like the King’s Institute for Artificial Intelligence bring people together, open horizons and help ensure what gets built is useful for end-users, and not just impressive in a niche technical benchmark. – Dr Oya Çeliktutan

Ultimately, Oya says, this is what her research agenda is all about: building robots that are technically strong and socially literate, behave fairly and adapt sensitively, and slot into the realities of human experience. It's what "AI for the public good" looks like when you leave the lab and step into the messy spaces of everyday life.

In this story

Oya Celiktutan

Reader in AI and Robotics