
Meet our new researchers from the Department of Informatics

Our interview series introduces new academic staff who started this academic year in the Faculty of Natural, Mathematical & Engineering Sciences.

In the first instalment, we spoke to Dr Gerard Canal, Dr Michael Cook and Dr David Watson from the Department of Informatics about their research, who inspires them in their field, and the future of artificial intelligence.


Dr Gerard Canal is a Lecturer in Autonomous Systems and a RAEng UK IC Postdoctoral Research Fellow in the Department of Informatics. He works closely with Dr Andrew Coles and Dr Oya Celiktutan on cutting-edge research projects aimed at developing assistive robotics, improving human-robot interaction and applying AI planning to the field of robotics.

 

What first attracted you to the field of Artificial Intelligence?

When I was a kid I assembled a small robot with my father, with the pieces arriving in weekly instalments at our local kiosk. It was very exciting to see that thing move based on a program I had made.

During my Bachelor of Science in Computer Science I joined a RoboCup@Home team, where I learned more about robotics.

From there, I decided to study an MSc in Artificial Intelligence, because I wanted robots to become more autonomous and more useful.

Is there a scientist in history, or today, who is your biggest inspiration/role model? And why?

I have had the luck of working with many great scientists, starting from supervisors I had in my BSc and MSc (Professors Cecilio Angulo and Sergio Escalera), to my PhD (Dr Guillem Alenyà and Professor Carme Torras).

I'm also inspired by the colleagues and collaborators I've had since, such as Dr Michael Cashmore, Daniele Magazzeni, Oya Celiktutan, Rita Borgo, Matteo Leonetti, and Andrew Coles. I've learned a lot from all of them.

Tell us about something you are working on at the moment - what is exciting about it?

I'm currently looking into ways to extend robot autonomy in the context of a task (e.g., doing household chores). This means the robots can run on their own for longer.

When the robot analyses the task it has to perform, it comes up with potential goals to achieve it. We're trying to find smart ways of getting the robot to choose the goals that are interesting. Later on, I want to use this research to explain the robot's decisions to the user, as we'd all want to know why our robot is doing what it is doing!

Please give us an example of AI enhancing everyday life in 2023 that you particularly like.

From robotic vacuum cleaners to navigation apps, there are many examples of applications that (hopefully) make our lives a bit easier. One of my favourites is real-time language translation, which allows people speaking completely different languages to understand each other.

More recent systems based on Large Language Models (LLMs), such as ChatGPT, are now getting a lot of attention. I see huge potential for them in robotics applications; in particular, I see them being useful as a knowledge base for robots to leverage.

What do you think is the biggest misconception people have about AI?

That people believe it's far more advanced than it actually is. We've had many impressive advances in recent years, but these systems are not as intelligent as you'd think (or as sci-fi films suggest). I feel this is particularly evident in robotics, where people participating in research experiments sometimes expect far more than the robots can actually do.

 

Dr Michael Cook is a Senior Lecturer in the Department of Informatics with a research focus on computational creativity and applications of AI to game design and development. Through his work in computational creativity, Dr Cook is helping to pave the way for the next generation of intelligent machines that can create and innovate in ways previously thought impossible.

What first attracted you to the field of Artificial Intelligence?

Artificial intelligence is all about making computers do new, weird and difficult things. I think that's what attracted me - the chance to try things that might be impossible, silly or strange. I like that AI is always changing, and that's only become more obvious in the last ten years.

Is there a scientist in history, or today, who is your biggest inspiration/role model? And why?

I'm really inspired by people like Dr Timnit Gebru who are willing to have difficult conversations with people about our research and take stands on important issues. I think people like her help remind me that this job isn't just about papers and code - it's about people, impact on society, and our responsibility to everyone.

I'm also really inspired by Professor Ursula Martin, who is currently my mentor as part of the Royal Academy of Engineering. Professor Martin has worked in many different fields and is always open to new ideas. I love that interdisciplinary, experimental mindset, and I try to adopt it in my own work.

Tell us about something you are working on at the moment - what is exciting about it?

I'm currently working on Puck, an AI game designer that I hope will enable more people to experiment with creative AI for games. You can watch Puck design and test games, and then try out what it makes and give it feedback. I love building AI like this because it always surprises you - no matter how predictable I think it will be, even as the person who programmed it, I'm surprised all the time.

Please give us an example of AI enhancing everyday life in 2023 that you particularly like.

I think automatic captioning is a nice advance that's improving accessibility for a lot of people. There are a lot of issues with the technology - captioning works best for American accents speaking English, for example - but it's making online videos, streaming, video messaging and more accessible to many more people.

What do you think is the biggest misconception people have about AI?

That certain changes are inevitable. Technology is not a straight path; it has lots of branches and choices, and even if a particular technology seems to be "the future", we can always change course or choose differently. That's why the public and scientists need to work more closely together, so we can decide what kind of future we want to build with AI.

 

Dr David Watson is a Lecturer in the Department of Informatics who obtained his doctorate from the University of Oxford. His research focuses on Machine Learning and Causality.

What first attracted you to the field of Artificial Intelligence?

I was initially drawn to artificial intelligence by my fascination with human cognition, which is so complex and varied. That said, my academic trajectory has been a bit unusual - I originally started my studies in philosophy. I took a philosophy of computation course as an undergraduate. The class got me totally hooked.

Is there a scientist in history, or today, who is your biggest inspiration/role model? And why?

I suppose among scientists of all time we are more or less obligated to state the obvious – Einstein was an unrivalled genius who combined visionary creativity with mathematical rigour to achieve revolutionary results. However, to cite someone a bit closer to my own field, I would say that Judea Pearl’s work on causality has been a major inspiration. His writing is exceptionally clear, and the ideas he’s introduced – the causal hierarchy, do-calculus, structural causal models – are absolutely fundamental to contemporary practice.

Tell us about something you are working on at the moment - what is exciting about it?

I recently wrote a paper with some colleagues at the University of Bremen introducing something we call “adversarial random forests”. Basically, we propose a new approach to generative modelling – the same class of algorithms that powers ChatGPT, DALL-E 2, etc. – specifically designed for unstructured, tabular data. Text and image data, by contrast, are highly structured.

I think the project is super exciting for a lot of reasons. Perhaps the most important is that the method is so fast and simple that practitioners with little to no machine learning expertise can easily use it on their own datasets. This will hopefully bring generative modelling into new domains such as healthcare and economics, where deep learning approaches are often too complex and data-hungry for practical use.

Please give us an example of AI enhancing everyday life in 2023 that you particularly like.

I’m a big fan of text-to-image models such as DALL-E 2 and Stable Diffusion. I’ve never been an especially gifted artist, much to my chagrin, but I absolutely love playing with these tools to create new images and bring random ideas to life. For instance, we used a DALL-E 2-generated logo for our adversarial random forest software package.

What do you think is the biggest misconception people have about AI?

I’m very concerned about our tendency to anthropomorphise AI. I think that even when we “know” the model isn’t thinking or feeling things, we are still tempted into pretending that it is, especially with chatbots doing such a good job of mimicking human dialogue. This misconception can be ethically dangerous when models are deployed in high-risk settings such as healthcare and finance.

It’s important to remember that AI is just a tool, and the choice of how to use this technology is always up to humans. We can and should impose high standards for algorithmic fairness, accountability, and transparency.

In this story

Gerard Canal, Lecturer in Autonomous Systems

Michael Cook, Senior Lecturer in Computer Science

David Watson, Lecturer in Artificial Intelligence
