In Conversation with: Professor Sandy Wells

This month the School of Biomedical Engineering and Imaging Sciences has been delighted to host Professor William M. Wells III (aka Sandy Wells), Professor of Radiology at Harvard Medical School and Brigham and Women’s Hospital, Member of the Affiliated Faculty of the Harvard-MIT Division of Health Sciences and Technology, and research scientist at the MIT Computer Science and Artificial Intelligence Laboratory. Professor Wells and his groups have made seminal contributions to the segmentation of MRI and to multi-modality registration. He was also involved in early work on intra-operative MRI and in the development of the 3D Slicer software package.

Could you start by giving us an overview of your career?

I’ve been working in medical image analysis for 25 years; I was one of the early arrivals from the field of computer vision, back in 1993. Once I finished my PhD I secured a postdoctoral position at Brigham and Women’s Hospital, just as the field really took off. Although I think it was partly a matter of being in the right place at the right time, it allowed me to work through an exciting and transformative period in the field.

What current clinical challenge are you working on?

My primary appointment is in the MRI division of the Radiology department at the Brigham, where our research focus is more on intervention than diagnostics – we do a lot of work in image guidance, with an emphasis on neurosurgery and prostate procedures.

In fact, we were among the first sites to have MR machines adopted for real-time use in surgery, and we have maintained that theme ever since. As interventional MRI has proven too costly for widespread adoption, we have been working on alternative methods for bringing pre-operative MRI into surgical procedures.

We’re now working with industry-standard guidance systems produced by companies like BrainLab, but one of the current challenges we’re facing is improving image registration and its ability to compensate for tissue deformation. We use pre-surgical scans to identify targets and plan interventions, and image registration is the tool that links the guidance systems to that data. However, when tissue is removed during a procedure the anatomy changes, which can degrade the quality of the registration.

I’ve been very focused on this challenge for a number of years, and we’re currently exploring the use of intra-operative ultrasound for estimating “brain shift” in neurosurgery. 
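For readers curious what multi-modality registration looks like in practice, below is a minimal sketch of rigid registration by maximisation of mutual information – the class of method Professor Wells helped pioneer – using the open-source SimpleITK library, which shares its Insight Toolkit roots with 3D Slicer. The file names and parameter values are illustrative assumptions rather than the Brigham’s actual pipeline, and a rigid transform alone cannot capture the brain shift discussed above; that is precisely where non-rigid models and intra-operative imaging come in.

```python
# A minimal sketch: rigid multi-modality registration by maximising
# Mattes mutual information with SimpleITK. File names, modalities and
# parameter values are illustrative assumptions only.
import SimpleITK as sitk

# Pre-operative MRI (fixed) and intra-operative ultrasound (moving).
fixed = sitk.ReadImage("preop_mri.nii.gz", sitk.sitkFloat32)    # hypothetical path
moving = sitk.ReadImage("intraop_us.nii.gz", sitk.sitkFloat32)  # hypothetical path

registration = sitk.ImageRegistrationMethod()

# Mutual information copes with the very different intensity
# characteristics of MRI and ultrasound far better than simple
# intensity-difference metrics.
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration.SetMetricSamplingStrategy(registration.RANDOM)
registration.SetMetricSamplingPercentage(0.2)

# A rigid (rotation + translation) transform, initialised by aligning
# the geometric centres of the two volumes.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
registration.SetInitialTransform(initial, inPlace=False)

registration.SetInterpolator(sitk.sitkLinear)
registration.SetOptimizerAsGradientDescent(
    learningRate=1.0, numberOfIterations=200)
registration.SetOptimizerScalesFromPhysicalShift()

transform = registration.Execute(fixed, moving)

# Resample the ultrasound into the MRI frame so a guidance system could
# overlay the two; 0.0 fills voxels that fall outside the moving image.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "intraop_us_aligned.nii.gz")
```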

What brings you to King’s?

Whilst a month may not sound like long, we are working to establish more long-term collaborations between King’s and the Brigham, hopefully supported by some joint grants. We have a long-standing history with many of the academics here, and I hope we can work together to help solve registration problems on your large-scale projects, such as image-guided neurosurgery.

How closely do you work with Brigham and Women’s Hospital and how important is this for your research?

The Surgical Planning Laboratory is in prime clinical space in the main thoroughfares of the radiology department, so I would say very closely. It’s been very important for our work over the years and the reason our lab has been able to stay relevant. In simple terms – we’re working on problems that the clinicians care about. 

That said, I do think it’s crucially important to make that relationship even stronger and more widespread across MIT and Harvard, to improve both technical and clinical feedback.

I think your School has such an opportunity with its location in St Thomas’ Hospital. The senior level integration between technical and medical specialisms is really conducive to progress in medical AI, ensuring that the big data sets from the clinic can be exploited by the technical talent – I’m very interested to see how that works out for your research here. 

What do you envisage the next ‘big’ technical developments will be in healthcare?  

We’ve just spoken about AI, but there has also been a lot of progress in the basic technologies supporting robotics, and these advances are making it much more feasible to incorporate robotics into surgical devices.

However, the two are certainly interlinked, and any big breakthroughs in minimally invasive surgery will come from combining AI and robotics to achieve unprecedented results. The flow of information we now have to train AI will allow us to incorporate the genetics of patients, their tumours, and many other complex biomarkers, which will personalise surgery in a way that will change its nature. We’re already beginning to use mass spectrometry in neurosurgery at the Brigham. Biomarker feedback at the molecular level, combined with the distilled knowledge from millions of patients’ data, will enable new levels of precision medicine. I think we will move beyond current image-guided therapy to a more general information-guided surgery, ultimately with AI inference systems working alongside surgeons.

Any last thoughts for our readers? 

Whilst it’s no doubt exciting to be working through such an experimental time in AI and machine learning, we are moving towards the stage where we need to underpin this with robust intellectual foundations in academia. I’m beginning to work with Dr. Jorge Cardoso on developing a curriculum for a master’s degree specialisation in Artificial Intelligence and Clinical Data Science, which will teach core foundation modules in areas such as probability theory. This is something we must begin investing in to ensure that we train the next generation of specialists in this area as effectively as possible.