
A window of opportunity: designing AI systems that augment the multiple facets of professional judgment

Artificial Intelligence and Technology in Law
Professor Sylvie Delacroix

Director of the Centre for Data Futures

15 December 2025

Professor Sylvie Delacroix reflects on recent publications challenging the ‘regulation-first’ narrative in healthcare AI.

Something shifted for me during a recent conversation with GP colleagues, who described the following (authentic) scenario:

A GP I'll call Emily turns to ChatGPT on her way home to process a difficult patient interaction: she has the nagging feeling that maybe she overlooked something but can’t quite put her finger on it. She doesn’t want to bother her colleagues, who are overwhelmed with their own caseloads. Through this LLM-facilitated conversation, Emily teases out the intuitions underlying her sense of unease and feels more confident moving forward.

Emily's experience crystallises a concern I have been wrestling with for some time. Human-computer interaction research has long been dominated by a narrow understanding of (human) intelligence, focused on its deliberative components. Within value-loaded practices such as healthcare, education and justice, however, the intuitive underpinnings of professional judgment play an important role. They are crucial to these practices’ ability to evolve in light of emerging needs and aspirations. Sense-making conversations such as Emily’s are key to the ongoing refinement of those intuitions. If LLMs are going to be used as ‘sense-making’ partners (and not just fancy information providers), shouldn’t we consider their impact on those non-deliberative, intuitive aspects of our intelligence?

The timing problem

This realisation shaped two recent publications: one in BJGP Life, as well as contributions to a BMJ series on generative AI in clinical consultations. It also led my colleagues and me to challenge the editorial that accompanied that series.

The editorial emphasised that ‘successful implementation depends critically on robust governance frameworks’. Because it positions professional communities as passive recipients awaiting regulatory protection, I've come to see this regulation-first framing as problematic. By the time the (otherwise necessary) regulatory frameworks arrive, commercial incentives are likely to have frozen design choices.

Design choices still available

In our BMJ series contributions, my co-authors and I introduce the concept of ‘triadic care’ where clinicians, patients, and AI jointly shape clinical encounters. This framing helps surface design principles that remain achievable if professional communities engage now:

Tinkerability: Interfaces that let professional communities experiment with system configurations and adapt them to their needs. These configurations include not just preference settings, but parameters governing how uncertainty is communicated and information prioritised. Highlighting an output’s potential incompleteness matters most when an LLM is interrogated to evaluate the relative salience of diagnostic tools; in a ‘sense-making conversation’ like Emily’s, by contrast, what becomes central is the manner in which an LLM invites further reflection (potentially through ‘humility markers’).

Co-evolutionary development: Iterative refinement enabling both systems and professional practice to adapt together, rather than optimising AI to match current practice or forcing practice to conform to predetermined systems.

Why this matters beyond healthcare

Colleagues working in legal and educational contexts describe parallel challenges. Judges navigate interpretive uncertainty. Teachers assess not just what students know but how they engage with ideas.

The common thread: practitioners who otherwise actively shape their substantive work somehow accept passivity regarding tools reshaping their practice. We critique poor implementations but rarely articulate positive visions for alternative designs.

The window is closing

I don't know how long we have before standardisation locks in design paradigms. What I do know is that the ‘wait for regulation’ narrative risks serving interests other than those of professional communities. There is no doubt that regulation cannot fix tools built on impoverished conceptions of the multiple facets of professional intelligence. If we want AI that augments rather than diminishes situational awareness, we need to demonstrate what that looks like now.

Emily's conversation with ChatGPT on the way home worked because the system communicated uncertainty in ways that supported her reflection. We could make such features deliberate: designing for the art of navigating uncertainty rather than its elimination. We could build systems that enhance intuitive judgment rather than demanding its articulation.

These possibilities require professional communities to recognise their agency in shaping these tools while they are still plastic enough. The stakes are too high for another round of retrospective hand-wringing about how commercial pressures shaped technology against professional and patient interests.

