30 November 2022

Evidence, Respect and Truth by Dr Liat Levanon

Data and algorithms are playing an increasing role in criminal justice systems around the world. A new book examines whether we can rely solely on statistics in the pursuit of justice.

Dr Liat Levanon

In her new book, Evidence, Respect and Truth: Knowledge and Justice in Legal Trials, Dr Liat Levanon, Senior Lecturer in Law, argues that the notion of respect underpins the searches for justice and for human knowledge, and asks if conclusions based on statistical evidence are sound.

Rinku Yunusa, from the Law Communications team, caught up with Dr Levanon, to find out more.

Can you provide a summary of what the book is about? 

The book is about the 'big data' revolution and its implications for the judgements we make about individuals, especially, but not only, in the legal system.

Our ability to gather and process huge amounts of data, and to then explain individual cases and make predictions about individual cases based on this data, has grown significantly.

This means that today, when we make judgements about individual cases, we have much more evidence available to us than we used to, which is a good development.

But this evidence is also different from the small-scale data we have been used to, and it is being met with a different, and sometimes less than enthusiastic, response. Recall, for example, the negative reaction to the attempt to predict students' grades using algorithms during the Covid-19 pandemic.


Therefore, to be able to work with big data properly, we need to understand the difference between big data and the more traditional data we have been used to handling, and we need to consider the implications of this difference for various contexts of decision-making. This is what the book is about. The book focuses on the legal context, because the legal context provides a good laboratory for exploring these questions. 

What motivated you to write the book? 

I felt, like many others, that there was a dissonance between what seemed the reasonable thing to do, namely to use big data without reservation in all contexts of decision-making, and what most of us would end up doing, namely to look for ways not to decide exclusively on such data.

And dissonance is a very interesting phenomenon; it often implies that under the surface there are some reasons that we have not yet recognised. I was curious to see what these reasons might be, and how they could instruct us in different contexts of decision-making. 

What makes the law particularly well equipped, or not, to deal with issues in relation to ‘big data’?

The law is complex enough to bring to the surface the problems with big data: lawyers don't just seek to come as close as possible to the truth, as big data allows us to do.

Lawyers seek to uphold respect in situations of serious conflict in which respect is challenged; this involves doing justice, and, for reasons explored in the book, it also involves gaining proper human knowledge of the truth. The trouble is that once these are our aims, stand-alone big data cannot help us; the book shows that individual judgements based on big data are often inconsistent with the demands of respect and, relatedly, with the searches for justice and for human knowledge. Equipped with this thought, the book explores the contexts in which big data can be extremely helpful, and those in which it reaches its limits.

Do you think there is an inherent contradiction between value-led systems and algorithmic data?

I think it depends on the values that underlie the system. In some systems, the underlying values have nothing to do with respect. If, for example, a system is entrusted with increasing citizens' welfare beyond what they deserve (as persons or for other reasons), respect is not at play at all: our decision is based only on the value of increasing welfare beyond what respect demands. In such a system, it would therefore be perfectly legitimate to make decisions based on algorithms.

But there are also systems with different underlying values, where respect is in fact at play. Think, for example, of systems that decide who should enjoy a minimal level of welfare. In such systems, respect is at play. And when respect underlies the system, exclusive reliance on algorithms becomes problematic.

It’s interesting that you outline potential anomalies and inconsistencies with the use of algorithms when discussing respect, a value which underpins the concepts of human knowledge and justice. What role do you think algorithms, tech and AI will have in future decision making in relation to the search for justice and knowledge, and what issues do you foresee with this approach? 

I think that there is a strong push towards unconstrained use of AI and algorithms in decision-making, and we might find that AI becomes our default decision-maker in a wide range of contexts. We are not there yet, because the technology is still limited, but the day might well come. And I think that in some contexts - especially those concerned only with accurate predictions - this could have incredible advantages. Take the fight against climate change, for example, under the assumption that we could have a good predictive AI-based model. But in other contexts that are governed by a complex set of values, relying exclusively on AI can be problematic; it can be inconsistent with those underlying values. The legal context is one such context, and this is what the analysis in the book seeks to highlight.

And finally, in an ideal world the readership would have as broad a reach as possible, but who is this book intended for?

The book is intended primarily for lawyers who think about and work with evidence; and I think that it can also be read 'from the outside' by anyone who thinks about evidence-based decision-making and would like to observe the law as an example.