21 May 2025

Our new study shows that RCTs can be conducted cheaply – but there are lessons to learn

Michael Sanders, Julia Ellingwood, Simon Bird, Isobel Harrop

Cheaper trials might not be as rigorous, but they can still provide valuable policy insights without breaking the bank

One of the most common criticisms of randomised controlled trials (RCTs) is that they are time-consuming and expensive, and that stretched public services with scarce budgets cannot afford this expense.

This is a critique to be taken seriously – every pound that is spent running an RCT is a pound that cannot go into running those public services. Every thirty or forty thousand pounds we spend on research is equivalent to a teacher’s salary. These are real costs, and we should not take them lightly.

Now, of course, there are strong arguments to the contrary – not least that research is skilled work, and that employing researchers is a good thing. Moreover, failing to fund research into what works is a good way to find yourself, 10 years from now, still complaining that there isn’t any evidence. There is a false economy in saving money on research that would tell us where to put our marginal pound, and what we can safely stop doing.

Nonetheless, it is incumbent on us to run trials as cheaply as we are able (and no cheaper). This has been the focus of a project over the last couple of years, funded by the Cabinet Office, looking into how we can conduct RCTs more quickly and cheaply.

One project funded through this approach was a study we’ve just published with a local authority in England, who were looking to bolster their early years support in response to some disappointing figures on school readiness in their area. The study shows how cheaper RCTs can be operationalised, while also demonstrating some key lessons to apply in future studies.

We worked with the local authority to design a randomised crossover trial, in which additional support – more time from speech and language therapists, social care, and so on – was provided to a randomly selected half of the early years settings during the first period of the trial, and then switched to the other half in the second period.
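The allocation step in a two-period crossover design like this one is straightforward to implement. The sketch below is illustrative only – the setting identifiers and the `crossover_assign` helper are hypothetical, not taken from the study – but it shows the core idea: randomly split the settings into two arms, with one arm receiving the extra support in the first period and the other in the second.

```python
import random

def crossover_assign(settings, seed=42):
    """Randomly split settings into two equal arms for a two-period
    crossover design: one arm receives the extra support in period 1,
    the other in period 2. A fixed seed keeps the allocation auditable."""
    rng = random.Random(seed)
    shuffled = list(settings)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "period_1": shuffled[:half],   # receive support first
        "period_2": shuffled[half:],   # receive support second
    }

# Hypothetical setting identifiers, purely for illustration
settings = [f"setting_{i}" for i in range(1, 21)]
arms = crossover_assign(settings)
```

Seeding the random number generator means the allocation can be reproduced and checked later – useful when a local authority, rather than a research team, is delivering the trial.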

The design of the study was simple, and easy to explain, and it was executed well by the local authority, who ensured the services were delivered where they were supposed to be.

So what did we find? Overall, we find no effect of the intervention on the outcome measures of interest – likely because the study was underpowered, with significant attrition at the endline data collection.

While many early years settings successfully provided baseline data and received the services at the planned time point, the missing data at the end of the trial limited our ability to detect a robust estimate of the intervention’s impact.
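The cost of attrition can be made concrete with a back-of-envelope power calculation. The figures below are illustrative assumptions, not numbers from the study: using the standard normal approximation for a simple two-arm comparison, the minimum detectable effect (MDE) grows as the sample shrinks.

```python
import math
from statistics import NormalDist

def mde_standardised(n_per_arm, alpha=0.05, power=0.8):
    """Minimum detectable effect in standard-deviation units for a
    two-arm comparison with equal arms, using the normal approximation:
    MDE = (z_{1-alpha/2} + z_{power}) * sqrt(2 / n_per_arm)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 at alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 at power=0.8
    return (z_alpha + z_beta) * math.sqrt(2 / n_per_arm)

# Illustrative only: attrition shrinking each arm from 50 to 20 units
# inflates the detectable effect from roughly 0.56 to 0.89 SD.
mde_before = mde_standardised(50)
mde_after = mde_standardised(20)
```

A crossover design complicates the exact calculation, but the direction of the result holds: losing endline data rapidly pushes the smallest detectable effect beyond anything a modest early years intervention could plausibly achieve.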

Here is where one of the key lessons comes in: we need to make better use of data that is already collected routinely, or we should invest in routine data collection which would give local authorities a better sense of what’s going on in their area, as well as making evaluation easier.

Nonetheless, we’ve demonstrated that randomised trials can be conducted cheaply, and with minimal input from researchers, if the local authority is committed to delivering the intervention anyway and is interested in learning about its impact. These cheaper trials might not be as rigorous as their more expensive kin, but they can provide valuable learning that informs policy and future research – without breaking the bank.

In this story

Michael Sanders

Director, School for Government

Julia Ellingwood

Research Fellow

Isobel Harrop

Research Assistant

Simon Bird

PhD Student