Does it really work? Judging the Impact of Interventions

Louise Quinn writes about the importance of rigour and honesty, when evaluating interventions.

by Shotton Hall Research School

Robust evaluation of the interventions we implement in school matters. Without this rigour, we can’t be sure if they ‘truly work’, or if they are good value for money. Done well, targeted interventions can have a real impact on pupil learning. However, despite good intentions, some interventions can have unintended negative effects. We need our interventions to work – or at least stand the best chance under the right conditions – because they often involve vulnerable pupils. And, if they don’t work, we need to stop doing them.

Quality evaluation also goes hand in hand with effective quality assurance processes. Robust quality assurance ensures that we know our interventions are being run with fidelity along the way. For this to happen, we need clearly defined indicators of success and failure. In a nutshell, this requires us to ask ourselves ‘what will it look like when it’s working really well?’ and, conversely, ‘what will it look like when it isn’t?’

These indicators are seldom generic – they require leaders to have a clear theory of how the intervention works, and of any unintended consequences it might have. What’s more, they require a real understanding of the intervention’s sequence of activities so we can identify appropriate evaluative milestones. Beyond anything else, this gives an intervention the best chance of success, because leaders can intervene early if they know that things aren’t quite right, rather than waiting until the end, scratching their heads and wondering why it didn’t work. Whereas evaluation weighs the pig, great quality assurance feeds it.

Just over two years ago, we developed a one-to-one reading diagnostic assessment to precisely identify the needs of our pupils. Our pupils’ overwhelming need was reading fluency, and the scale of the challenge was significant. We couldn’t use staff; we just didn’t have enough. So, we had to think creatively. A year – and a grant – later, we had designed a peer tutoring programme involving pupils reading aloud with a trained tutor from carefully curated non-fiction anthologies. We took the resourcing of the programme seriously, but we took its evaluation and quality assurance just as seriously.

We’re now in our second year of evaluating this intervention at scale across multiple schools using a series of mini randomised controlled trials (RCTs). In short, RCTs work by dividing a population in need of an intervention into two or more groups by random lot. One group receives the intervention being tested (the intervention group), while the other (the control group) receives a different one. Then, we measure the outcome for each group. For us, the intervention group receives peer tutoring during registration, whereas the control group receives their usual registration reading programme. We have two pre-defined outcome measures: a standardised reading test and a correct words per minute count.
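For readers who like to see the mechanics, here is a minimal sketch, in Python, of what random allocation and a simple outcome comparison look like. The pupil identifiers and the correct-words-per-minute gains are entirely invented, and this is not our actual analysis – just an illustration of the principle that chance, not judgement, decides who receives the intervention.

```python
import random
import statistics

# Hypothetical cohort: in practice, the pupils identified by the reading
# diagnostic as needing fluency support.
pupils = [f"pupil_{i:02d}" for i in range(1, 41)]

# Random allocation: shuffle the cohort and split it in half.
random.seed(2024)                        # fixed seed so the allocation is reproducible
random.shuffle(pupils)
midpoint = len(pupils) // 2
intervention_group = pupils[:midpoint]   # receives peer tutoring during registration
control_group = pupils[midpoint:]        # continues the usual registration reading

# Invented outcome data: gain in correct words per minute (CWPM) over the trial.
cwpm_gain = {p: random.gauss(8, 4) for p in intervention_group}
cwpm_gain.update({p: random.gauss(5, 4) for p in control_group})

intervention_mean = statistics.mean(cwpm_gain[p] for p in intervention_group)
control_mean = statistics.mean(cwpm_gain[p] for p in control_group)

print(f"Intervention mean CWPM gain: {intervention_mean:.1f}")
print(f"Control mean CWPM gain:      {control_mean:.1f}")
print(f"Estimated effect:            {intervention_mean - control_mean:.1f}")
```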

Over time, each trial will be aggregated into a cumulative meta-analysis so we can ascertain if it actually works. And the ‘if’ matters here, because we make so many assumptions about interventions working, especially when we’ve designed them ourselves. This cognitive bias – known as the ‘Ikea Effect’ – makes sense. We also have to be mindful of the assumption that it will work simply because we want it to. Such assumptions can be dangerous: neither desire nor assumption equals impact. That’s why evaluation matters – and we need to be open to learning that our interventions might not work, even if we’ve poured everything into them. Rossi’s ‘Iron Law of Evaluation’ (1987) offers some practical, if disheartening, wisdom here: assume our interventions won’t work and don’t let unconscious bias creep in.
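To illustrate what aggregating trials into a cumulative picture can mean in practice, the sketch below pools effect sizes from several trials using inverse-variance weighting (a simple fixed-effect model). The trial names, effect sizes and standard errors are invented, and a fixed-effect model is only one possible assumption – our own synthesis may well differ.

```python
# Invented effect sizes and standard errors from three hypothetical mini trials.
trials = [
    {"name": "School A, 2023-24", "effect": 0.12, "se": 0.20},
    {"name": "School B, 2023-24", "effect": 0.25, "se": 0.18},
    {"name": "School C, 2024-25", "effect": 0.05, "se": 0.22},
]

# Inverse-variance weighting: more precisely estimated trials count for more.
weights = [1 / (t["se"] ** 2) for t in trials]
pooled_effect = sum(w * t["effect"] for w, t in zip(weights, trials)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled_effect:.2f} (standard error {pooled_se:.2f})")
```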

So, what lessons have we learnt after last year’s evaluation? Whilst the programme shows early signs of promise, we’ve learnt that, unsurprisingly, the tutor is the intervention. The tutees who made the most progress had the best tutors. Using the ‘test, learn, adapt’ approach advocated by the Behavioural Insights Team (2012), we’ve fed that learning into the programme and thought hard about tutor development. 2024–2025 is the ‘year of the tutor’, and tutors are the main focus of our quality assurance.

And the biggest lesson of all? Robust evaluation matters. However well intentioned, not every intervention works. Knowing what not to do is of equal importance.
