

The Trouble with Giant Ants and the Pitfalls of Scaling Up Evidence

Why successful programs don’t always work at scale

In the 1954 film 'Them!', the world is plagued by giant irradiated ants. In case you were worried that this could ever happen, fear not: an ant that reached this size could not survive for long. Ants 'breathe' through spiracles, and as an ant's size increases, these spiracles do not have enough surface area to take in sufficient oxygen. At a much larger scale, the weight on an ant's legs would also exceed its ability to bear the burden. We hope that's reassuring.

When it comes to scaling up evidence, there can be similar problems. In this post, we look at some of the difficulties that arise when increasing the scope of something that has been seen to work on a smaller scale, drawing on EEF evaluation reports of scaled-up projects that had shown promise in efficacy trials.

People
At scale, we run into the problem that some interventions rely on the expertise of the people who first conceived or delivered them. When more people are brought in, it can be difficult to ensure that everyone shares the same understanding and approach.

The IPEELL strategy aims to help pupils plan, draft, write and revise. An efficacy trial in 2013/14 showed a large improvement in writing scores – a massive +9 months' progress. In November 2018, an evaluation report was published looking at a roll-out of a scaled-up version of the project. While this still showed some promise, it was not seen to be as effective. One possible explanation lay in the training of the staff who had to deliver the intervention.

In the scaling up there was a need to train more people to deliver the intervention, so a 'train the trainer' model was adopted – a change from the original efficacy trial. The evaluation states: 'we recommend that having an experienced trainer who has seen the intervention be delivered and can share practical examples with the teachers from experience would be beneficial in the future roll-out of the cascading training model. It would also be useful for the trainers to watch an experienced trainer train the teachers first.'

If you are planning to develop something to work at scale – whether across a school, a local authority or a MAT – we recommend spending time designing the training to ensure the delivery is as intended.

Fidelity
Following a successful efficacy trial of Catch Up Literacy, where the EEF noted an impact of +3 months' progress, the scaled-up version showed less promise. One of the conclusions of the evaluation report was as follows: 'The intervention was not always delivered as intended. Some schools struggled to resource two one-to-one sessions per week, while in other schools TAs adapted how they delivered individual sessions from what they were taught in the training.'

In the EEF's 'A School's Guide to Implementation', they mention 'active ingredients', the set of well-specified features that are crucial to the fidelity of an intervention: 'When preparing for implementation, try and distil the essential elements of the programme or practice, share them widely, and agree them as non-negotiable components that are applied consistently across the school. For example, if the intervention is focused on developing classroom teaching, capture the key pedagogical strategies and behaviours that will reflect its use. There may be some key underlying principles that you also want to specify and share.'

Another 'promising project' was Grammar for Writing. In the scaled-up version, 'The fidelity assessment focusing on two of the core principles of the programme – "pupil discussion surrounding decisions and choice-making" and "connections made between grammar and effect" – suggests that these approaches were compromised during the programme delivery.'

Some local adaptation is necessary and often beneficial when scaling up, but if we lose sight of the active ingredients, then there is a likelihood that things will not work as intended. (Here is our blog post on active ingredients.)

We have concentrated on things that changed when interventions were delivered at scale, but each of the three evaluations mentioned above started with an efficacy trial under best conditions, and we would recommend reading these before rejecting any of the interventions:

Catch Up Literacy

Grammar for Writing

IPEELL: Using Self-Regulation to Improve Writing

If you are a school leader considering how best to use evidence to improve your practice, you could sign up to our Leading Learning course. Details here.

More from the Bradford Research School