

“Yes, I’m pretty sure it worked.”

by Huntington Research School

“Yes, I’m pretty sure it worked, as most of them did get better.”

How many times have we as teachers and school leaders been asked about the impact of something in our classroom or school and answered like this? Has anyone ever actually challenged you further by asking, “How do you know?” Or, “Are you sure it was the result of what you changed or did differently?” Probably not many, I would guess…

This lack of thorough evaluation in the school system was illustrated to me in an SLT meeting a number of years ago here at Huntington. We were discussing our plans for supporting Year 11 students through the Spring term in light of the recent mock results. John Tomsett (Headteacher) asked our Subject Lead in Maths whether he thought we should continue the 1:1 support for a number of targeted students to improve their performance in Maths GCSE. The reply was quite positive: the SL Maths knew that quite a high proportion of the students involved had gone on to achieve a higher grade, so he said he thought we should continue. However, he was clearly a little uncomfortable with this as a conclusion, and then very honestly (and bravely, in my opinion) added, “To be honest, John, we really don’t know.”

For me, this was a pivotal moment in the school’s development with evidence-based practice. We couldn’t honestly say that it was the 1:1 intervention that had made the difference. As with many such interventions, it was well meaning and well thought out: we had put money into it as a school, the 1:1s were led by maths specialists, and they were targeted at the topics the students had struggled to grasp. So we really believed in it and wanted it to have impact.

In theory, students being in school should mean that they improve anyway; their class teachers were also facilitating targeted revision in class time, and hopefully the students themselves were doing at least some revision outside lessons too. Even if the 1:1 had impact, how much impact did it have compared to these other activities?

When we returned to the conversation a few weeks later, the SL Maths had a new proposal. He explained that he had selected all the students he wanted to target, but that he was going to start earlier than usual and run two rounds of intervention, with only half of the students receiving the intervention in the first round. He would give all the students a pre-test before round one began, and another test at the end of round one, when he would compare the progress of those who received the intervention (the treatment group) with those who didn’t (the control group).

If the treatment group showed greater progress, he would swap the students round and deliver the 1:1s to the control group as well. If it didn’t seem to be having any greater impact, we would have to re-evaluate the time, money and resources involved and decide whether it needed further testing or whether we should stop doing it and target the resources somewhere else.
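
To make the comparison concrete, here is a minimal sketch in Python of the kind of calculation this design supports. The scores below are illustrative placeholders, not data from Huntington, and a real evaluation would also need to account for group sizes and statistical uncertainty before drawing any conclusion.

```python
# Illustrative pre-test and post-test scores (placeholder data only).
treatment = {  # students who received the 1:1 intervention in round one
    "pre":  [32, 41, 28, 35, 39, 30],
    "post": [40, 47, 33, 41, 44, 38],
}
control = {    # students waiting until round two
    "pre":  [33, 38, 29, 36, 40, 31],
    "post": [36, 41, 31, 39, 42, 33],
}

def mean_gain(group):
    """Average improvement from pre-test to post-test for one group."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

treatment_gain = mean_gain(treatment)
control_gain = mean_gain(control)

print(f"Treatment group mean gain: {treatment_gain:.1f}")
print(f"Control group mean gain:   {control_gain:.1f}")
print(f"Estimated intervention effect: {treatment_gain - control_gain:.1f}")
```

Because both groups sit the same pre-test and post-test over the same period, the control group’s gain shows what “getting better anyway” looks like, and the difference between the two gains is a fairer estimate of what the 1:1 intervention itself added.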

This is just one model you could use in school to more accurately assess the outcome of an intervention. In part 2 of this blog, in next month’s newsletter, I will explore a number of other aspects of evaluation and further evaluative tools, including the need to plan evaluation right from the start.
