11 Jan 2019

Evaluation: Impact and Process

It's like baking a cake... hopefully!

As we prepare to welcome Professor Stuart Kime back to Norwich to deliver his popular one-day Evaluation: Impact and Process training, Ali Banks tells us what she learned when she attended the course.

I’ve recently been appointed to the role of Teaching School Lead at Notre Dame High School, and it’s easy to feel overwhelmed when faced with conducting new projects and having responsibility for reporting on them. So the course Evaluation: Impact and Process with Professor Stuart Kime from Evidence Based Education was perfectly timed. A group of us were taken through the processes of implementation and evaluation on a one-day course at the start of September. All we had in common was that we were interested in, or already carrying out, evidence-based interventions in our different schools. By the end of the day, we were all sold on the importance of good implementation, and keen to apply what we’d learnt.

Here are the key points I took from the day:

1. If what you are doing is making a demonstrably positive impact on pupils, then there is no need to evaluate it. Keep doing it: you are making a difference. If you are not sure whether it is making an impact, then you need to evaluate it.
2. There are two types of evaluation, both of which are needed to inform your decisions: impact evaluation (what happened); and process evaluation (how it happened, aka what you did).
3. It’s ok if your intervention doesn’t have a positive outcome – it’s good to know either way. I learned a new word here: equipoise, the state of balance in the person carrying out the evaluation, which describes the ability to take an objective view of the intervention and determine whether it has made a positive impact, a negative impact, or no impact at all.
4. Bias is inevitable, but there are ways we can reduce its impact, in order to produce meaningful data. If we are putting effort into trying something, we want it to work – so it is easy to become biased unintentionally. One key way to reduce bias is to use random allocation when creating your intervention group and control group.
5. Using a control group is ok! When Stuart put this to the group, people were initially unnerved by the idea of certain pupils ‘missing out’ on an intervention. However, when asked ‘how often do you try something new or different with a single class?’, we realised that we do this all the time. He also asked us to consider which is better: adopting a method across the whole school in the hope it works (when it may not); or rolling it out with one class, actively evaluating whether it is making a positive impact, and then, if it is, rolling it out across the school.
6. Know before you start how you will measure success or failure. Know what method you will use, how and when it will be administered, and how outcomes will be scored. Importantly, if there is an effective evaluation method that you already use in school, use it! You wouldn’t change a really good cake recipe would you?
7. Always do a pre-mortem. Anything that can go wrong will go wrong. Have plans B through to Z and beyond, and if someone spots a possible fatal flaw in your initiative, welcome it and build it into your plan.
8. Remember the limitations of the project (and of evidence-based practice in general). Something that works in one school setting will not necessarily work in yours, so always be aware of your context. Even a perfect cake recipe can fail: the oven could be at the wrong setting, or the batter might have sat out too long before it went in the oven. If you can unpick why it didn’t work and what you have learned from it, then you will know for the future.
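The random allocation mentioned in point 4 is simple to do in practice. As an illustrative sketch only (the names and function are my own, not from the course), a few lines of Python can shuffle a class list and split it into intervention and control groups:

```python
import random

def randomly_allocate(pupils, seed=None):
    """Split a list of pupils into intervention and control groups at random."""
    rng = random.Random(seed)   # a fixed seed makes the allocation reproducible
    shuffled = pupils[:]        # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (intervention, control)

pupils = ["Amy", "Ben", "Cara", "Dev", "Ella", "Finn"]
intervention, control = randomly_allocate(pupils, seed=1)
```

Because every pupil has the same chance of landing in either group, this is one straightforward way to reduce the unintentional bias Stuart warned us about.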

Stuart made what could have been a very dry and complex subject both interesting and accessible, and I came away feeling that a difficult job now seemed achievable. This is no mean feat. I shall now go forth and bake my cake!

If you would like to attend this course in February, you can book here.