Insights into assessment from ‘What Does This Look Like in the Classroom?’

What Does This Look Like in the Classroom? Bridging the Gap Between Research and Practice by Carl Hendrick and Robin Macpherson is a must-read for any teacher wanting to become more research-informed. With chapters on topics like literacy, behaviour, motivation and technology, the book features a series of interviews with notable education researchers, theorists and practitioners. Particularly fascinating is the chapter on assessment, marking and feedback, which presents the thoughts of Dylan Wiliam and Daisy Christodoulou, both doyens of assessment theory and research.

Highlights from Dylan Wiliam:

“Make feedback into detective work.”

Effective feedback causes students to respond to and think about the task. This makes them more likely to take the feedback on board – which is why it is not helpful to simply present the feedback and then move on. This should happen in lesson time so that the teacher can provide support. Tasks might include:

  • ‘You have made 4 apostrophe errors. Find them and fix them.’
  • ‘One of your three answers is wrong. Work out which one.’
  • ‘I have highlighted your mistakes. Work out why they are mistakes.’

“…the major purpose of feedback should be to improve the student.”

Often feedback is directed at the work rather than at the underpinning learning. Ultimately, the goal of feedback is to ensure that students ‘do a better job next time they are doing a similar task’. If an improvement task involves improving or embellishing a piece of work, it will only be effective feedback if it also gives the child some genuine, durable learning to take into the future.

“Just one sentence explaining to students why feedback was being given made a huge difference to their achievement.”

Drawing on research by Yeager and colleagues, Wiliam stresses the importance of explaining the reason for giving feedback, especially when it could be perceived as critical. Teachers should:

  • make students aware of the high standards they are being held to;
  • make students aware that they themselves can achieve these high standards;
  • draw attention to the improvement students make as a result of feedback.

“I once estimated that, if you price teachers’ time appropriately, in England we spend about two and a half billion pounds a year on feedback and it has almost no effect on student achievement.”

While feedback can have a transformative effect, this is not true of all feedback. In fact, some studies suggest that certain types of feedback can have a significant negative effect on learning. There is also scant evidence that the large quantities of written feedback many teachers give to students are effective – see this EEF report for more. School leaders need to think carefully about the marking and assessment policies they put in place, and should evaluate how effective those policies actually are.

“I think teachers should be spending twice as much time planning teaching as they do marking.”

This is a useful rule of thumb for schools. It is quite possible that in some schools the opportunity cost of the time spent marking is impeding, rather than accelerating, learning. Perhaps school leaders should be asking these questions:

  • Do you know the proportion of time teachers at your school dedicate to marking?
  • If not, how can you find out?
  • If the proportion is too high, what can you do about it?

“When you find out that something you thought was correct is in fact incorrect, the more confident you are that you are correct, the bigger the improvement in the change to your thinking.”

The hypercorrection effect is a fascinating psychological quirk: errors held with high confidence are, once corrected, better remembered than errors held with low confidence. Regular retrieval practice through quizzing and testing is known to boost learning, and if students are encouraged to mark their own tests (directed by a teacher, of course), the hypercorrection effect can make these tests even more effective.

“The research that Paul Black and I reviewed suggests that the biggest improvements in student learning happen when teachers use assessment minute-by-minute and day-by-day as part of regular teaching, not as part of a monitoring process.”

Ultimately, the evidence for effective formative assessment shows time and again that the most powerful assessment data is data that can be acted upon to improve learning. Knowing you are a 4 in maths and need to get a 5 is about as useful as knowing that the French speak French without knowing a single word of the language. Assessment is more effective when it is driven by the learning needs of students rather than by the needs of a school’s data-gathering system.

Highlights from Daisy Christodoulou:

“… I think that value-added is a really powerful tool. I think it’s probably most helpful to look at it at the cohort level and not at the individual level …”

All test data contains some degree of error, which means that, inevitably, baseline data for some students will be inaccurate. We must therefore be careful about the assumptions we make about an individual student’s progress based on this data. Yes, a student might be genuinely underachieving or overachieving – but in some cases the apparent difference will be caused by anomalous data. At the cohort level, however, such inconsistencies usually cancel out, and we can make better inferences about the extent to which we are adding value. Once again, this calls for school leaders to adopt a nuanced approach to assessment data.
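Why errors cancel at the cohort level can be illustrated with a short simulation – a hypothetical sketch, not something from the book, and the true progress figure, noise level and cohort size are all invented for illustration:

```python
import random

random.seed(42)

TRUE_PROGRESS = 0.5   # invented "true" value-added, identical for every student
NOISE_SD = 1.0        # invented measurement error in each student's test score
COHORT_SIZE = 200

# Each student's measured progress = true progress + random test error.
measured = [TRUE_PROGRESS + random.gauss(0, NOISE_SD) for _ in range(COHORT_SIZE)]

# At the individual level, error dominates: a single student can even
# appear to have gone backwards despite making genuine progress.
print(f"One student's measured progress: {measured[0]:+.2f}")

# At the cohort level, the random errors largely cancel out, so the
# average lands close to the true value of +0.50.
cohort_mean = sum(measured) / COHORT_SIZE
print(f"Cohort average measured progress: {cohort_mean:+.2f}")
```

Averaging over a cohort shrinks the random error roughly in proportion to the square root of the cohort size, which is why cohort-level inferences are far more trustworthy than individual ones.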

“… the role of the teacher in curriculum designing and the textbook should be to break down what people need to do in the exam and to teach those component parts instead, rather than always focusing on their past papers and on the questions.”

The problem with using past papers too early is that they are designed as assessment tools rather than teaching tools. How students learn is not always analogous to how they are assessed. For instance, the English language GCSE assesses a huge domain of implicit knowledge – including features of written genre, structure and language. These features need to be identified and taught in discrete component chunks, possibly over a number of years. Christodoulou argues that introducing the exam too early is counterproductive, as it is likely to lead teachers to overlook the depth and breadth of knowledge needed to master the subject. While exam practice is crucial, it should not come at the expense of deep, wide-ranging and finely tuned subject knowledge and skill.

“… the purpose of a summative exam is not to be diagnostic.”

Summative GCSE exams are not designed as diagnostic tools. For example, if a student performs well on a GCSE English literature mock exam question on the character of Mr Birling from J.B. Priestley’s An Inspector Calls, we can only infer that this student knows a lot about Mr Birling. It does not show us how they would have written about another character – Mrs Birling, Inspector Goole or Gerald Croft, for instance. In other words, the exam samples the domain but does not give us all the information we need. Other forms of assessment, such as in-lesson quizzes, short-answer questions and essay plans, can give us a more accurate picture of the breadth and depth of a student’s learning. There is a need, therefore, for schools to be more creative in developing faster, more efficient and more precise formative assessments.

Further reading:

Embedded Formative Assessment, Dylan Wiliam

Making Good Progress? The Future of Assessment for Learning, Daisy Christodoulou
