Key Pillars of Assessment

What does effective assessment practice really look like?

by Research Schools Network
Following the period of remote lockdown learning, the return of classroom teaching has certainly brought a wave of reflection for school staff across the nation. For some, this wave might be more pertinently described as a tsunami, as the number of questions grows to an overwhelming point. Ranging from reflective questions about remote teaching to evaluative questions about the impact of lockdown on mental health, schools are searching for the answers that will enable them to make the best possible decisions for their students. But how? With such variance in student attainment, well-being, and engagement after lockdown, it is unsurprising to see an equal level of variance in how schools are responding.

In addition, the ever-shifting alternative to GCSEs has only put more pressure on schools to identify their most disadvantaged students, whether academically or emotionally. The media have unhelpfully perpetuated an atmosphere of panic and dread, with language implying that the lockdown has caused irreparable damage. Despite the detrimental influence of the national press, our language needs to be more positive, focussing on what is in our control: improving student outcomes. The need to diagnose and identify priority areas for learning is more relevant than ever before, if only to allow students the opportunity to put their ‘best evidence’ forward.

More questions. What does ‘best evidence’ actually mean? How do we get it? How can we support students to produce it? It seems as though there are obvious answers to this series of questions, but think harder. At the best of times, achieving consistency within assessment practice is difficult, if not impossible, whether as an individual or as an entire nation. As Prof. Rob Coe quite aptly put it, assessment is ‘one of those things that you think you know what it is until you start to really think hard about it’. With this in mind, how are we able to ascertain which students deserve what grade when the purpose of mock exams or class assessments suddenly transforms? Previously, the purpose of in-school assessments was to inform leaders and teachers about student performance, with claims focussed on targeted improvements before the official GCSE exams.

Now, the purpose has mutated. The rush for evidence has revealed a mountain of inconsistencies and discrepancies within our assessment practice, not only for individual teachers looking to assign a mark for deserving students, but also for entire cohorts who are looking to standardise their assessment scores.

Even though grade boundaries are usually set for subjects after the national examinations have been completed and analysed, schools are now having to assign grades without this bigger picture. To what extent, then, are we able to accurately claim that a student deserves a certain grade, if grade boundaries change in light of the exam’s difficulty and students’ performances that year? We cannot.

But what choice do we have? This year, we must accept that a process has been decided. Yet, this does not mean that we cannot continue to improve our assessment practice. There are three key areas that your school might focus on, not just this year, but in the future.


1. Purpose

The purpose of each assessment can vary greatly. Historically, assessments have tended to focus on student performance; strictly speaking, they look to ascertain the quality of knowledge application in response to an unseen task. But now, assessment practice has been altered, not only through the removal of unseen elements, but also through the ways in which teachers apply grades to individual pieces of work, often without moderation or a national perspective. Therefore, it is imperative to consider:

What concepts from the subject curriculum is the assessment focussed on?
What is the teacher going to do with the assessment information?
What methods of feedback will the teacher use?

Identifying the purpose of each assessment, and what it will be used for, is essential.


2. Validity

An assessment is not inherently valid or invalid; it is about the validity of subsequent claims. For example, in English, a teacher cannot make a valid claim about a student’s understanding of Macbeth based on one extract question about a specific character. In Science, a teacher cannot make a valid claim about a class’ understanding of electric circuits based on a question to label a diagram. It is important to consider the validity of the claims based on the purpose of the assessment. Consider the following questions:

What valid claims can be made following an assessment?
What claims might be invalid?
What will be used to translate the performance into data?

It is crucial to evaluate the validity of claims, and whether the data truly reflects student performance.


3. Value

Assigning a value to an assessment means balancing time taken against information quality. If an assessment’s aim is to measure an entire unit of teaching and learning, then how long will it take to create, to administer, and to mark? Alternatively, how much value is there in an assessment that consists of one question, taking two minutes to complete? The value of an assessment is about finding the ‘goldilocks level’ of efficiency and effectiveness. With this in mind, consider these questions:

What is the time-scale of each assessment from creation to feedback?
What is the quality of information gained from each assessment?
What is the impact of the assessment on student learning?

It may be useful to prioritise information quality initially, with frequent evaluation to improve efficiency.

Overall, even though assessment practice might differ this year in the wake of changes to national examinations, there are still underlying foundations: what is the purpose of each assessment, and what will the information be used for? What is the validity of each claim, and to what extent can the performance be translated into data? What is the value of the information, and how long did it take to acquire?

To conclude, developing assessment practice might begin with the staff. This might include targeted CPD on the purpose of assessment, raising whole-school interest and curiosity, and ultimately, providing opportunities for professional dialogue. Next, consider providing time for departments to assess the validity of their assessment claims and how performance information is translated. Could a rank order of performances based on comparative judgement prove more useful than assigning grades based on raw scores? Finally, agree how departments are going to evidence the developing value of their assessments. Time is a precious resource to monitor, but so is the quality of the information gained.

Back to the first point, if schools are searching for the answers that will enable rapid improvements, then they must endeavour to acquire the best possible information, enabling the best possible decisions, leading to the best possible outcomes.
