5 min read
17 Dec

During the past 12 months I’ve been fortunate enough to present at a number of conferences up and down the country. One of the pleasures of being part of the conference circuit is that you get to listen to the presentations of the other speakers. The conference themes have been varied: digital technology, a whole education, creativity, closing the gap, to name a few. One strand that ran through most of them, though, was ‘Assessment without Levels’ and the approach schools have taken to deconstructing what works well in their schools and replacing it with something similar but slightly different.

As interesting and inspiring as the presentations were, I couldn’t help but come away feeling short-changed, as though I’d not quite got the gist of it or had massively missed the point. I had to fight back the urge to ask what exactly had changed, given that the school continues to assess the child, assign them a marker or proxy against which their performance and progress are judged, and then put them into a banding of some sort. Sound familiar? What was even stranger was that assessment had worked really well in their school, so I couldn’t understand why they would want to change it so drastically.

Many schools have invested a considerable amount of time and money in trying to reinvent the wheel. As a multi-academy trust, we are doing just that. What we learned right from the start, though, was that whatever you called it and no matter how it was packaged, it always came down to the same question: ‘What is that in old money?’ So we found that although it might look and feel different, at its core it was essentially the same. As a process it was valuable, and I suppose if nothing else it will get teachers in England reviewing what’s right for their own school.

Let me be clear. I understand entirely the notion behind doing away with levels. I am aware of the reasons why schools shouldn’t assign levels and sub-levels every day, week and term, because they don’t relate to the ‘totality of achievement’ rationale behind end-of-key-stage assessment. My concern, though, from talking to frustrated teachers at the conferences, is that the new systems they are devising are no different from the ones they are trying to replace, levels and all. The general consensus seems to be that as long as you don’t call them ‘levels’ and pupils are put into some sort of banding, you’ll be okay. Teachers are therefore attempting to solve the conundrum in a number of creative and multi-faceted ways, in most cases trying to turn the subject on its head. But no matter how many ways you try to flip the term ‘level’, it reads exactly the same backwards as it does forwards. A ‘Level’ is a ‘leveL’. (Unless of course it’s a ‘veell’ or an ‘elvel’.)

I actually quite like the term Elvel© and am tempted to embark on a year-long research project to make the Elvel© the national primary assessment model of choice. I’d be good at it as I’ve had loads of practice. I’ve spent the best part of 20 years working tirelessly with colleagues in schools in Liverpool, London and Birmingham trying to come up with an accurate and consistent approach to assessing children. I remember the heady days of ‘agreement trialling’ in the 90s, when each national curriculum subject had dozens of attainment targets. We then photocopied samples of children’s work that represented the top, middle and bottom end of each band (or level). These bandings were helpful in telling us whether children were making progress and how they compared with other children. They made for useful checkpoints so that we knew whether the child was on track to meet the end-of-key-stage level descriptor. We worked with schools down the road and in the neighbouring borough, and before you knew it we had the best system in the world because every teacher throughout the country knew exactly what a Level 3a looked like (or an a3, as I now like to call it). Simple but effective. It was fallible and open to misinterpretation, but then that’s the very nature of assessment and exactly why we need national consensus on what ‘expected’ looks like. Let me assure you that whatever systems schools are coming up with at the moment, they will be just as flawed.

Irrespective of the system of assessment we use, it will always be open to inconsistencies. In the very best schools, assessment is not an issue. Levels (or whatever term you choose to use) help us to know where our children are so that we can plan where to go next. Children know them. Parents know them. Staff know them. In the best schools, staff establish an assessment culture that is supportive and helpful to the child, so that they know what they can do well, what they cannot do well, what they need to do next to improve and how to get there. Assessment provides useful waymarkers as we prepare children for their next stage. As is often said, assessment must be the servant and not the master.

Regardless of where we were in the country, as teachers and inspectors we knew what the threeness of a 3a felt like. We knew if a child was making good or not-so-good progress. When I go to see a GP in Hartlepool and am told that my blood pressure is 140/90 (high), I don’t want to then go to Totnes and be told that on the scale they use it’s low and that there’s nothing to worry about (not that I’d do that anyway, but you get my point). We need a standardised approach to measuring children’s progress, just as we have for measuring our health. In the same way that blood pressure readings take into account individual circumstances (height, age, weight etc.), so too should assessment. This is why over the past two decades we’ve created one of the most comprehensive national databases on pupil performance, one that allows us to compare how well children are progressing.

I recall at one conference discussing the matter with a senior HMI. Like me, he had reservations about where this would end up, particularly in the weaker schools or those that weren’t working in a network or collaboration. As CEO of a MAT, I am concerned about how we will integrate new schools into the trust, each coming with a different take on what expected progress looks like. Whatever system we come up with, it will invariably involve assigning a score to a child, no matter what you call it – an Elvel©, grade, level, proxy, target, 3b, 28.5, orange etc. This in itself is not a bad thing. Where I have grave reservations is in our ability to then compare orange with orange. This was another area of concern raised by the HMI during our coffee-break discussion.

Whatever system of assessment we have in place, surely there needs to be national consensus? How can there be, though, if each school has its own system for measuring progress? Imagine if the government decided to abandon the standard UK railway gauge, the agreed track width that allows a train to travel from one end of the country to the other. If we did, it would be chaos. Asking each region, cluster or group of schools to come up with their own criteria for assessing progress is akin to asking each county to agree its own gauge width in the hope that when we connect up all the tracks they’ll fit. Believe it or not, a number of countries do not have a standardised approach to the width of their railways. I hope our approach to assessment doesn’t go the same way.

So without wishing to sound too polemical, here’s a challenge for the New Year: Convince me I’m wrong. Convince me that the current system does not work and that a new system is better. Not different, comparable or as good as, but genuinely better. Because until someone does, I’m staying put and will sit this one out. Happy Christmas everyone.
