Some clarity on KS2 Writing moderation … but not a lot

Not for the first time, the Department has decided to issue some clarification about the writing assessment framework at Key Stage 2 (and its moderation!). For some inexplicable reason, rather than sharing this clarity in writing, it has been produced as a slowly-worded video – as if it were us that were stupid!

Here’s my take on what it says:

Some Clarity – especially on punctuation

  • For Greater Depth, the long-winded bullet point about shifts in formality has to be seen in several pieces of work, with more than one shift within each of those pieces.
  • For Expected Standard, it is acceptable to have evidence of colons and semi-colons for introducing, and within, lists (i.e. not between clauses).
  • For Expected Standard, any of either brackets, dashes or commas are acceptable to show parenthesis. There is no need to show all three.
  • Bullet points are punctuation, but the DfE is pretending they’re not, so there’s no need to have evidence of them as part of the “full range” of punctuation needed for Greater Depth.
  • Three full stops to mark ellipsis are also punctuation, but again, the DfE has managed to redefine ellipsis in such a way that they’re not… so again, not needed for Greater Depth.

A bit of guidance on spelling

This was quite clear: if a teacher indicates that a spelling needs correcting by writing a comment in the margin on the relevant line, then the correction of that spelling cannot be counted as independent. If the comment to correct spellings comes at the end of a paragraph or whole piece, without specifying what to correct, then it can still count as independent.

No clarity whatsoever on ‘independence’

Believe me, I’ve re-watched this several times – and not all of them at double-speed – and I’m still bemused that they think this clarifies things. The whole debacle is still reliant on phrases like “over-scaffolding” and “over-detailed”. Of course, if things are over-detailed then there is too much detail. What isn’t any clearer is how much detail is too much detail. The video tells us that:

“success criteria would be considered over-detailed where the advice given directly shapes what pupils write by directing them to include specific words or phrases”

So we know specifying particular words is too much, but is it okay to use success criteria which include:

  • Use a varied range of sentence structures

Is it too specific to include this?

  • Use a varied range of sentence openers

What about…?

  • Use adverbs as sentence openers

There’s a wide gulf between the three examples above. Which of these is acceptable? Because if it’s the latter, then schools relying on the first will find themselves under-valuing work – and vice versa, of course. That’s before you even begin to consider the impossibility of telling what success criteria and other supporting examples are available in classrooms at the time of writing.

The video tries to help by adding:

“success criteria must not specifically direct pupils as to what to include or where to include something in their writing”

But all of those examples are telling children what to include – that’s the whole point of success criteria.

If I’ve understood correctly, I think all three of those examples are acceptable. But it shouldn’t matter what I think: if the whole system depends on what each of us thinks the guidance means, then the consistency necessary for fair and useful assessment is non-existent.

The whole issue remains a farce. Doubtless this year Writing results will rise, probably pushing them even higher above the results for the externally tested subjects. Doubtless results will vary widely across the country, with little or no relationship to success in the tested subjects. And doubtless moderation will be a haphazard affair with professionals doing their best to work within an incomprehensible framework.

And to think that people will lose their jobs over data that results from this nonsense!

The full video in all its 11-minute glory can be found at:


Teaching is complex – and that’s okay.

As another list of non-negotiables does the rounds, I find myself again in disagreement with those who would argue that a minimum baseline of expectations is a helpful or necessary thing. Unfortunately, like so many things in life, I don’t think we can distil what is a very complex operation down to a few simple ‘must-dos’. Not least because, as with all learning, teachers will be at very different stages of their expertise, and one size rarely fits all.

The problem with simple tick-list approaches is that teaching isn’t simple. It’s tempting to say that all lessons should begin with the Learning Objective being shared, but then we can all think of examples where that would ruin the wider structure of the lesson. It’s tempting to say that teacher talk should be minimised, but too often I’ve seen lessons where teachers, worried about time spent on the carpet, rush children off to a task they’re ill-equipped to tackle. It’s tempting to say that every lesson must include differentiated tasks, but then many of us have seen lessons where children are given work that is below their capability simply to show differentiation. Teaching is complex.

Some of the things that make for really excellent teaching are exactly the sort of thing you can’t tick off a list. I think that knowing your children is a key to great teaching and learning. Yes, some inspiring lectures can achieve great things without interaction of any sort, but for the most part, I know that I can teach my own class more effectively than I can an unknown group. But there would be no point in setting out a policy in my school that says you must know your children; that isn’t something you can tick off.

Equally, some of those things that seem straightforward conceal a whole level of complexity that doesn’t feature on the tick-list. We know that feedback can be highly effective in furthering children’s learning, but that could come in the form of written marking, or comments in lessons, or in the way the teacher reacts to off-the-cuff assessments from whiteboard activities. So we could add “You must give feedback” to a tick-list, but what does it mean?

The same is true of sharing Learning Objectives. Making children aware of what you intend them to learn is no bad thing. But what if you’ve picked the wrong thing at the wrong time? What if it doesn’t match the wider sequence? What if the task you’ve planned doesn’t really meet the needs of the learners, or the aims of the lesson? What if it’s something they already know? Sharing a Learning Objective is only going to be of any use if the objective is apposite and taught well.

One argument people make is that schools in difficult circumstances may require basic thresholds; Special Measures is perhaps an excuse for such approaches. But in my experience, as in any class, in any school in a category you will find a wide range of ability among the teachers. For those who are teaching brilliantly against the tide, reducing their craft to a mere tick-list may only serve to stifle their brilliance. Equally, for those who are genuinely finding teaching a complex challenge and failing to serve their children well, insisting on a list of gimmicks will not improve practice.

I have seen plenty of lessons – indeed, I’ve probably taught plenty – where a Learning Objective is shared, tasks are differentiated, children are engaged with active learning, peer-evaluation takes place, mini-plenaries are dotted about like confetti… and the net effect on learning is negligible. Equally, I know that some lessons might do none of those things and be just right for that group at that time.

If we really want to improve teaching and learning, no matter what the current standard, then we need meaty discussion about what we mean by that. For teachers who are struggling, they need to see good teaching in practice, preferably narrated by someone who can highlight its strengths; they need support to change their thinking.

For a teacher who really needs to improve their practice in the classroom, the damage a tick-list approach can cause is substantial. What if that teacher does everything that is demanded of him: his displays are beautiful, learning objectives are shared, children think-pair-share, tasks are differentiated… and yet still the lesson is poorly taught, or progress is limited by a lack of the required prior knowledge? How demoralising for that person to have spent hours refining exactly what you’ve asked, only to be told they’re still failing. Indeed, imagine the difficulty of trying to manage procedures for a teacher who is clearly ineffective, but is good at ticking the boxes you’ve set out!

Teaching is complex.

That’s not to say we shouldn’t try to articulate it. At my own school we have had time dedicated this year to discussing what we think ‘highly effective teaching’ looks like. We’ve discussed learning objectives, and differentiation, and feedback. But we’ve done so in a professional arena where we can unpick what we mean by those terms. We couldn’t reduce it to a simple tick-list, but we have identified some key areas that we recognise as important factors.

If a school genuinely has some very weak teachers, then those teachers need specific advice, coaching and support to improve. Good teaching can no more be reduced to a simple tick-list than can good Year 6 writing… and look where that’s got us!

On joining the Chartered College of Teaching

After overcoming a few stumbling blocks, I’ve finally joined the Chartered College of Teaching. I say finally not because of the few days’ delay (my bank apparently thought my signing up might have been a fraudulent use of my card. Do they know me at all!?), but because it strikes me that this is something that’s long overdue.

I’ve always been a member of a teaching union – aren’t we all? – but, like so many teachers, I joined in part for the protection offered. Unions are there to protect and improve pay and conditions; while they may dress their arguments up in pedagogical terms, the bottom line is the same. And that’s all well and good: that’s their job.

But that conflict also makes it very easy for the government to dismiss what teachers say through their unions – not least the more militant groups with their outlandish demands at conferences. The profession more than ever needs a clear conduit for its opinions and expertise.

But a professional body has to cut both ways. As well as conveying views from the profession to the wider world – from parents to the DfE and Ofsted – it must also offer something to members. I’m pleased to see that the College will provide members with access to educational research, but perhaps more importantly, I look forward to a useful professional journal that will help do the job of disseminating that research in ways that can have an impact in classrooms. We’re a time-poor profession as it is, and few of us have time to wade through academic journals on a regular basis; an intelligent chartered college can be the medium through which teachers receive the very best of information on good practice – and also the very clearest of evidence to dispel the nonsense of the likes of Brain Gym and Learning Styles.

The key thing at this stage is to get people participating. If the college appears not to be the finished article, I’m hoping it’s because it isn’t. I hope, too, that that means teacher members will shape it.

So let me offer a few requests for Dame Alison Peacock and her team as she leads the College in its formative stages:

  • We need you to be brave, Dame Alison, on our behalf. Sometimes that will mean speaking truth to power; asking the difficult questions; putting politicians straight – saying the things we’re all thinking!
  • Focus on the classroom teachers more than the leaders. One of the toughest parts of the job is the solitude of the classroom. The College can be a way for teachers to get a sense of what is happening in other classrooms.
  • Remember the people that so many other organisations forget: the Early Years experts, the SEN schools, the sixth-form colleges, supply teachers, middle schools!
  • Put research and evidence at the heart of work to guide us and others, and be honest when the research doesn’t tell us enough to know.
  • Reach out across the profession, whatever teachers’ experience, across sectors, through the age ranges, the breadth of the country and those who aren’t yet convinced about the College: we’re stronger together.
  • (If truth be told, I’m not taken by the logo, but… maybe it’ll grow on me?)

If you think I’m right – or you think I’m wrong – perhaps you should put your own views across. Join the College at the start.

The Chartered College is currently signing up founder members, who must be teachers in schools, Early Years or post-16 settings:


National Curriculum Test videos

I’ve updated the videos I made last year to explain the KS1 and KS2 tests to parents. As schools have the option of whether to use the Grammar, Punctuation & Spelling tests at Key Stage 1, there are now two versions of the video for KS1 (one with, one without the GPS tests).

Please feel free to use these videos on your school’s website or social media channels, or in parent meetings, etc. There are MP4 versions available to download.

Key Stage 2

Re-tweetable version:

Facebook shareable version:

Downloadable MP4 file:

Key Stage 1 – version that includes the GPS tests

Re-tweetable version:

Facebook shareable version:

Downloadable MP4 file:

Key Stage 1 – version for schools not using the GPS tests

Re-tweetable version:

Facebook shareable version:

Downloadable MP4 file:

On Knowledge Organisers

When Jon Brunskill recently agreed to share his work on Knowledge Organisers in primary school, I was excited to see what he came up with. I wasn’t disappointed, and I’m sure many others have been looking with interest. I think there’s a lot of merit in the model, but inevitably I think there is some refining to do.

I say this not as an expert – far from it, I’ve cobbled together one Knowledge Organiser in my life and remain unhappy with it. However, having spoken briefly to Jon about his, I think we both agree that there is merit in unpicking the model further.

Firstly, with Jon’s permission, let me share an image of the organiser (I highly recommend reading the accompanying blog before continuing further with mine!).

At first glance, it looks like a lot of content to learn. I think that’s partly because most of us have spent a good many years teaching broad ideas, and not expecting children to learn detail off by heart. I think there are also very few of us who could hand-on-heart say we could recall all this content ourselves. But I think that represents the shift we need to make rather than something to fear.

That led me to question the purpose behind the Knowledge Organiser. I haven’t spent enough time thinking about them, and certainly not enough time using them, but when I have, I’ve usually considered it a vehicle for outlining the key information that I expect students to learn and retain for the longer term. Often over longer units of work these might include key ideas which are integral to later understanding, whether that’s later in the school year, or later in their education career.

By way of illustration of my thinking, let me share a knowledge organiser I constructed a couple of years ago for my Year 5/6 class.


My first attempt at a Knowledge Organiser in 2015

The differences are quickly obvious. For a start, mine is clearly based on a wider period of teaching, and perhaps more indicative of a basic revision guide, rather than providing content in advance of a unit. I think perhaps that’s also its biggest downfall. It’s worth noting that it’s something I tried and didn’t come back to.

But I think there is maybe a useful middle ground. In Jon’s case, much of the content set out – particularly on the timeline – is content that is useful for the purposes of writing an information text about the event itself (a task which Jon plans to do in his Y2 class). However, I don’t think he expects those students to secure that detail in the very long term. Arguably, this brings the organiser perhaps closer to the cramming model of revision than the more successful spaced practice approach.

Ruth Smith posted a comment on Jon’s blog saying she could imagine the organiser being used as a prompt during writing. While I can see the merits, I do think that the risk then – as Jon would rightly say – is that we replace the value of knowledge with the reliance on someone/something else to do the work for you. That’s not the aim here.

It leaves me wondering what the function of a Knowledge Organiser should be. I’m not persuaded of the value of knowing the date of leaving quarantine after the lunar landing. That said, learning the word ‘quarantine’ is something I think is highly valuable.

The question for me becomes one of later testing (and let me be honest, I’m only at the very beginning of this journey; don’t for a second presume that I’m an expert. I’m a way behind Jon on this!) In a knowledge-rich curriculum, I think one of the key functions of a Knowledge Organiser is to set out the key knowledge that I want students to retain and that I will test for.

We know of the great merit of spaced testing to aid learning, and it strikes me that a Knowledge Organiser should aim to set out that content which would likely later form part of such tests. In the context of Jon’s organiser, I could see merit in testing much of the vocabulary, the date of the landing, and perhaps the names of the crew. However, I’d also want to include some wider context – perhaps a bit more detail behind the Space Race, mention of JFK’s 1961 aim, etc. Might these replace some of the less significant dates of 1969?

Of course, we’re talking about 7-year-olds in Jon’s context. They will lack much of the wider historical knowledge to place events in context, and so there is a risk of expecting too much. But equally, if we train children that knowledge is to be learned, then ought we not be training them to learn it for the long term?

The content I think* I’d like to see on Knowledge Organisers is the detail that I would also expect to use in a brief pop quiz a week later, but also on a test mid-year drawing on prior units, and again at the end of the academic year, or in the first days of the following September. There is a risk that using Knowledge Organisers to aim for short-term recall of detail that is later lost will develop a cramming ethos, rather than one of long-term storage of information.

What does this mean for Jon’s example? I’m not sure. Maybe a separation of the content that he expects children to retain in the long term from information which would be useful in this context? There is certainly some merit in having this timeline clear in the child’s mind as they are writing – not least because it helps to build a narrative, which is a great learning technique – but is it necessary for it to be stored in long-term memory? Indeed, is a two-week unit even long enough for such a transfer to be made?

Yet there is unquestionably information here which would be re-used in future, allowing for such long-term retention.

More thinking to do… but well worth doing, I think.

*I say I think, because I am not entirely sure that I won’t think completely differently in six months’ time.

If you haven’t already, I again recommend reading Jon’s original post here.

17 Twitter Recommendations for 2017

It’s three years since I last wrote a list of recommendations for who to follow on Twitter, and since then some have stopped tweeting, some have been promoted, some have even skipped the country – and of course, many new twitter folk have arrived. So I thought it about time for an update. I’ll try to limit myself to just 17.

School Leaders

Stephen Tierney (@LeadingLearner) – when I first heard Stephen speak at a conference up north, I thought instantly that he’s the sort of Headteacher I’d like to work for. Everything I’ve read of his since has confirmed that view. (It helps that he’s executive HT of a cross-phase group of academies.)

The Primary Head (@theprimaryhead) – another Head for whom I suspect it’s great to work – I presume he’s not anonymous in his own school.

John Tomsett (@johntomsett) – a secondary head, and a voice of calm in an otherwise tumultuous Twitter world.

Jill Berry (@jillberry102) – Jill is a former headteacher who now shares her knowledge about the challenge of the role, and keeps a good eye on other developments in education.

Primary Teachers

Rhoda Wilson (@TemplarWilson) – this is a bit of a cheat, as I’m also married to her, but I do very much follow her on Twitter, and then steal many of her excellent ideas about teaching primary English, including whole-class reading (and often pass them off as my own!)

Sinead Gaffney (@shinpad1) – a hugely knowledgeable expert in literacy, and my go-to person when I need a KS1 expert, even though she’s moved to work with the big kids now.

Jon Brunskill (@jon_brunskill) – the sort of Key Stage 1 teacher who dispels any myths about infant schooling being warm, fuzzy and directionless!

Rachel Rossiter (@rachelrossiter) – a SENCo, which makes her a great port of call for all such queries, but mainly a genius at the use of puns – what more can you want from Twitter?

Other Knowledgeable Sorts

Education DataLab (@edudatalab) – data experts from FFT who quickly shed light on topical issues by looking at the data to find answers (including those which are not always welcomed by the DfE, I’m sure). Director @drbeckyallen is also worth a follow.

Jamie Pembroke (@jpembroke) – on the data theme, Jamie is my favourite sort of data expert, in that he recognises the many flaws and limitations of the stuff. His wisdom on sensible use of data is welcome in today’s climate.

Daisy Christodoulou (@daisychristo) – sometimes people refer to me as an expert on assessment; I’m far from it. Daisy is absolutely that: she has spent time thinking about assessment in depth in ways that have completely changed my thinking. Look out for her new book in the spring too.

David Didau (@LearningSpy) – after a brief spell of being banned from Twitter, it was a relief to have David back. A man who speaks confidently about what he understands of education – including honesty about when he’s got things wrong. We could all do with such a balance of knowledge and humility.

Sean Harford (@harfordsean) – few people have done so much to transform the damaged reputation of Ofsted, and Sean has done it largely by thinking and talking common sense. The more people who are following him, the more we can #HelpSean to spread better messages to schools. It’s probably also worth following new HMCI @amanda_spielman.

Sam Freedman (@samfr) – a director at Teach First with connections and insights at the highest levels of policy. Tends not to get involved in the nitty-gritty of classroom practice, but expert on how teachers can best get government to work for them!

Micon Metcalfe (@miconm) – the School Business Manager to beat all School Business Managers. Knows pretty much all there is to know about managing a school, academy, chain or nation – and keeps a watchful eye on news on related matters too.

You can access an easier-to-follow-from full list of the 17 recommendations via my Twitter list:

The impossibility of Teacher Assessment

I’ve said for a fair while now that I’d like to see the end of statutory Teacher Assessment. It’s becoming a less unpopular thing to say, but I still don’t think it’s quite reached the point of popularity yet. But let me try, once again, to persuade you.

The current focus of my ire is the KS2 Writing assessment, partly because it’s the one I am most directly involved in (doing as a teacher, not designing the monstrosity!), and partly because it is the one with the highest stakes. But the issues are the same at KS1.

Firstly, let me be frank about this year’s KS2 Writing results: they’re nonsense! Almost to a man we all agreed last year that the expectations were too high; that the threshold was something closer to a Level 5 than a 4b; that the requirements for excessive grammatical features would lead to a negative impact on the quality of writing. And then somehow we ended up with 74% of children at the expected standard, more than in any other subject. It’s poppycock.

Some of that will be a result of intensive drilling, which won’t have improved writing that much. Some of it will be a result of a poor understanding of the frameworks, or accidental misuse of them. Some of it will be because of cheating. The real worry is that we hardly know which is which. And the guidance released this year, which is meant to make things clearer, barely helps.

I carried out a poll over the last week asking people to consider various sets of success criteria and to decide whether they would be permitted under the new rules which state that


So we need to decide what constitutes “over-aiding” pupils. At either end of the scale, that seems quite simple. Just short of 90% of responses (of 824) said that the following broad guidance would be fine:


Simplest criteria

Similarly, at the other extreme, 92% felt that the following ‘slow-writing’ type model would not fit within the definition of ‘independent’:


Slow writing approach

This is all very well, but in reality, few of us would use such criteria for assessed work. The grey area in the middle is where it becomes problematic. Take the following example:


The disputed middle ground

In this case results are a long way from agreement: 45% of responses said that it would be acceptable, 55% not. If half of schools eschew this level of detail and it is actually permitted, then their outcomes are bound to suffer. By contrast, if nearly half use it but it ought not be allowed, then perhaps their results will be inflated. Of course, a quarter of those schools may be moderated, which could lead to even those schools with over-generous interpretations of the rules suffering. There is no consistency here at all.

The STA will do their best to temper these issues, but I really think they are insurmountable. At last week’s Rising Stars conference on the tests, John McRoberts of the STA was quoted as explaining where the line should be drawn:

That advice does appear to clarify things (such that it seems the 45% were probably right in the example above), but it is far from solving the problem. For the guidance is full of such vague statements. It’s clear that I ought not to be telling children to use the word “anxiously”, but is it okay to tell them to open with an adverb while also having a display on the wall listing appropriate adverbs – including anxiously? After all, the guidance does say that:


Would that count as independent? What if my classroom display contained useful phrases for opening sentences for the particular genre we were writing? Would that still be independent?

The same problems apply in many contexts. For spelling, children are meant to be able to spell words from the Y5/6 list. Is it still okay if they have the list permanently printed on their desks? If they’re trained to use the words in every piece?

What about peer-editing, which is also permitted? Is it okay if I send my brightest speller around the room to edit children’s work with them? Is that ‘independent’?

For an assessment to be a fair comparison of pupils across the country, the conditions under which work is produced must be as close to identical as possible, yet this is clearly impossible in this case.

Moderation isn’t a solution

The temptation is to say that Teacher Assessment can be robust if combined with moderation. But again, the flaws are too obvious. For a start, the cost of moderating all schools is likely to be prohibitive. But even if it were possible, it’s clear that a moderator cannot tell everything about how a piece of work was produced. Of course moderators will be able to see if all pupils use the same structure or sentence openers. But they won’t know what was on my classroom displays while the children were writing the work. They won’t know how much time was spent on peer-editing work before it made the final book version. They won’t be able to see whether or not teachers have pointed out the need for corrections, or whether each child had been given their own key phrases to learn by heart. Moderation is only any good at comparing judgements of the work in front of you, not of the conditions in which it was produced.

That’s not to imply that cheating is widespread. Far from it: I’ve already demonstrated that a good proportion of people will be wrong in their interpretations of the guidance in good faith. It is almost impossible for the system to be any other way.

The stakes are too high now. Too much rests on those few precious numbers. And while in an ideal world that wouldn’t be the case, we cannot expect teachers to provide accurate, meaningful and fair comparisons, while also judging them and their schools on the numbers they produce in the process.

Surely it’s madness to think otherwise?

For the results of all eight samples of success criteria, see this document.


You’re not still teaching that are you?

This has become something of a recurring refrain over my teaching career, and it always – always – frustrates me.

Nobody ever says it about Science: “Oh, you’re not still teaching solids, liquids and gases, are you?”. Or music: “Oh, you’re not still teaching standard notation, are you?” And yet for some reason it seems to abound in other areas – especially English. (Even maths seemed to go through a phase where the standard basics were frowned upon!) But such decisions are often distinctly personal.

The first time I read Holes by Louis Sachar, I couldn’t wait to get planning for it, and was desperate to start teaching it. Now, having taught it too many times for my own liking, I’m tired of it. I suspect that this will be my last year of tackling it because I’ve lost my love for it. But for my class this year, it was their first time of approaching it. It was fresh for them. The only reason to abandon it is that my waning love for it risks coming through in the teaching.

But that won’t stop somebody somewhere from saying “Oh, but you’re not still teaching Holes, are you?”

It happens too often.

Tonight I’ve seen the same said of both The Highwayman and the animation The Piano. Now for sure they’ve both had more than their fair share of glory, but there was a reason why they were chosen in the first place. I’m all in favour of people moving away from them, finding better alternatives, mixing things up a bit. But they don’t cease to be excellent texts just because they’ve been done before. Every Year 5 child who comes to them does so for the first time.

I’ve heard the same said before of The Lighthouse Keeper’s Lunch at KS1 – as though somehow the fact that a topic has worked brilliantly in the past should be ignored simply because a consultant is over-familiar with it.

Of course, there are reasons to ditch texts. Sometimes they become outdated. Sometimes they cease to match the curriculum. Sometimes the ability of the children demands more stretch. Sometimes something much better comes along. Sometimes you’re just sick of them.

I’ve never cared for Street Child even though it’s wildly popular. I’ve always found Morpurgo’s work irritating. But if others find them thrilling, and get great results with their classes, then so be it. Who am I to prevent them teaching them?

As somebody also responded on Twitter this evening: the best “hook” is the teacher. If a teacher feels passionately about a poem, a book, or a topic, then it can be a great vehicle for the teaching that surrounds it. And if we make them all ditch those popular classics merely because they’re popular, then you’d better have a damned good replacement lined up to offer them!

A consistent inconsistency

With thanks to my headteacher for inadvertently providing the blog title.

With Justine Greening’s announcement yesterday we discovered that the DfE has definitely understood that all is not rosy in the primary assessment garden. And yet, we find ourselves looking at two more years of the broken system before anything changes. My Twitter timeline today has been filled with people outraged at the fact that the “big announcement” turned out to be “no change”.

I understand the rage entirely. And I certainly don’t think I’ve been shy about criticising the department’s chaotic organisation of the tests and the errors made. But I’m also not ready to throw my toys out of the pram just yet. This might just be the first evidence that the department is really listening. Yes, perhaps too little too late. Yes, it would have been nice for it to have been accompanied by an acknowledgement that the problems were caused by the pace of change enforced by ministers. But maybe they’re learning that lesson?

For a start, there are many teachers nationally who are just glad of the consistency. As my headteacher said earlier today, it leaves us with a consistent inconsistency. But nevertheless, there will be many teachers who are relieved to see that the system is going to be familiar for the next couple of years.

It’s a desire I can understand, but just can’t go along with. There are too many problems with the current system – mostly those surrounding the Teacher Assessment frameworks and moderation. But I will hang fire, because there is the prospect of change on the horizon.

It’s tempting to see it as meaningless consultation, but until we see the detail I don’t want to rule anything out. I hope that the department is listening to advice, and is open to recommendations – including those which the NAHT Assessment Reform Group of which I am a member is drawing together over this term.

If the DfE listens to the profession, and in the spring consults on a meaningful reform that brings about sensible assessment and accountability processes, then we may eventually come to see yesterday’s announcement as the least bad of the available options.

Of course, if they mess it up again, I’ll be on their case.

The potential of Comparative Judgement in primary

I have made no secret of my loathing of the Interim Assessment Frameworks, and the chaos surrounding primary assessment of late. I’ve also been quite open about a far less popular viewpoint: that we should give up on statutory Teacher Assessment. The chaos of the 2016 moderation process and outcomes was an extreme case, but it’s quite clear that the system cannot work.

It’s crazy that schools can be responsible for deciding the scores on which they will be judged. It has horrible effects on the reliability of that data, and also creates pressure which has an impact on the integrity of teachers’ and leaders’ decisions. What’s more, as much as we would like for our judgements to be considered as accurate, the evidence points to a sad truth: humans (including teachers) are fallible. As a result, Teacher Assessment judgements are biased – before we even take into account the pressures of needing the right results for the school. Tests tend to be more objective.

However, it’s also fair to say that tests have their limitations. I happen to think that the model of Reading and Maths tests is not unreasonable. True, there were problems with this year’s, but the basic principles seem sound to me, so long as we remember that the statutory tests are about the accountability cycle, not about formative information. But even here there is a gap: the old Writing test was scrapped because of its failings.

That’s where Comparative Judgement has a potential role to play. But there is some work to be done in the profession for it to find its right place. Firstly we have to be clear about a couple of things:

  1. Statutory Assessment at the end of Key Stages is – and indeed should be – separate from the rest of assessment that happens in the classroom.
  2. What we do to judge work and how we report it to pupils and parents are – and should be – separate things.

Comparative Judgement is based on the broad idea of comparing lots of pieces of work until you have essentially sorted them into a rank order. That doesn’t mean that individuals’ ranks need be reported, any more than we routinely report raw scores to pupils and parents. It does, though, offer the potential of moving away from the hideous tick-box approach of the Interim Frameworks.

Teachers are understandably concerned by the idea of ranking, but it’s really not that different from how we previously judged writing. Most experienced Y2/Y6 teachers didn’t spend hours poring over the level descriptors, but rather used their knowledge of what they considered L2/L4 to look like, and judged whether they were looking at work that was better or worse. Comparative Judgement simply formalises this process.
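That formalised process can be made concrete with a small sketch. Real Comparative Judgement tools fit a statistical model (typically Bradley–Terry) to the pairwise decisions; the toy version below simply counts the proportion of comparisons each script “won”, using invented script names and judgements, just to show how lots of better/worse decisions collapse into a rank order.

```python
# Toy sketch: turning pairwise "which is better?" judgements into a rank
# order. Scripts and judgements are invented; a real system would fit a
# Bradley-Terry model rather than counting raw wins.
from collections import Counter

# Each tuple records one judgement: (winner, loser)
judgements = [
    ("script_A", "script_B"),
    ("script_A", "script_C"),
    ("script_B", "script_C"),
    ("script_A", "script_B"),
    ("script_C", "script_B"),
]

wins = Counter(winner for winner, _ in judgements)
appearances = Counter()
for winner, loser in judgements:
    appearances[winner] += 1
    appearances[loser] += 1

# Rank scripts by the proportion of their comparisons they won
ranking = sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)
print(ranking)  # → ['script_A', 'script_C', 'script_B']
```

With enough judges and enough comparisons per script, the resulting order becomes stable even though no individual judge ever applied a criteria checklist.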

It directly tackles an issue that is particularly prevalent with the current interim arrangements: excellent writing which scores poorly because of a lack of dashes or hyphens (and poor writing which scores highly because it’s littered with them!). If we really want good writing to be judged “in the round”, then we cannot rely on simplistic and narrow criteria. Rather, we have to look at work more holistically – and Comparative Judgement can achieve that.

Rather than teachers spending hours poring over tick-lists and building portfolios of evidence, we would simply submit a number of pieces of work towards the end of Year 6 and they would be compared to others nationally. If the DfE really wanted to, once the pieces had been ranked in order, it could apply scaled scores to the general pattern, so that pupils received a scaled score for their writing just like those for the tests. The difference would be that instead of collecting a few marks for punctuation, and a few for modal verbs, the whole score would be based on the overall effect of the piece of writing. Equally, the rankings could be turned into “bands” that matched pupils who were “Working Towards” or “Working at Greater Depth”. Frankly, we could choose exactly what was reported to pupils and parents; the key point is that we would be comparing pupils more fairly based on how good they were at writing, rather than how good they were at ticking off features from a list.
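The reporting step is equally simple to sketch. Assuming a national rank order already exists, converting it into bands is just a matter of percentile cut-offs; the thresholds below (top 20% / middle 60% / bottom 20%) are invented purely for illustration – real cut-offs would come from a standard-setting exercise.

```python
# Illustrative sketch: reporting a rank order as bands rather than raw
# ranks. The percentile thresholds are invented for illustration only.
def band_for(rank, cohort_size):
    percentile = rank / cohort_size  # rank 1 = strongest script
    if percentile <= 0.2:
        return "Working at Greater Depth"
    elif percentile <= 0.8:
        return "Working at the Expected Standard"
    return "Working Towards the Expected Standard"

# A cohort of 10 ranked scripts, strongest first
print([band_for(r, 10) for r in range(1, 11)])
```

The same rank order could just as easily feed a scaled score instead of a band; the judging and the reporting remain separate decisions, as the numbered points above argue.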

There are still issues to be resolved, such as exactly what pieces of writing schools would submit for judgement, and the tricky issue of quite how independent the work should be. Equally, the system doesn’t lend itself as easily to teachers being able to use the information formatively – but then, aren’t we always saying that we don’t want teachers to teach to the tests?

Certainly if we want children’s writing to be judged based on its broad effectiveness, and for our schools to be compared fairly for how well we have developed good writers, then it strikes me that it’s a lot better than what we have at the moment.

Dr Chris Wheadon and his team are carrying out a pilot project to look at how effective moderation could be in Year 6. Schools can find out more, and sign up to join the pilot (at a cost) at: