Some thoughts on the Primary Assessment Consultation

Pub Quiz question for the future: In what year did the primary assessment framework last not change? (Answers on a postcard, folks)

I may not always be the most complimentary about the DfE, but today I feel like there is a lot to praise in the new consultation on primary assessment. They have clearly listened to the profession, including the work undertaken by the NAHT assessment review, and have made some sensible suggestions for the future of primary assessment. As ever, I urge people to read the consultation, and respond over the next 12 weeks. Here, I’ve just shared a few thoughts on some key bits:

Assessment in the Early Years

For years, Early Years practice has been held up as a shining example of assessment, as we were all wowed by the post-it notes, the online apps and all the photographs. I was never overly keen on all the evidence-collating, and I’m pleased that we’ve begun to eschew it in the Key Stages. It’s pleasing, therefore, to see that while the department is happy to keep the (actually quite popular) Early Years Profile, it wants advice on how the burden of assessment in the Early Years can be reduced.

I’m also pleased to see the revival of the idea of a Reception baseline. Much damage was done by the chaotic trial of different systems in 2015, but the principle remains a sensible one to my mind. I would much rather see schools judged on progress across the whole of the primary phase. It’s also quite right that baseline data shouldn’t be routinely published at school or individual level. The consultation seems open to good advice on how best to manage its introduction (an approach which might have led to greater success with the first attempt!).

Key Stage 1

I wasn’t certain that we’d ever persuade the DfE to let go of a statutory assessment, but it seems that they’re open to the idea. I do think that the KS1 tests – and the teacher assessment that goes along with them – are a barrier to good progress through the primary years, and I’d welcome their abandonment. The availability of non-statutory tests seems a sensible approach, and I’m happy to see that the department will consider sampling as a way to gather useful information at a national level. Perhaps we might see that rolled out more widely in the long term.

I’d rather have seen them take the more radical option of scrapping the statutory tests straight away, but I can see the rationale for keeping them until the baseline is in place. Unfortunately that means we’re stuck with the unreliable Teacher Assessment approach for the time being. (More on that to follow.)

Key Stage 2

Of course it makes sense to scrap statutory Teacher Assessment of Reading and Maths. Nobody pays it any heed; it serves no purpose but adds to workload. I’d have preferred to see Science go the same way, but no such luck. At the very least, I hope there is a radical overhaul of the detail in the Science statements which are currently unmanageable (and hence clearly lead to junk data in the extreme!)

There is also some recognition in there that the current system of Teacher Assessment of Writing is failing. The shorter-term solution seems to be a re-writing of the interim frameworks to suit a best-fit model, which is, I suppose, an improvement. Longer term, the department is keen to investigate alternative (better) models; I imagine they’ll be looking closely at the trial of Comparative Judgement at www.sharingstandards.com this year. I’m less persuaded by the trial of peer-moderation, as I can’t quite see how you could ensure that a fair selection of examples is moderated. My experience of most inter-school moderation is that real borderline cases are rarely discussed, as few teachers want to take such risks when working with unfamiliar colleagues. Perhaps this trial will persuade me otherwise?

On the matter of the multiplication check, I don’t share the opposition to it that many others do. I’ve no objection to a sensible, low-stakes, no-accountability check being made available to support schools. I’d prefer to see it at the end of Year 4, in line with the National Curriculum expectations, and I’d want to see more details of the trials; but overall, I can live with it.

Disappointments

Although it hardly gets mentioned, the opening statement that “it is right that the government sets a clear expected standard that pupils should attain by the end of primary school” suggests that the department is not willing to see the end of clunky descriptors like “Expected Standard”. That’s a shame, as the new scaled score system conveys attainment perfectly well without attaching such labels. Hopefully future alternatives to the current Teacher Assessment frameworks might lessen the impact of such terminology.

Credit to whoever managed to get in the important fact that infant/junior and middle schools still exist. (Points deducted for failing to acknowledge first schools in the mix.) However, the suggestions proposed are misguided. The consultation claims that,

the most logical measures for infant schools would be reception to key stage 1 and, for middle and junior schools, would be to continue with key stage 1 to key stage 2

While that may be true for infant, and potentially even junior schools, for middle schools this is a nonsense. Some middle schools only start from Year 6. How can it be sensible to judge their work on just 2 terms of a four-year key stage? The logical measure would require bespoke assessments on entry and exit. That would be expensive, so alternatives will be necessary. Personally I favour using just the Reception baseline and KS2 outcomes, along with sensible internal data for infant/first and junior/middle schools. The KS1 results have never been a helpful or reliable indicator.

Partly connected to that, I would also have liked to have seen a clearer commitment to the provision of a national assessment bank, as proposed by the Commission for Assessment without Levels, and supported by the NAHT review. It does get a brief mention in a footnote, so maybe there’s hope for it yet.

In Conclusion

Overall, I’m pleased with the broad shape of the consultation document. It does feel like a shift has happened within the department, and that there is a clear willingness to listen to the profession and correct earlier mistakes. There is as much positive news in the consultation as I might have hoped for.

If there were an interim assessment framework for judging DfE consultations, then this one would have ticked nearly all of the boxes. Unfortunately, of course, nearly all is not enough, as any primary teacher knows, and so it must fall to WTS (working towards the expected standard). Seems cruel, but he who lives by the sword…

Some clarity on KS2 Writing moderation … but not a lot

Not for the first time, the Department has decided to issue some clarification about the writing assessment framework at Key Stage 2 (and its moderation!). For some inexplicable reason, rather than sharing this clarity in writing, the department has produced it as a slowly-worded video – as if we were the ones being stupid!

Here’s my take on what it says:

Some Clarity – especially on punctuation

  • For Greater Depth, the long-winded bullet point about shifts in formality has to be seen in several pieces of work, with more than one shift within each of those pieces.
  • For Expected Standard, it is acceptable to have evidence of colons and semi-colons for introducing, and within, lists (i.e. not between clauses)
  • For Expected Standard, any of brackets, dashes or commas is acceptable to show parenthesis; there is no need to show all three.
  • Bullet points are punctuation, but the DfE is pretending they’re not, so there’s no need to have evidence of them as part of the “full range” of punctuation needed for Greater Depth.
  • Three full stops to mark ellipsis are also punctuation, but again, the DfE has managed to redefine ellipsis in such a way that they’re not… so again, not needed for Greater Depth.

A bit of guidance on spelling

This was quite clear: if a teacher indicates that a spelling needs correcting by writing a comment in the margin on the relevant line, then the correction of that spelling cannot be counted as independent. If the comment to correct spellings comes at the end of a paragraph or whole piece, without specifying what to correct, then it can still count as independent.

No clarity whatsoever on ‘independence’

Believe me, I’ve re-watched this several times – and not all of them at double-speed – and I’m still bemused that they think this clarifies things. The whole debacle is still reliant on phrases like “over-scaffolding” and “over-detailed”. Of course, if things are over-detailed then there is too much detail. What isn’t any clearer is how much detail is too much detail. The video tells us that:

“success criteria would be considered over-detailed where the advice given directly shapes what pupils write by directing them to include specific words or phrases”

So we know specifying particular words is too much, but is it okay to use success criteria which include:

  • Use a varied range of sentence structures

Is it too specific to include this?

  • Use a varied range of sentence openers

What about…?

  • Use adverbs as sentence openers

There’s a wide gulf between the three examples above. Which of these is acceptable? Because if it’s the last, then schools relying on the first will find themselves under-valuing work – and vice versa, of course. That’s before you even begin to consider the impossibility of knowing what success criteria and other supporting examples were available in classrooms at the time of writing.

The video tries to help by adding:

“success criteria must not specifically direct pupils as to what to include or where to include something in their writing”

But all of those examples are telling children what to include – that’s the whole point of success criteria.

If I’ve understood correctly, I think all three of those examples are acceptable. But it shouldn’t matter what I think: if the whole system depends on what each of us thinks the guidance means, then the consistency necessary for fair and useful assessment is non-existent.

The whole issue remains a farce. Doubtless this year Writing results will rise, probably pushing them even higher above the results for the externally tested subjects. Doubtless results will vary widely across the country, with little or no relationship to success in the tested subjects. And doubtless moderation will be a haphazard affair with professionals doing their best to work within an incomprehensible framework.

And to think that people will lose their jobs over data that results from this nonsense!


The full video in all its 11-minute glory can be found at: https://www.youtube.com/watch?v=BQ-73l71hqQ