This week the NAHT published its report on the new assessment landscape following the abandonment of National Curriculum levels. Impressive, really, considering that the DfE still hasn’t managed to provide its report on the consultation that closed four months ago!
The NAHT committee had much that was sensible to say, but I didn’t agree with all of it. I agree that it makes sense for schools to work in clusters to create and share assessment processes; I agree that we need a clear lead from the DfE & Ofsted on how we are to use whatever processes are put in place; I agree that assessment should be based on specified criteria, rather than simply ranking pupils; I also agree that it is unreasonable to expect schools to have everything in place for September, and so the interim use of levels is probably unavoidable, and perhaps even advisable for a short period.
It’s worth noting here that David Thomas (@dmthomas90) has written an excellent blog summarising his views on the report, and I agree with the vast majority of what he says about its strengths and weaknesses, although we disagree on the solution!
In particular, I welcomed the NAHT’s recommendation that whatever systems schools use, they should ensure that they
“assess pupils against assessment criteria, which are short, discrete, qualitative and concrete descriptions of what a pupil is expected to know and be able to do.”
The new National Curriculum – particularly at primary level – makes it clear what is expected to be taught in each phase, and in many cases in each year group for the core subjects. This helps to provide a starting point for identifying those assessment criteria. [Note 1]
However, where I part company with the NAHT report and others who have recommended similar processes is in the following recommendation:
“Each pupil is assessed as either ‘developing’, ‘meeting’ or ‘exceeding’ each relevant criterion contained in our expectations for that year.”
At first glance, this seems a reasonable suggestion, and it is a fair reflection of the evidence the NAHT received that there was little appetite for a simple binary system. However, what concerns me is that we will end up with a poor hybrid assessment vehicle. There are a couple of reasons for this.
Firstly, I suspect that the reason teachers were so reluctant to consider a “binary” system is their long experience of sub-levels in the current system. Progress is hard to demonstrate across a whole National Curriculum level, and so the bastard sub-levels were introduced. However, if the NAHT’s proposal that criteria be “short, discrete, qualitative and concrete” is followed, then this issue should not remain. Secondly, I suspect that by adding such vague labels as “developing”, we risk corrupting the most effective aspect of such a system.
Take, as an example, a simple objective that might be counted as short, discrete, qualitative and concrete. A school might include one criterion that a child can recall the 6× table. It seems that most of us could agree that this could be demonstrated by responding to random quick-fire questions, as has been done for centuries. However, at what stage do we argue that a child is “developing” this skill? When they can count on in sixes mentally? When they can count up in sixes aloud? When they can recall at least half of the facts up to 12 × 6? And similarly, what constitutes exceeding that objective? Surely once you can do it, you move on to new objectives?
The only reason I can see for the developing/meeting/exceeding sub-grades is if we replace the current system of levels with a system of… well, levels. But even then, the criteria become too vague. After all, one objective then might be knowledge and recall of all tables to 12×12. But you could argue that as soon as a child counts in 2s, they are beginning to develop that knowledge. And when might we consider them exceeding it? When they learn the 13x table?
The major downfall of levels, in my opinion, was that their broadness became a failing when we moved to increasingly regular measures of progression. If we end up with statements that are so broad as to require splitting into thirds in the way proposed, then we might just as well simply update the levels.
So, what do I suggest as an alternative? Well, not for the first time, I’m going to suggest looking back to the National Numeracy Strategy of the late 1990s. The strategies were responsible for a lot of nonsense, but the NNS had a couple of real strengths, one of the greatest of which was its exemplification pages.
These pages not only provided examples for teachers to explore the ways in which statements could be interpreted (which is often the first challenge of interpreting a curriculum document), but more importantly they provided an overview of progression across year groups. It made it clear for teachers where their next steps were when children had met a particular outcome, and where steps could be put in place for those struggling.
If the NAHT is to go ahead with its proposal to work on a model document to create suitable assessment criteria, then hopefully it might begin to see that such an approach could be far more meaningful than the vague descriptors of developing/meeting/exceeding.
A progression of key strands across the subject may run the risk of re-inventing the many attainment targets of the late 1980s, but short, measurable objectives could be organised to show progression and to help teachers identify how best to support pupils who might be deemed to be ‘developing’ or ‘exceeding’ the common objectives for their year group.
Some examples of objectives organised as possible outcomes in a progression for primary schools are shown below:
It is very much a hastily knocked-up suggestion, far from a finished model. In an ideal world, groups of experts and specialists could work together to organise progression documents like the NNS example, which could genuinely support teachers and provide a framework of assessment exemplification.
It would certainly be far more purposeful than simply using vague verbs to break up content.
[Note 1] Of course, much of this work might have been avoided if the DfE had followed the recommendation of the Expert Panel that it set up. Its recommendation is perfect:
Programmes of Study could then be presented in two parallel columns. A narrative, developmental description of the key concept to be learned (the Programme of Study) could be represented on the left hand side. The essential learning outcomes to be assessed at the end of the key stage (the Attainment Targets) could be represented on the right hand side. This would better support curriculum-focused assessment.