I love the theory of themed chats on Twitter, but I’m hopeless at the practice of them. I never remember when they are, I’m often not available, I still haven’t mastered using Tweetdeck without getting very frustrated by the pace, and I often find too much noise and not enough coherence. And of course, I usually feel I have too much to say, as is my way. But I know they’re very fruitful for others, so I genuinely hope that the #primedchat on Wednesday evenings (makes mental note) continues to thrive. I will try to contribute during the chat, but I’ve also decided to put down some thoughts here in advance on the theme of How do you make assessment useful without it taking over your life?
Assessment is such an important part of the job that we do, but I do fear that too often we end up prioritising assessment for assessment’s sake, or for other reasons, rather than focussing on the key purpose of assessment: improving learning. My main bugbear in this respect is the dreaded need for tracking. Tracking is really important, and vital for schools if they are to be able to ensure that students make good progress over longer periods of time. However, tracking is of very little use to me as a class teacher. I have no interest in whether Abigail is now a 3a or a 4c in Writing. What matters to me is exactly what she can and can’t do – and those numbers don’t tell me that! I’ve written more about getting this balance right here: Tracking ≠ Assessment.
What is useful to me as a classroom teacher is a knowledge of what children in my class can do, and what they need to do next. In some respects – and I say this with many caveats, only a fraction of which I’ll discuss here – the APP materials were useful. The breakdown of progression in Reading and Writing particularly was made quite explicit by those materials, and in lots of cases my knowledge of that content has helped to improve my teaching. Indeed, I’d go as far as to say that until I became familiar with the content of AFs 4-7 on the Reading APP grid, I was probably a pretty poor teacher of reading. But I still don’t find it useful as an assessment tool. The idea that somehow quality of writing, or capability in reading, can be simplified to a process of highlighting statements and then working through a flowchart to allocate a number strikes me as laughable. In most schools I have seen, the experience of APP is a means to an end: a way of turning a teacher’s knowledge of his/her students into a computable number.
Let me be clear, then: APP is not a solution. Staff discussions about whether an individual child has “sustained an awareness of the reader” in his writing, or what the difference is between “basic features of organisation” and “various features of organisation” are not helpful for teaching and learning. The attempt to break down very complex processes into a measurable number of small steps has not worked. It may have allowed us to claim a sense of accuracy when deciding whether to stick a 3b or 3a label into a spreadsheet, but that is an illusion. And it doesn’t move learning on.
So if not that, then what?
I have said before, and will argue again, that we need to separate the processes of teacher-led assessment from larger scale tracking and reporting. That’s not to say that the former won’t feed into the latter; but it is important that the tracking/reporting need doesn’t drive the assessment. Some children need to focus on skills, or be taught things, which do not appear on an APP grid. They might be steps that do not affect their ‘level’, but which are essential to their ability to move on. Maths is a classic example: the Level 4 APP statement that children need to know their tables up to 10×10 is of no use to the child who needs a target to learn their 5× table. The steps pretend to be small and manageable, but the reality is otherwise.
We need to be realistic about this. We need to admit that assessable steps are necessarily larger than the tiny incremental changes that happen in classrooms. The tiny steps are the grit of daily teaching, but to attempt to make such tiny measures into a recordable and reportable system is an error. We need to be realistic about what can be achieved at each level.
A school has a responsibility to move its children from the expected level on entry to Year 1 to the expected level on exit at Year 6. Within that, you might reasonably set key marker points at which judgements can be made against progress towards those goals. The original levels statements were intended to do this. The new year-based National Curriculum allows us to do something similar. It might be reasonable to judge whether a child has met a majority of the intended learning targets for a given year. Anything smaller than that will depend on the structuring of the curriculum. If I hammer the teaching of speech marks in term one of Y3 then it would be perfectly reasonable to expect most children to be able to use them by Christmas. If I choose to leave it until summer then the outcomes will be different – but doubtless children will have learned something else in the meantime. The sequence is not – indeed cannot be – fixed.
If we really want assessment to be useful, then we need to consider its many different stages. I like the comparison to planning. Most schools are happy to follow a nationally-agreed curriculum for each key stage; many are even happy for it to be broken down into year groups. But the QCA unit plans showed us that central planning at the smaller scale was disappointing, to say the least. At that level, teachers who know their children need to take the lead. The same is true of assessment. Here is a model I have referred to before:
At the national level, it’s perfectly reasonable for the central government to set expectations. Indeed, even at school level, schools may happily adopt the yearly expectations of the new programmes of study – at least for maths and science. But below this level, schools must drive both planning and assessment in tandem.
At the medium-term level, it makes sense to me that schools devise their own medium-term plans and that the assessment outcomes are explicit at that stage. Many (most?) schools plan half-termly units of work, and it seems to me that this is a sufficiently long period of time to also set some expected outcomes. Teachers can devise a plan with some key expected outcomes and use these to assess progress at the end of the unit. It is assessment like that that can be shared with parents – a clear-to-see map of what a child can and cannot do. It is assessment like that that can help teachers to track children’s progress towards expected outcomes at the end of the key stage, and more importantly to highlight at a relatively early stage those who are at risk of falling behind expectations.
The short-term level is probably the most important both for teaching and assessment. A teacher who is clear about expected outcomes for the term and the year can plan appropriate lessons, and more importantly set appropriate short-term targets for the children that he/she knows well. The targets can be meaningful for individual children, easily monitored as part of the unit’s work (particularly if schools adopt a mastery-type approach) and significantly can focus on the small steps needed for the children in that class. Importantly, though, these short-term plans and assessment outcomes are unlikely to be useful for tracking. There is no sense in attempting to record every detail of them, any more than there is in insisting on individual lesson plans for every lesson.
So how does this answer the question of making assessment manageable? There are a few key things that help to achieve this, in my opinion:
- Separate tracking from class-level assessment
- Link assessment expectations directly to medium term planning
- Don’t expect recorded assessment in the short term
If this is the model schools use, then assessment can comfortably be carried out half-termly in most areas, by focussing on the key progress made. Not by creating new sub-levels or scores, but by actually reviewing what the children can and cannot do! (You might even want to do a test!)