I caused a bit of an upset today. As too easily happens, I saw a conversation via Twitter that raised concerns with me, and I rushed in with 140 characters of ill-thought-through response.
Some very knowledgeable experts in the field of school data management were trying – quite understandably – to get their heads round how a life after levels will look in terms of managing data and tracking in schools. As David Pott (@NoMoreLevels) put it: “trying to translate complex ideas into useable systems”.
My concern is that in too many cases, data experts are being forced to try to find their own way through all this, without the expert guidance of school leaders (or perhaps more importantly, system leaders) to highlight the pitfalls, and guide the direction of future developments. That’s not to say that the experts are working blind, but rather that they are being forced to try to work out a whole system of which they are only a part.
Of course, the problem is that without first-hand knowledge of some of those areas, the data experts are forced to rely on their knowledge of what went before. And as seems to be the case in so many situations at the moment, we run the risk of creating a system that simply mirrors the old one, flaws and all. We need to step back and look at the systems we actually need to help our schools to work better in the future. And as with all good design projects, it pays to consider the needs of the end user. Inevitably, with school data, there are always too many users!
Therefore, here is my attempt – very much from a primary perspective, although I daresay there are many parallels in secondary – to consider who the users are of data and tracking, and what their needs might be in our brave new world.
The Classroom Teacher
This is the person who should be at the centre of all discussions about data collection. If it doesn’t end up linking back to action in the classroom, then it is merely graph-plotters plotting graphs.
In the past, the sub-level has been the lot of the classroom teacher: those meaningless subdivisions which tell us virtually nothing about the progress of students, but everything about the way in which data has come to drive the system.
As a classroom teacher, I need to know two things: which children in my class can do ‘X’, and which cannot? Everything else I deal with is about teaching and learning – curriculum, lesson planning, marking & feedback, all of it. My involvement in the data system should be about assessment, not tracking. I have spoken many times about this: Tracking ≠ Assessment
Of course, at key points, my assessment should feed into the tracking system, otherwise we will find ourselves creating more work. But whether that be termly, half-termly or every fortnight, the collection of data for tracking should be based on my existing assessment records, not in addition to them.
We have been fed a myth that teachers need to “know their data” to help their students make progress. This is, of course, nonsense. Knowing your data is meaningless if you don’t know the assessments that underpin it. Knowing that James is a 4b tells you nothing about what he needs to do to reach a 4a. A teacher needs to know their assessments: whether or not James knows his tables, or can carry out column subtraction, or understands how to use speech marks. None of this is encapsulated in the data; it is obscured by it.
Children do not need to know where they are on a relative scale compared to their peers, or to other schools nationally. What matters to children in classrooms is that they know what they can do, what they need to do next, and how to do that. All of that comes directly from teachers’ assessments, and should have no bearing on data and tracking (or perhaps, more importantly, the methods of tracking should have no bearing on a child’s understanding of their own attainment).
Too many schools have taken the message about students knowing where they are and what to do next as an indication that they should be told their sub-level. This doesn’t tell children anything about where they are, and much less about what to do next.
The School Leader
For a department leader, year-group leader or senior leader, it is very rarely feasible to have a handle on the assessment outcomes of individual students; nor is that their role.
This is the level at which regular tracking becomes important. It makes sense for a tracking system to highlight the numbers of children in any class group who are on-track – however that might be measured. It might also highlight those who are below expectations, those who are above, or those who have made slower progress. It should be possible, again, for all of this to come from the original assessments made by teachers in collated form.
For example, if using the Key Objectives approach, collation might indicate that in one class after half a term, 85% of students have achieved at least 20% of the key objectives, while a further 10% have achieved only 15% of the objectives, and some 5% are showing as achieving less than that. This would highlight the groups of children who are falling behind. It might be appropriate to “label” groups who are meeting, exceeding, or falling below the expected level but this is not a publication matter. It is for school tracking. There is nothing uncovered here that a classroom teacher doesn’t already know from his/her assessments. There is nothing demonstrated here that impacts on teaching and learning in classrooms. It may, however, highlight system concerns, for example where one class is underperforming, or where sub-groups such as those receiving the pupil premium are underperforming. Once these are identified, the focus should move back to the assessment.
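The collation described above can be sketched in a few lines of code. This is an illustrative sketch only: the band names, the number of key objectives, and the thresholds (which mirror the 20% and 15% figures in the example) are assumptions for demonstration, not a prescribed system – as the post argues, those thresholds are a professional decision for school leaders. The only input is the set of key objectives each teacher has already recorded as achieved.

```python
# Sketch of collating teachers' existing assessment records into
# tracking bands. All names and thresholds here are hypothetical.

KEY_OBJECTIVES = 40  # assumed number of key objectives for the year


def proportion_achieved(achieved: set) -> float:
    """Fraction of the year's key objectives a pupil has met."""
    return len(achieved) / KEY_OBJECTIVES


def band(proportion: float) -> str:
    """Place a pupil in an illustrative band after half a term."""
    if proportion >= 0.20:        # matches the 20% "on track" figure
        return "on track"
    elif proportion >= 0.15:      # matches the 15% figure
        return "slightly behind"
    return "cause for concern"


def collate(class_records: dict) -> dict:
    """Count pupils per band, drawn only from existing assessments."""
    counts = {"on track": 0, "slightly behind": 0, "cause for concern": 0}
    for pupil, achieved in class_records.items():
        counts[band(proportion_achieved(achieved))] += 1
    return counts
```

Note that nothing here asks the teacher for anything new: the bands are derived entirely from the assessment records already kept, which is the point – tracking as a by-product of assessment, not an extra task.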
In the past, the temptation was to highlight the percentage of children achieving, say, Level 4, and then to set a target to increase that percentage, without any consideration of why those children were not yet achieving the level. All such targets and statements must come back to the assessment and the classroom teacher.
Of course, senior leaders will also want to know the number of children who are “on-track” to meet end-of-key-stage expectations. Again, it should be possible to collate this based on the assessment processes undertaken in the classroom.
What is *not* required, is a new levelling system. There is no advantage to new labels to replace the old levels. There is no need for a “3b” or “3.5” or any other indicator to show that a student is working at the expected level for Year 3. Nobody needs this information. We have seen how meaningless such subdivisions become.
Of course, the devil is in the detail. What percentage of objectives would need to be met to consider a child to be “on track” or working at “age-related expectations”? Those are professional questions, and it is for that reason that it is all the more important that school and system leaders are driving these discussions, rather than waiting for data experts to provide ready-made solutions.
Ofsted
Frankly, we shouldn’t really need to consider Ofsted as a user of data, but the reality is that we currently still do. That said, their needs should be no different from school leaders’. They will already have the headline data for end-of-key-stage assessments. All they should need to know from internal tracking and assessment is:
- Is the school appropriately assessing progress to further guide teaching and learning?
- Is the school appropriately tracking progress to identify students who need further support or challenge?
The details of the systems should be of no concern to Ofsted, so long as schools can satisfy those two needs. There should be no requirement to produce the data in any set form or at any specific frequency. The demands in the past that schools produce half-termly (or more frequent!) tracking spreadsheets of levels cannot be allowed to return under the new post-levels systems.
Parents
Parents were clearly always the lost party in the old system, and whether or not you agree with the DfE’s assessment that parents found levels confusing, the reality is that the old system was obscure at best. It told parents only roughly where their child stood in comparison to other students. It gave no indication of the skills their child had, or of their gaps in learning.
For the most part, the information a parent needs about their child’s learning is much the same as what the child needs: knowledge of what they can and can’t do, and what their next steps are. Of course, parents may be interested in their child’s attainment relative to his/her age, and that ought to be evident from the assessment. Equally, they may like to see how their child has progressed, and again, assessment against key objectives demonstrates that amply.
So where next?
We are fortunate in English schools to be supported by so many data experts with experience of the school system. However, they should not – indeed they must not – be left to try to sort out this sorry mess alone. School leaders and school system leaders need to take a lead in this. Schools and their leaders need to take control of the professional discussions about what we measure when we’re assessing, and about what we consider to be appropriate attainment based on those assessments. Only then can the data experts who support our schools really create the systems we need to deliver on those intentions.