Tag Archives: tracking

Life after levels… although not quite yet.

Back in May I carried out a straw poll via Twitter asking people about their use of tracking software in primary schools, and was disappointed to find that nearly half of responses showed that schools were still using a tracking system that required 3 or 6 steps of progress each year. Despite all the efforts to move away from levels, to avoid the race for progress, to stop the labelling and grouping of pupils, it seemed that many schools had simply replicated the old system.

Since then, we’ve had further guidance from the DfE in the form of the Assessment Commission report, and widespread sharing of videos from Tim Oates and Sean Harford. Many more schools, who in May had not made a decision on software, will now have gone ahead with something new, so I repeated the survey to see if things had changed.

Things are not looking good. Back in May around 45% of schools had opted for a 3-step or 6-step model (replicating the old sub-levels and points-based systems). In the latest survey, that proportion has risen to 49%, with a further 15% using some other number of steps.

[Chart: progress steps required per year by schools’ tracking systems]

It seems we’re a long way off “assessment without levels”.

As last time, I also polled people to find out which tracking systems were in use. The following graph shows those software packages which received 10% or more of the responses:

[Chart: tracking software in use]

As before, Target Tracker makes up nearly a quarter of responses, with its system that allows a choice between 3- or 6-step measurement!

Interestingly, although some software options do not themselves impose (or even include) a step-tracking model, some schools have clearly adopted one of their own in addition to their main tracking approach. Evidence – as if more were needed – that the fear in schools about proving measurable progress remains as strong as ever.

What’s the alternative?

I’ve made quite clear before my preference for a Key Objective approach to tracking. As Dylan Wiliam says, when it comes to assessment & tracking, we need to focus on the ‘big ideas’ – the things that really matter.

My worry at this stage is for the ‘formative’ elements of these tracking systems – even those that don’t use a steps model. Many offer a combination of formative and summative tracking, which includes breaking the whole national curriculum down into clickable steps. By my estimation, that could leave a typical KS2 teacher with over 100 statements to tick for each pupil – around 3,000 statements a year for a class of 30.

For schools still struggling with this idea, I’d urge them (as I have before) to take a look at the NAHT’s approach to Key Performance Indicators. It makes much more sense to focus on high quality assessment of fewer things; otherwise teachers will just be run ragged trying to tick all the boxes.

As Tim Oates has said: it’s not that we need less assessment; rather we need more assessment of the right things. It seems we haven’t quite got there yet.

Have we forgotten the rationale for scrapping levels?

It’s been a long time under discussion, and yet for all the talk in school and out, it seems that many have forgotten the original rationale for the scrapping of levels.
Tim Oates set the explanations out very clearly in the video on the DfE YouTube channel (which, if you haven’t seen it, is worth seeking out). The main thrusts of the argument fell into three categories:

  1. Children were self-labelling
  2. Undue pace was forced into the curriculum
  3. Test scores, best-fit judgements and ‘just in’ measures were not comparable with one another

But when we look at the majority of systems that have replaced levels, have we really moved on?

My poll of systems being used in primary schools suggests not. A handful of popular tracking packages account for the majority of school systems, although it’s interesting to see a sizeable proportion of bespoke systems, as well as many schools with none at all. However, despite this variety, one message stands out: from the entirely non-scientific poll I’ve run, it seems that half of schools are still depending on systems that require 3 or 6 points of progress to be measured each year. How does this tackle those initial problems with levels?

Have we simply replaced the self-labelling of “I’m a Level 3” with “I’m Emerging”? Indeed, in some such systems, might we not run the risk that a child remains “Emerging” indefinitely – labelled not only in comparison to his peers, but as though it were a permanent characteristic? Whether the language is “developing”, “beginning” or “below”, might not the effect be the same as with levels, or worse?

As for undue pace, surely by demanding steps of progress again, we’ve simply replicated the same old problems? In fact, I’d argue that we’ve worsened them. Having got so used to the APP model, many schools have now adopted a system that requires the recording of endless theoretically-formative judgements in order to reach a summative point score or category. Once again the risk is that pupils near to thresholds will become the focus, rather than those who most need additional support, and that moving more quickly through the steps will be seen as positive, neglecting the need to secure understanding and skills.

So have we solved the problem of the different meanings of levels? Sadly, the same symptoms are evident: schools and tracking companies have tried to replicate old systems. Rather than focussing on what children can and can’t do, too much time and energy is focussed on predicting the resulting summative judgement. It’s true that the new interim assessment frameworks remove the ‘best-fit’ judgement issue (although I’m not convinced that’s a good thing!), but we still have many systems that focus on using a best-fit approach to summarise judgements using a category label. If you need to get a certain number of ticks to be placed in a particular category, then surely we might as well have stuck with APP?

So what’s the solution?

I’m increasingly coming to the view that our first task should be to separate formative and summative assessment entirely. The current systems just aren’t working.

Plus ça change… plus c’est la même chose

The French phrase seems entirely fitting when talking about tackling ‘assessment without levels’. Increasingly it has become clear that, having seen levels roundly rejected by experts like Tim Oates and even by the DfE itself, most schools have found themselves re-creating a system in their image. And so it was that I set out to survey a not-entirely-scientific group of Twitter users about their tracking systems.

In fact, I was initially pleasantly surprised by the results of my little poll. Firstly, the easy bit – what tracking programs are schools using? Obviously, on a relatively small sample (325) taken from a poll on Twitter, this isn’t entirely representative, but it may be indicative:

[Chart: tracking products in use]

It’s clear that there are some very popular products, but it’s interesting to see that 6% of responding schools had designed their own system, and over 10% had no system at all. It isn’t clear, of course, whether this 10% have made a decision not to buy something in, or simply haven’t yet decided which product to purchase.

Removing those who had indicated that they had no tracking system, I then looked with interest at the progress measures used. My fear had been that most schools would have replaced the old system of 1½ sub-levels / 3 APS points a year with something very similar. It was for that reason that I was so pleasantly surprised to find the most popular response was that systems required no set number of steps each year. However, that was closely followed by the 3- and 6-step models:

[Chart: expected progress steps per year]

In fact, when I looked more closely, it soon became clear that steps have remained the dominant model, and the familiar ‘one step per term/half-term’ approach remains the most popular. Indeed, this approach accounts for almost half of those who gave an answer, with steps models making up around two-thirds of responses altogether:

[Chart: breakdown of steps models]

In many ways I was reassured by the third of responses indicating that there was no fixed number of steps expected each year. Of course, this may mask systems where people hadn’t realised that steps would become a factor; but interestingly, just by asking, I also attracted attention from users and producers of tracking systems, who explained that while their systems allowed step-counting, they did not compel it. Indeed, some of the “none” responses indicated that although the system had it as an option, their school had chosen not to use it.

So perhaps we’re seeing the start of a change? The Classroom Monitor twitter feed offered a glimmer of hope:

[Embedded tweet from the Classroom Monitor account]

It seems that – as is inevitably the case – providers initially created products that matched schools’ desires for something familiar. But perhaps, now, there will be an opportunity to wean schools off such approaches? Perhaps.

But in the meantime, it seems that a lot of schools have replaced a system of points and sub-levels with something that looks alarmingly familiar.

As I’ve said before: to my mind, the Assessment Commission cannot report soon enough. Let’s hope it puts to bed some of the myths that make schools feel compelled to adopt such systems.

In praise of tracking software*

*Not all tracking software will be praised in this blog.

I repeatedly recite my mantra that tracking is not the same as assessment. For years our assessment processes in schools have played second fiddle to the demands of tracking by sub-levels and even sub-sub-levels! The opportunity provided by the scrapping of levels allowed us to move away from that, and I have also been enthusiastic about the use of Key Objectives (or Key Performance Indicators) to record assessment of what has (or has not) been learned, rather than grouping children by score or band.

Whenever I speak to individuals, schools or whole authorities, I am always keen to stress the importance of deciding on an assessment model before trying to buy in any tracking software. Putting tracking first is the wrong way round, in my opinion. And so it was, over the past few months, that I came to be looking for a means of tracking children’s progress against the Key Objectives using something more advanced than my own simple spreadsheets.

As I’ve said to many people, being clear about our intentions and principles for assessment meant that tracking suppliers had a hard job to promote their tools to us: we knew exactly what we needed and if they couldn’t provide it then we wouldn’t buy.

So it was something of a surprise to stumble across an excellent model from a brief twitter conversation. Matt Britton of Molescroft Primary in Yorkshire posted about their new tracking software (FLiC Assessment) back in February. At first, if I’m honest, I was fairly dismissive as it had been designed to work mainly on tablets. However, within weeks the laptop version was available and I was bowled over.

Two months later and our staff have had their first opportunity to start recording judgements on the software and it’s achieving exactly what we’d hoped.

Some of the key principles I have about assessment are too often not met by packages produced by the big commercial providers. I don’t want children to be lumped into categories like “Beginning 6” or “Secure 4”. These replicate some of the biggest issues with the old system of levels and fail to really record the important detail of assessment.

What I want from tracking software in the first instance is the ability to identify what children can and can’t do, where the gaps are, what interventions are needed and what their next steps might be. Allocating labels obscures all that data. What I like about FLiC is that it is driven by the first principle of recording success against specific objectives.

What I like more is its flexibility. The software comes with over 2000 objectives that could be used to assess children through the primary age range across all subjects. Using the principles of my Key Objectives we’ve already cut that by more than half. It also provides the opportunity to assess each objective at one of up to five different levels; we’ve decided on only three. We even had the choice of what colour our ticks are!

[Screenshot: FLiC assessment grid]

Now when teachers want to make assessments for a whole class against a Key Objective, it can be done in as little as one click. We can see at a glance what percentage of children are secure in any given area, or which areas are stronger and weaker in any class, set or cohort. Yet at no point do we have to attach meaningless labels to pupils.

Of course, the purpose of tracking software is to be able to analyse data, and FLiC allows that too. We can compare groups, genders and classes, and also compare across dates. Need to know if children are making progress within year groups? Simply compare today’s data with that from last term, or from September. Need a measure of progress between year end-points? Look at the proportion of children who are securing a given proportion of objectives.
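FLiC’s internals aren’t public, so the sketch below is purely illustrative of the kind of comparison being described – the data structures and function names are mine, not FLiC’s:

```python
# Entirely illustrative: FLiC's real data model and API are not public.
# This sketches the kind of progress measures described above, working
# directly from objective-level judgements with no intermediate labels.

# A snapshot maps pupil -> {objective: True if secure}
Snapshot = dict[str, dict[str, bool]]

def percent_secure(snapshot: Snapshot, objective: str) -> float:
    """Percentage of the class secure in a single objective."""
    results = [objs[objective] for objs in snapshot.values() if objective in objs]
    return 100 * sum(results) / len(results) if results else 0.0

def on_track(snapshot: Snapshot, required: float = 0.85) -> set[str]:
    """Pupils securing at least the given proportion of their objectives."""
    return {name for name, objs in snapshot.items()
            if objs and sum(objs.values()) / len(objs) >= required}

september: Snapshot = {
    "Ann": {"column subtraction": False, "speech marks": True},
    "Ben": {"column subtraction": True, "speech marks": True},
}
today: Snapshot = {
    "Ann": {"column subtraction": True, "speech marks": True},
    "Ben": {"column subtraction": True, "speech marks": True},
}

print(percent_secure(today, "column subtraction"))      # 100.0
print(len(on_track(today)) - len(on_track(september)))  # 1 more pupil on track
```

The point is that both measures fall straight out of the raw judgements; no label like ‘3b’ or ‘Emerging’ is needed along the way.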

Teachers used to the easy measures of 3 points progress and sub-levels might find the change confusing at first, but what got me excited about using FLiC was exactly that: it doesn’t try to re-create the old discredited system. Rather, it allows schools to select what is important to assess, and for teachers to make judgements in a meaningful way. The tracking element occurs as a result of the assessment, not the other way round.

Once we’d decided we wanted to buy into FLiC, we got it up and running straight away, as we could already see how it would drive our reporting to parents at the end of the year with its printable objectives and assessments.

Now, no software is perfect. At the moment, organising children into class groups has to be done manually, which for my 300+ pupil school was fine, but which I suspect I’d have found a chore in my former 800-pupil establishment. Similarly, in the long run, I’d love to see it produce scaled-down reports that we could put in children’s books more regularly. But I can see that happening. As I set up our version of the product, I hit quite a few bumps in the road (mostly where I had rushed ahead foolishly), yet they were quickly resolved by technical support from someone in the know, and educational support from Matt Britton.

At the risk of sounding like a paid promotion (all cash bonuses welcome, of course FLiC team!), I couldn’t speak more highly of what FLiC has achieved. I feel like someone has taken what Tim Clarke started with the simple spreadsheet trackers we had, and brought it to life.

Perhaps the most powerful indicator for me was the response from colleagues when they were introduced to the software. Understandably hesitant at first glance, by the end of the first day of use colleagues were coming to tell me that they’d used the data output from the programme to identify key areas of focus for teaching next term. And surely that’s what assessment should be all about?

The Gillette problem in Education

When Dave Gorman launched his second series of “Modern Life is Goodish”, he did so with a trailer mocking the increasing number of blades attached to our razors.

[Video: the Dave Gorman razor trailer]

The whole thing’s very amusing when it’s dealing with the humdrum of shaving life. But this same inflation appears to be infiltrating our education system as increasingly complex systems of assessment become available. And the DfE are at least in part to blame.

The DfE’s recommendations for an end-of-key-stage assessment system are to replace the simple system of 4 main levels of outcome (cunningly named level 3, level 4, level 5 and level 6) with 5 descriptors which seem to cover a narrower range of ability. But to what end? Why do we need to differentiate between children in this way at the age of 11?

The government’s preferred “mastery approach” to teaching suggests that we should be focussing on ensuring that almost all children meet the expected standard – so why the need for a further four categories of attainment (not to mention those that fall below them)?

The only explanation I can find is league tables. Just as I suspect that 5-bladed razors are not significantly more efficient than the old Mach 3, so I rather suspect that 5 descriptors will be no more useful to schools or students than the 3 we had just 3 years ago!

Of course, to create league tables you need measures that can produce a whole host of differentiation. And so, up and down the country, schools are losing interest once more in assessment, and returning their focus to tracking: how will we show progress? How will we show when children are making expected progress, and more than expected progress? Because for all its talk of freeing up teachers to focus on what matters, the reality is that the department is only interested in measurable outcomes that can produce graphs – some to blame predecessors, and more to claim improvements.

It’s simple to split children into 5 groups when you have a scaled score system. So what if the chances of scoring 100 or 110 on a test are more to do with the luck of the questions than the underlying ability of the student? It’s easy all the same to say that the child scoring 105 is doing better than the child scoring 100. To heck with the reality.

Can we really honestly say that we can split 11-year-olds into more than 5 measurable groups of writers? Groups which are significantly narrower than our current L3/4/5 bands. The level descriptors manage it through the use of weasel words. We are asked to differentiate between children who “make appropriate choices of grammar and vocabulary to clarify and enhance meaning” and those who “make deliberate choices of grammar and vocabulary to change and enhance meaning”, not to mention the separation of those who make “judicious choices”.

And if we do make such judgements… to what end?

The only possible reason for having so many descriptors, so many imagined levels, is to provide numerical data for league tables. It has nothing to do with teaching and learning (which after all needs a focus on assessment, rather than tracking).  It is only to do with trying to judge schools, and providing room for children to “exceed expected progress”.

And as long as the DfE demands it at the end of Key Stages, tracking software companies will recreate the nonsense for all the intervening years. And so all the benefits of removing levels are quickly replaced with an increasingly complex, increasingly unreliable and uninformative set of spreadsheets. No longer is the judgement about one level every 2 years, or even 2 sub-levels each year. No, now we can choose from one of 5 categories every year – or in some cases 6, to ensure that one can be measured each half-term.

And if that isn’t enough to persuade you that the Performance Descriptors are no good for anything, then there’s no hope!


If you’re reading this before 5pm on Thursday 18th December, you’ve still got time to log on to the DfE consultation on the descriptors and tell them how awful they are. Please do.

Designing an assessment model the right way

I’ve been prolific in my complaints about schools buying into systems for assessment which focus on tracking rather than assessment, pander to the myths of levels, or re-introduce burdensome approaches like APP. Every time, quite reasonably, several people ask me via Twitter: what are you doing?

I do my best to reply, but the reality is that what works for my school is not necessarily right for everyone. That said, I have shared the Key Objectives on which our model is based. However, what I really want to advise people to do is to access the NAHT materials which set out how to build a really effective model. Unfortunately, while I think the materials themselves are excellent, the problem seems to be that the NAHT has neither promoted them, nor made them particularly accessible. So here’s my attempt to do so.

The NAHT framework for assessment

The NAHT model is broadly the same as that which led to my Key Objectives, although notable for its brevity in terms of objectives. There are a few key principles that underpin it, which include:

  • The assessment should link closely to the taught curriculum
  • Not everything that is taught should be assessed (note Dylan Wiliam’s point about this)
  • Key Performance Indicators [KPIs] should be selected for each year group and subject, against which teachers can make assessments.
  • End-of-year descriptors, based on the KPIs, can be used for more summative judgements
  • The whole process should include in-school, and where possible, inter-school moderation

All of these things strike me as very sensible principles. The NAHT team which put together the materials to support this model went to some lengths to point out that schools (or groups of schools) may want to adapt the specifics of what is recorded for tracking purposes, but to support schools in doing so they have also provided examples of Key Performance Indicators for each year group and core subject area. These can be downloaded (rather frustratingly only one at a time!) from the NAHT website – regardless of whether or not you are a member.

The theory, then, is that assessment can take place throughout the year against specific objectives, rather than simply allocating children to meaningless code groups (‘3c’, ‘developing’, ‘mastery’, ‘step 117’, etc.). Over the course of the year, teachers and pupils can see progress being made against specific criteria, and can clearly identify those which still need to be covered. Similarly, at the end of each year, it is possible to make a judgement in relation to the overall descriptor for the year group. Schools may even decide to have a choice of descriptors if they really wish.

Annual tracking of those who are, and are not, meeting the performance standard for the year group can be kept, with intervention targeted appropriately.

There are several advantages to the NAHT system: firstly, it provides a sensible and manageable approach to assessment that can actually be used to support progress as well as meaningful tracking; secondly, it doesn’t create unnecessary – or unrealistic – subdivisions or stages to give the impression of progress where none can reasonably be measured. Perhaps most importantly, it also provides a ‘safety in numbers’ approach for schools who fear that Ofsted will judge them on their choice. As a reputable professional organisation, the NAHT is a good backbone for any system – much more so than relying on the creations of data experts who, while clearly invaluable in building tracking and analysis software, are not necessarily experts in education themselves.

The aspect which seems to worry colleagues about approaches such as mine and the NAHT’s is that they don’t offer easily “measurable” (by which they usually mean track-able) steps all through the year. The fear is – I suspect – that it wouldn’t be possible to ‘prove’ to Ofsted that your assessments were robust if you didn’t have concrete figures to rely on at termly, or six-weekly, intervals. Of course, the reality is that such figures were always nonsense, and it’s important that we recognise this as a profession. The robustness comes from the assessment and moderation approaches, not the labelling. The easy-steps approach serves only to obfuscate the actual learning for the benefit of the spreadsheet. We need to move away from that model. Through internal and inter-school moderation, we can have confidence in our judgements part-way through a year, and can improve our professional understanding of our children’s learning at the same time.

Of course, plenty of software companies will have come up with clever gadgets and numbers and graphs to wow school leaders and governors – that is their job. But the question school leaders should really be asking software companies is not “what are you offering?”, but “what are you building that will match our requirements?”

I notice this week that the latest release of Target Tracker includes an option for filtering to show the NAHT Key Performance Indicators. Infomentor offers a similar option, which also allows schools to link the objectives directly to planning. They also have a setup where schools can opt for my Key Objectives instead if they prefer (which offer slightly more detail). David Pott has already demonstrated how SIMS can be used to track such assessments.

The options are out there, and schools should be looking for tracking systems that fit with good educational principles, not trying to tack the latter on to fit with the tracking system they’ve got.


The NAHT does have a video available which summarises their approach rather well, if in a rather pedestrian manner. Available here: https://www.youtube.com/watch?v=M2aK3Rs2IJQ

The evil offspring of APP

It’s not often I quote the words of education ministers with anything other than disdain, but just occasionally they talk sense. Back in April, Liz Truss explained the ‘freedoms’ being given to schools to lead on assessment between key stages, and commented on the previous system of APP. She described it as an “enormous, cumbersome process” that led to teachers working excessive hours; a system that was “almost beyond satire, […] requiring hours of literal box-ticking”.

Not everybody agreed with the scrapping of levels, but the recent massive response to the Workload Challenge has shown that if there is one thing that teachers are in agreement about, it is the excessive workload in the profession. Now at least we had a chance to get rid of one of those onerous demands on our time.

And yet…

Just this evening I came across two tracking systems that have been produced by private companies and appear to mimic and recreate the administrative burden of APP. What’s more, they seem to have managed to take the previously complex system, and add further levels of detail. Of course, they attempt to argue that this will improve assessment, but our experience tells us that this is not the case.

As Dylan Wiliam quite rightly said in the first principle in his excellent article in Teach Primary magazine:

A school’s assessment system could assess everything students are learning, but then teachers would spend more time assessing than teaching. The important point here is that any assessment system needs to be selective about what gets assessed and what does not…

The problem with the new models which attempt to emulate APP is that they fail in this. They’re trying to add a measure to everything and so suggest that they are more detailed and more useful than ever before. But the reality is that this level of detail is unhelpful: the demands of time outweigh the benefits.

Once again, too many school leaders are confusing assessment with tracking. The idea that if we tick more boxes, then our conclusions will be more precise is foolish. If three sub-levels across a two-year cycle was nonsense, then three sub-levels every year can only be worse. Just because the old – now discredited – system allocated point scores each year doesn’t mean that we should continue to do so.

Assessment is not a simple task. By increasing the volume of judgements required, we reduce teachers’ ability to do it well: we opt for quantity over quality. We end up with flow-charts of how to make judgements, rather than professional dialogue of how to assess learning. We end up with rules for the number of ticks required. As Wiliam also says:

Simplistic rules of thumb like requiring a child to demonstrate something three times to prove they have ‘got it’ are unlikely to be helpful. Here, there is no substitute for professional judgement – provided, of course, ‘professional’ means not just exercising one’s judgement, but also discussing one’s decisions with others

If you’re a headteacher who has brought in a system (or more likely, bought into a system) which implies that progress can be measured as a discrete level (or stage, or step) every term, that asks teachers to assess every single objective of the National Curriculum (or worse, tens of sub-objectives too!), or that prides itself on being akin to APP, then shame on you. There’s no excuse for taking an opportunity where the department itself points out that teachers are being expected to do an unreasonable amount of work, and replacing it with a larger load.

If you’re a teacher in a school that has adopted one of these awful systems, then I can only commiserate. Might I suggest that you print off a copy of this blog, and slide it under your headteacher’s door one night? I’d also highly recommend adding Dylan Wiliam’s article to it.

We need our school leaders to lead – not just repeat the mistakes of the past.



Teach Primary magazine

It’s only right that I confess that I write an article for each issue of Teach Primary and so couldn’t fairly be said to be completely impartial. That said, I do think it’s well worth subscribing, if only for gems like Wiliam’s article and others that come up each issue, along with resources, ideas and wisdom from actual teachers and leaders. http://www.teachprimary.com/

Tracking: a need-to-know operation

As schools we’ve become experts in tracking. A whole industry has grown up around it, and you can buy software to create a graph of just about anything. But as I’ve said many times before, there is a big difference between tracking and assessment. Assessment is at the very core of what schools should be about. Tracking, on the other hand, is simply a tool for keeping an eye on things.

A discussion on Twitter tonight – part of the #ukgovchat session – made me particularly aware of our addiction to tracking. Governors were, quite rightly, wondering what they needed to know about how schools are moving to new assessment systems, and whether they ought to insist on keeping levels for an overlapping period.

My contribution was to suggest that governors start from the point of what they actually need to know. Schools now produce far more data than any individual or group of governors could hope to get a grasp of. But the point is, that’s not their role. And here’s the thing – we can track all manner of things, but perhaps we need to take tracking back to a simple system that provides only what we need to know.

So who needs to know what?

Governors

For a typical governing body, there are only a few bits of useful information that can reasonably be monitored. Obviously the end-of-key-stage results are key. RaiseOnline does its thing here and provides more than enough detail for anybody. As for other year groups – the needs are limited. Governors need to have a strategic overview, so for the most part it should be sufficient for them to know what proportion of children are on-track to achieve expected and above-expected outcomes at the end of the Key Stage. This might include break-downs by groups (Pupil Premium, sex, etc.) but the big picture figures are limited to only two or three categories.

School leaders

For the most part, the same data as governors see will be sufficient for school leaders. Where more detail is required – perhaps because a particular department, teacher, or group of pupils appears to be under-performing – then it should be provided by the teachers with responsibility for those children. For example, if leaders need to know which students are being targeted for accelerated progress, then this should come from the teachers who know them, not from scanning lists of sub-levels. It is these practices that lead to the nonsense of “over-achieving” students then being targeted for further accelerated progress, rather than careful focus on the groups most in need.

Teachers

Teachers have almost no need for tracking. Their focus should be on assessment – relating progress directly to the learning and curriculum, not to broad categories and sub-levels. Inevitably there will be occasions where such assessment is used to inform tracking, but usually at this stage it loses any nuance and detail that would be useful to a class teacher.

Students

Arguably the most important recipients of assessment/tracking information, making it all the more shocking that I forgot these ‘stakeholders’ initially (see comments). Students have a keen interest in their progress, and should be supported to understand their attainment and targets. However, as I have said many times before, sub-levels did not achieve that. Students have a right to clarity about what they are doing well, and specific areas for improvement; that comes back to high quality assessment in the classroom. They may also be interested in their tracking data – knowing whether or not they’re on-track to meet expected (or higher) levels, but these should be secondary to the specifics of assessment.

Ofsted

Ofsted don’t need to see any in-school data. They get plenty of detail in Raise, and merely need to satisfy themselves that school leaders have a good grip on the progress of students in other year groups, ensuring that all students make appropriate progress. As the organisation itself told us this week: inspectors should not expect performance- and pupil-tracking data to be presented in a particular format.

Tracking ≠ Assessment

None of this implies that no further detail is required at all. An essential part of the teacher’s job is to ensure that children are making progress and that teaching is targeted to close gaps and raise attainment for all. But none of that is linked to tracking, and we’d all do well to remember that!

Tracking Grids for Key Objectives

After much discussion in the last week, particularly with experts in the data field, I have tried to adapt the Key Objective spreadsheets put together by Tim Clarke to allow:

  • one document to contain all the tracking for a single class for the core subjects
  • a quick summary of the numbers/percentage of students meeting the expected standard

So far I have put together documents for Years 1 to 6. Each spreadsheet contains a page for each of Reading, Writing, Maths and Science, with the objectives listed. By entering the names along the top row, teachers can then enter 1, 2 or 3 against each objective to indicate that students are working towards / meeting / exceeding that specific objective. These cells automatically change colour for quick visual representation.

[Screenshot: Year 1 tracking spreadsheet]

In addition, at the foot of the page, a simple summary indicates whether students are working towards, meeting or exceeding the expected level for their age. It also provides a quick count/percentage summary of the whole class.

On the final page of the spreadsheet, a whole curriculum overview is also available, which also shows the percentages of students on track to meet or exceed the expected level for their age.

[Screenshot: whole-curriculum summary page]

The challenge at this stage is in setting the appropriate thresholds to determine the categories of attainment (as well as the names of those categories), as different schools are likely to want to try different approaches, at least initially. Consequently, I have also included a settings page which allows schools to adjust the specific percentages of objectives that need to be met/exceeded for each overall grade to be awarded. It also allows those categories to be renamed to suit a school’s model:

[Screenshot: settings page]

Finally, the spreadsheet also allows the values to be adjusted for each term. This means that schools can select the standard 85% threshold for ‘meeting the expected standard’, but have it automatically adjusted by thirds to allow for the fact that fewer objectives will have been taught by the end of the Autumn and Spring terms. Thus, by selecting the Autumn term and requiring 85%, the spreadsheet will automatically adjust the threshold to 28% to assess progress up to that point.
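For anyone curious how that might work outside the spreadsheet, here’s a minimal sketch of the summary logic in Python. The 1/2/3 coding and the 85% threshold come from the description above; the term fractions, function names and the separate ‘exceeding’ threshold are illustrative assumptions, not the spreadsheet’s exact formulas:

```python
# Sketch of the summary logic: 1 = working towards, 2 = meeting,
# 3 = exceeding an objective; 0 stands for a blank (not yet assessed) cell.

TERM_FRACTION = {"Autumn": 1 / 3, "Spring": 2 / 3, "Summer": 1.0}

def adjusted_threshold(annual: float, term: str) -> float:
    """Scale an annual threshold by the proportion of the year taught so far."""
    return annual * TERM_FRACTION[term]

def overall_grade(scores: list[int], term: str,
                  meeting: float = 0.85, exceeding: float = 0.85) -> str:
    """Summarise one pupil's objective scores into an overall category."""
    total = len(scores)
    met = sum(1 for s in scores if s >= 2)       # meeting or exceeding
    exceeded = sum(1 for s in scores if s == 3)  # exceeding only
    if exceeded / total >= adjusted_threshold(exceeding, term):
        return "Exceeding"
    if met / total >= adjusted_threshold(meeting, term):
        return "Meeting"
    return "Working towards"

# The 85% annual threshold becomes 85% * 1/3 = 28% in the Autumn term.
print(round(adjusted_threshold(0.85, "Autumn"), 2))             # 0.28
print(overall_grade([2, 3, 1, 2, 0, 2, 3, 1, 0, 2], "Autumn"))  # Meeting
```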

I don’t imagine that this will become a staple in hundreds of schools nationally – there are far better-equipped companies to introduce such schemes. However, hopefully it does give an indication of how the Key Objective model (supported by the NAHT) could work in practice.

There is still an issue of tracking progress across year groups, which isn’t accommodated by this spreadsheet. One solution would be to record a simple percentage score for a student each year (e.g. George Gershwin has achieved 88% of Y1 objectives). Progress over time could then be measured by comparing the annual achieved percentage. However it would be important to separate this from the assessment process. After all: Tracking ≠ Assessment
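A sketch of that cross-year measure, borrowing the example above (the names and numbers are illustrative):

```python
# Illustrative: progress across year groups measured by comparing the
# percentage of each year's objectives achieved, year on year.
annual_coverage = {  # pupil -> {year: % of that year's objectives achieved}
    "George Gershwin": {"Y1": 88, "Y2": 91},
}

for pupil, years in annual_coverage.items():
    y1, y2 = years["Y1"], years["Y2"]
    print(f"{pupil}: {y1}% of Y1 objectives, {y2}% of Y2 objectives ({y2 - y1:+d})")
```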

Downloads

The full package of spreadsheets and accompanying documents can be downloaded from here. Please do have a play around with them, and highlight any errors you spot or improvements you’d recommend. And maybe have them to hand next time a supplier tries to get you to buy their product, and make sure that their offer is significantly better!

Download full Assessment and Tracking Resource Pack by clicking here

Sample documents:

Year 5 Tracking Document

Year 6 Tracking Document

Whose data is it anyway?

I caused a bit of an upset today. As too easily happens, I saw a conversation via Twitter that raised concerns with me, and I rushed in with 140 characters of ill-thought-through response.

Some very knowledgeable experts in the field of school data management were trying – quite understandably – to get their heads round how a life after levels will look in terms of managing data and tracking in schools. As David Pott (@NoMoreLevels) put it: “trying to translate complex ideas into useable systems”.

My concern is that in too many cases, data experts are being forced to try to find their own way through all this, without the expert guidance of school leaders (or perhaps more importantly, system leaders) to highlight the pitfalls, and guide the direction of future developments. That’s not to say that the experts are working blind, but rather that they are being forced to try to work out a whole system of which they are only a part.

Of course, the problem is that without first-hand knowledge of some of those areas, the data experts are forced to rely on their knowledge of what went before. And as seems to be the case in so many situations at the moment, we run the risk of creating a system that simply mirrors the old one, flaws and all. We need to step back and look at the systems we actually need to help our schools to work better in the future. And as with all good design projects, it pays to consider the needs of the end user. Inevitably, with school data, there are always too many users!

Therefore, here is my attempt – very much from a primary perspective, although I daresay there are many parallels in secondary – to consider who the users are of data and tracking, and what their needs might be in our brave new world.

The Classroom Teacher

This is the person who should be at the centre of all discussions about data collection. If it doesn’t end up linking back to action in the classroom, then it is merely graph-plotters plotting graphs.

In the past, the sub-level has been the lot of the classroom teacher. Those meaningless subdivisions which tell us virtually nothing about the progress of students, but everything about the way in which data has come to drive the system.

As a classroom teacher, I need to know two things: which children in my class can do ‘X’, and which cannot? Everything else I deal with is about teaching and learning, be that curriculum, lesson planning, marking & feedback, everything. My involvement in the data system should be about assessment, not tracking. I have spoken many times about this: Tracking ≠ Assessment

Of course, at key points, my assessment should feed into the tracking system, otherwise we will find ourselves creating more work, but whether that be termly, half-termly or every fortnight, the collection of data for tracking should be based on my existing records for assessment, not in addition to it.

We have been fed a myth that teachers need to “know their data” to help their students make progress. This is, of course, nonsense. Knowing your data is meaningless if you don’t know the assessments that underpin it. Knowing that James is a 4b tells you nothing about what he needs to do to reach a 4a. A teacher needs to know their assessments: whether or not James knows his tables, or can carry out column subtraction, or understands how to use speech marks. None of this is encapsulated in the data; it is obscured by it.

My proposal is that classroom teachers use a Key Objectives model for assessing against specific objectives. Pleasingly, the NAHT appear to agree with me.

Students

Children do not need to know where they are on a relative scale compared to their peers, or to other schools nationally. What matters to children in classrooms is that they know what they can do, what they need to do next, and how to do that. All of that comes directly from teachers’ assessments, and should have no bearing on data and tracking (or perhaps, more importantly, the methods of tracking should have no bearing on a child’s understanding of their own attainment).

Too many schools have taken the message about students knowing where they are and what to do next as an indication that they should be told their sub-level. This doesn’t tell children anything about where they are, and much less about what to do next.

The School Leader

At department, year-team or senior-leadership level, it is very rarely feasible for any one person to have a handle on the assessment outcomes for individual students; that is not their role.

This is the level at which regular tracking becomes important. It makes sense for a tracking system to highlight the numbers of children in any class group who are on-track – however that might be measured. It might also highlight those who are below expectations, those who are above, or those who have made slower progress. It should be possible, again, for all of this to come from the original assessments made by teachers in collated form.

For example, if using the Key Objectives approach, collation might indicate that in one class after half a term, 85% of students have achieved at least 20% of the key objectives, while a further 10% have achieved only 15% of the objectives, and some 5% are showing as achieving less than that. This would highlight the groups of children who are falling behind. It might be appropriate to “label” groups who are meeting, exceeding, or falling below the expected level but this is not a publication matter. It is for school tracking. There is nothing uncovered here that a classroom teacher doesn’t already know from his/her assessments. There is nothing demonstrated here that impacts on teaching and learning in classrooms. It may, however, highlight system concerns, for example where one class is underperforming, or where sub-groups such as those receiving the pupil premium are underperforming. Once these are identified, the focus should move back to the assessment.
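To make the arithmetic of that example concrete, here’s a small illustrative sketch; the bands and thresholds mirror the worked example above, but nothing here is a prescribed system:

```python
# Illustrative collation of teachers' objective-level assessments into
# the broad tracking bands a school leader might monitor.
from collections import Counter

def band(fraction_achieved: float) -> str:
    """Place a pupil in a broad tracking band from their objective coverage."""
    if fraction_achieved >= 0.20:
        return "on track"
    if fraction_achieved >= 0.15:
        return "slightly behind"
    return "falling behind"

# pupil -> fraction of the key objectives achieved after half a term
class_data = {"Amy": 0.25, "Ben": 0.22, "Cal": 0.16, "Dee": 0.10}

summary = Counter(band(f) for f in class_data.values())
for label, count in summary.items():
    print(f"{label}: {count} pupils ({100 * count / len(class_data):.0f}%)")
```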

In the past, the temptation was to highlight the percentage of children achieving, say, L4, and then to set a target to increase that percentage, without any consideration of why those children were not yet achieving the level. All of these targets and statements must come back to the assessment and the classroom teacher.

Of course, senior leaders will also want to know the number of children who are “on-track” to meet end-of-key-stage expectations. Again, it should be possible to collate this based on the assessment processes undertaken in the classroom.

What is *not* required, is a new levelling system. There is no advantage to new labels to replace the old levels. There is no need for a “3b” or “3.5” or any other indicator to show that a student is working at the expected level for Year 3. Nobody needs this information. We have seen how meaningless such subdivisions become.

Of course, the devil is in the detail. What percentage of objectives would need to be met to consider a child to be “on track” or working at “age-related expectations”? Those are professional questions, and it is for that reason that it is all the more important that school and system leaders are driving these discussions, rather than waiting for data experts to provide ready-made solutions.

Ofsted

Frankly, we shouldn’t really need to consider Ofsted as a user of data, but the reality is that we currently still do. That said, their needs should be no different from those of school leaders. They will already have the headline data for end-of-key-stage assessments. All they should need to know from internal tracking and assessment is:

  1. Is the school appropriately assessing progress to further guide teaching and learning?
  2. Is the school appropriately tracking progress to identify students who need further support or challenge?

The details of the systems should be of no concern of Ofsted, so long as schools can satisfy those two needs. There should be no requirement to produce the data in any set form or at any specific frequency. The demands in the past that schools produce half-termly (or more frequent!) tracking spreadsheets of levels cannot be allowed to return under the new post-levels systems.

Parents

Parents were clearly always the lost party in the old system, and whether or not you agree with the DfE’s assessment that parents found levels confusing, the reality is that the old system was obscure at best. It told parents only where their child was in a rough approximation of comparison to other students. It gave no indication of the skills their child had, or their gaps in learning.

For the most part, the information a parent needs about their child’s learning is much the same as what their child needs: the knowledge of what they can and can’t do, and what their next steps are. Of course, parents may be interested in a child’s attainment relative to his/her age, and that ought to be evident from the assessment. Equally, they may like to see how their child has progressed, and again, assessment against key objectives demonstrates that amply.


So where next?

We are fortunate in English schools to be supported by so many data experts with experience of the school system. However, they should not – indeed, they must not – be left to try to sort out this sorry mess alone. School leaders and system leaders need to take a lead. Schools and their leaders need to take control of the professional discussions about what we measure when we’re assessing, and about what we consider to be appropriate attainment based on those assessments. Only then can the data experts who support our schools really create the systems we need to deliver on those intentions.