
What does the expected standard look like?

The DfE won’t have complete performance descriptors available until September, but today they did release the test frameworks for the creators of the National Curriculum tests. And like it or not, we know that the tests will be the all-important markers of attainment at the end of KS2, so the content of the performance descriptors for the tests is important.

So, what will children be expected to be able to do in order to reach the “expected standard” in the KS2 tests? Perhaps it’s easier to pick out some of the content which is not likely to be required. The performance descriptors within the test framework (which are expressly not intended for teacher assessment) outline the “typical characteristics of pupils whose performance in the key stage 2 tests is at the threshold of the expected standard”.

Notably for KS2 maths, the performance descriptor doesn’t contain any mention of the following elements of the Year 5/6 mathematics curriculum:

  • Use the rules of order of operations
  • Identify prime numbers (other than knowing those up to 20)
  • Multiply/divide fractions*
  • Convert between metric and imperial units
  • Calculate the area of parallelograms and triangles
  • Calculate, estimate and compare volume of cuboids
  • Illustrate and name parts of circles
  • Recognise vertically opposite angles
  • Recognise and use cube numbers
  • Read Roman numerals (other than on clocks)

That’s not to say that none of these things will come up on the tests. Simply that they don’t feature in the performance descriptor for threshold of “the expected standard”. Presumably, therefore, such things are indicative of students working at a higher level than just meeting the expected standard.

This isn’t to suggest that such things needn’t be taught. Far from it. But it’s worth knowing that in that unmanageable list of things that need to be covered by the end of KS2 (particularly challenging for the current Y5s who’ve had less than a year on the new curriculum), there are some which are perhaps marginally less vital than others!

It’s interesting to make a comparison to the performance descriptor which appears in the Grammar, Punctuation and Spelling test framework. Here virtually the entirety of the Year 6 curriculum content features in the ‘expected standard’ descriptor. The only things I can see that are not included are the subjunctive form and the use of brackets. It doesn’t seem to leave higher attainers much room to demonstrate their additional skill.

It’s interesting. I make no promises though. What you do with this detail is entirely up to you, and I accept no responsibility. If you want to compare the criteria yourself, the National Curriculum content can be found at www.primarycurriculum.me.uk, and the test frameworks at www.gov.uk/dfe


* The performance descriptor for the tests does include the suggestion that pupils should be “becoming more confident with more complex fraction calculations” – without defining what this means. 


When is a question not a question?

New test frameworks were published on the GOV.UK website today, setting out the requirements for National Curriculum tests from 2016 onwards. And they contain a couple of surprises for those of us who consider ourselves to be speakers of the English language.

For it appears that no longer is a question defined as “A sentence worded or expressed so as to elicit information”, as your old-fashioned ‘dictionary’ might imagine, but rather as a specifically-structured sentence that meets the needs of test writers.

“Nonsense!!” you might exclaim. Except that’s not an exclamation either.

“Not an exclamation?” you might ask. But you’d be wrong to do so, since that is no longer a question.

For it seems that the DfE have deemed that exclamations must begin with the word “How” or “What”. So while your dictionary might think that an exclamation is a sudden remark or cry, and while the world at large might include “Hello!”, or “Utter nonsense”, in this group, it seems that we have all been grouping them in error.

[Screenshot: the test framework’s definition of an exclamation]

So rather than teaching children the real meaning of the word, or bothering the good people at Oxford dictionaries with your queries, remember now that exclamations begin with “how” or “what”. No further questions necessary. (And don’t ask what type of sentence that last one was: it doesn’t exist!)

And as for questions… if you thought that intent was what made a question, you’re quite wrong. Questions are only formed in one of three ways.

[Screenshot: the test framework’s definition of a question]

So if you were thinking of teaching any meaningful understanding of what a question is, stop yourself right away. That is not your place. Your role is to teach the testable definitions. Now, behave!


If you want to put yourself through reading the test frameworks, you can find them after much hunting on the DfE part of the gov.uk website:

https://www.gov.uk/government/collections/national-curriculum-assessments-test-frameworks

Plus ça change… plus c’est la même chose

The French phrase seems entirely fitting when talking about tackling ‘assessment without levels’. Increasingly it has become clear that, having seen levels clearly rejected by experts like Tim Oates and even by the DfE themselves, most schools have found themselves re-creating a system in their image. And so it was that I set out to survey a not-entirely-scientific group of twitter users about their tracking systems.

In fact, I was disappointed to be pleasantly surprised by the results of my little poll. Firstly, the easy bit – what tracking programs are schools using? Obviously, on a relatively small sample (325) taken from a poll on Twitter, this isn’t entirely representative, but may be indicative:

[Chart: tracking products in use]

It’s clear that there are some very popular products, but interesting to see that 6% of responding schools had designed their own system, and over 10% had no system at all. It isn’t clear, of course, whether this 10% have made a decision not to buy something in, or simply haven’t yet decided which product to purchase.

Removing those who had indicated that they had no tracking system, I then looked with interest at the progress measures used. My fear had been that most schools would have replaced the old system of 1½ sub-levels / 3 APS points a year with something very similar. It was for that reason I was so pleasantly surprised that the most popular response from the survey was that systems required no set number of steps each year. However, that was closely followed by the 3- and 6-step models:

[Chart: expected steps of progress per year]

In fact, when I looked more closely, it soon became clear that steps have remained the dominant model, and the familiar ‘one step per term/half-term’ approach remains the most popular. Indeed, this approach accounts for almost half of those who gave an answer, with steps models making up around 2/3 of responses altogether:

[Chart: breakdown of the steps-per-year models]

In many ways I was reassured by the 1/3 of responses indicating that there was no fixed number of steps expected each year. Of course, this may mask systems where people hadn’t realised that fixed steps would become a factor, but interestingly, just in asking, I also attracted attention from users and producers of tracking systems, who explained that while their systems allowed fixed steps, they did not compel them. Indeed, some of the “none” responses indicated that although the system had it as an option, their school had chosen not to use it.

So perhaps we’re seeing the start of a change? The Classroom Monitor twitter feed offered a glimmer of hope:

[Embedded tweet from Classroom Monitor]

It seems that – as is inevitably the case – providers initially created products that matched schools’ desires for something familiar. But perhaps, now, there will be an opportunity to wean schools off such approaches? Perhaps.

But in the meantime, it seems that a lot of schools have replaced a system of points and levels with something that looks alarmingly familiar.

As I’ve said before: to my mind, the Assessment Commission cannot report soon enough. Let’s hope it puts to bed some of the myths that make schools feel compelled to adopt such systems.

Dear Parents…

Dear Parents,

When you receive your child’s report this year, things might not look as clear as they once did. Having spent years getting your head around levels and sub-levels, I’m afraid they are no more. And as much as this might come as a shock to you, believe me, we as a profession were no more prepared for it.

It comes at a time when – as you’ll know – so much else has changed in our schools. Teachers the length and breadth of the country have been doing our utmost to provide the smoothest and most effective transition for your child as we move from one national curriculum to another, but it hasn’t been easy.

It means that when you receive the report on the attainment of your child at the end of this academic year, the picture may look very different from the past. Children who were comfortably on track for their age will suddenly and unexpectedly appear to be falling behind. Those who were flying high may seem no longer to be.

Your child’s school may well try to explain this in its covering letter. Please be reassured that they are not simply covering their backs, or trying to paper over cracks. The reality is that the goalposts have moved so significantly that it has been impossible to keep on track. Your child may well have made excellent progress this year, and yet still be showing as not yet attaining the required standard.

Treat that with the caution it deserves.

Let me illustrate with an example. In the past, KS2 children who were achieving well in maths might have explored the notion of probability, allocating fractions to likelihoods of events and working out the chance of things occurring. All of that work is now ignored: the new curriculum does not include it, and so the attainment scores will not recognise it. That your child may well have excellent knowledge and skills in this area would count for nothing.

Instead, those same children are now expected quickly to fit in three years’ worth of fractions work that never previously existed at this stage. Content that was previously covered in Years 7 and 8 is suddenly expected of our 10-year-olds. The issue repeats across subjects and age ranges.

Be reassured too, that as a profession we don’t warn you of these things because we have low expectations or don’t want to strive for these new challenging goals. Already schools are doing their utmost to fill those gaps, to adjust their curricula, to provide the extra direction and support pupils need. But Rome wasn’t built in a day. And similarly, a four-year Programme of Study cannot be covered in 30 weeks.

In time, all of our children will work through the national curriculum at the expected rate, and numbers of children working at the expected standard will rise. This won’t be a reflection of some brilliant work achieved by the government, but rather of teachers adjusting what they teach to meet the new requirements.

So apologies, parents. We recognise that it’s confusing, indeed worrying in some cases. We’ve been confused and worried too. Doubtless your child’s teacher will be able to reassure you of the progress they have made this year, and their school will be able to explain how they’ve set out to change things to meet the new requirements.

But this year more than ever, I’d urge you not to panic when you see the score, or tick-box, or highlighted grade. Take time to read the paragraphs so carefully drafted by your child’s teacher that highlight what your child has achieved and where they need to go next.

There is no need to presume that anyone has failed your child. As ever, teachers will be doing their best to provide the best possible education within the parameters set by the government. If you have worries, then of course, ask. As a profession we don’t yet have all the answers (we’re still waiting, too!). But the teachers who work with your child know much more about them than any grade, score or tick-box will ever tell you.

So read the report, take note of the assessments, but most importantly, think back to how your child has grown this year, and what they now know and can do that is new to them and to you. And share with them your pride in what they have achieved.

Let us do the worrying about how we pull together the curriculum to meet their needs: we promise – we’re experts at it.



Teachers tackling the new curriculum and its assessment may find my free resources useful.

In praise of tracking software*

*Not all tracking software will be praised in this blog.

I repeatedly recite my mantra that tracking is not the same as assessment. For years our assessment processes in schools have played second fiddle to the demands of tracking by sub-levels and even sub-sub-levels! The opportunity provided by the scrapping of levels allowed us to move away from that, and I have also been enthusiastic about the use of Key Objectives (or Key Performance Indicators) to record assessment of what has (or has not) been learned, rather than grouping children by score or band.

Whenever I speak to individuals, schools or whole authorities, I am always keen to stress the importance of deciding on an assessment model before trying to buy in any tracking software. Putting tracking first is the wrong way to go, in my opinion. And so it was, over the past few months, that I came to be looking for a mode of tracking children’s progress against the Key Objectives using something more advanced than my own simple spreadsheets.

As I’ve said to many people, being clear about our intentions and principles for assessment meant that tracking suppliers had a hard job to promote their tools to us: we knew exactly what we needed and if they couldn’t provide it then we wouldn’t buy.

So it was something of a surprise to stumble across an excellent model from a brief twitter conversation. Matt Britton of Molescroft Primary in Yorkshire posted about their new tracking software (FLiC Assessment) back in February. At first, if I’m honest, I was fairly dismissive as it had been designed to work mainly on tablets. However, within weeks the laptop version was available and I was bowled over.

Two months later and our staff have had their first opportunity to start recording judgements on the software and it’s achieving exactly what we’d hoped.

Some of the key principles I have about assessment are too often not met by packages produced by the big commercial providers. I don’t want children to be lumped into categories like “Beginning 6” or “Secure 4”. These replicate some of the biggest issues with the old system of levels and fail to really record the important detail of assessment.

What I want from tracking software in the first instance is the ability to identify what children can and can’t do, where the gaps are, what interventions are needed and what their next steps might be. Allocating labels obscures all that data. What I like about FLiC is that it is driven by the first principle of recording success against specific objectives.

What I like more is its flexibility. The software comes with over 2000 objectives that could be used to assess children through the primary age range across all subjects. Using the principles of my Key Objectives we’ve already cut that by more than half. It also provides the opportunity to assess each objective at one of up to five different levels; we’ve decided on only three. We even had the choice of what colour our ticks are!

[Screenshot: FLiC’s assessment screen]

Now when teachers want to make assessments for a whole class against a Key Objective, it can be done in as little as one click. We can see at a glance what percentage of children are secure in any given area, or which areas are stronger and weaker in any class, set or cohort. Yet at no point do we have to attach meaningless labels to pupils.

Of course, the purpose of tracking software is to be able to analyse data, and FLiC allows that too. We can compare groups, genders and classes, and also compare across dates. Need to know if children are making progress within year groups? Simply compare today’s data with that from last term, or September. Need a measure for progress between year end-points? Look at the proportion of children who are securing a given proportion of objectives.
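Just to make the arithmetic concrete, here’s a minimal sketch of that kind of measure in Python. To be clear, this is not FLiC’s code or data format: the structure, the names and the three-point scale below are my own inventions for illustration.

```python
# Hypothetical sketch: objective-level judgements, not labels.
# Each snapshot maps pupil -> {objective: judgement on a 1-3 scale}.
SECURE = 3  # assumed top judgement on a three-point scale

autumn = {
    "Amy":  {"add fractions": 3, "order decimals": 2, "long division": 1},
    "Ben":  {"add fractions": 2, "order decimals": 3, "long division": 1},
    "Cara": {"add fractions": 3, "order decimals": 3, "long division": 2},
}

def percent_secure(snapshot, objective):
    """Percentage of the cohort secure in one objective -
    the 'at a glance' view for a class, set or cohort."""
    secure = sum(1 for marks in snapshot.values() if marks.get(objective) == SECURE)
    return 100 * secure / len(snapshot)

def securing_proportion(snapshot, proportion):
    """Pupils secure in at least the given proportion of their objectives -
    one possible measure of progress between year end-points."""
    return sorted(pupil for pupil, marks in snapshot.items()
                  if sum(j == SECURE for j in marks.values()) >= proportion * len(marks))

print(percent_secure(autumn, "order decimals"))  # 66.7 - two of three pupils secure
print(securing_proportion(autumn, 0.5))          # ['Cara'] - secure in half her objectives
```

Comparing today’s data with September’s is then just a matter of running the same functions over two snapshots: the measure falls out of the recorded assessments, rather than being a label attached to the child.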

Teachers used to the easy measures of 3 points progress and sub-levels might find the change confusing at first, but what got me excited about using FLiC was exactly that: it doesn’t try to re-create the old discredited system. Rather, it allows schools to select what is important to assess, and for teachers to make judgements in a meaningful way. The tracking element occurs as a result of the assessment, not the other way round.

Once we’d decided we wanted to buy into FLiC, we got it going straight away, as we can already see how it can drive our reporting to parents at the end of the year with its printable objectives and assessments.

Now, no software is perfect. At the moment, organising children into class groups has to be done manually, which for my 300+ pupil school was fine, but which I suspect would have been a chore in my former 800-pupil establishment. Similarly, in the long run, I’d love to see it produce scaled-down reports that we can put in children’s books more regularly. But I could see that happening. As I set up our version of the product, I hit quite a few bumps in the road (mostly where I had rushed ahead foolishly), yet they were quickly resolved by technical support from someone in the know, and educational support from Matt Britton.

At the risk of sounding like a paid promotion (all cash bonuses welcome, of course FLiC team!), I couldn’t speak more highly of what FLiC has achieved. I feel like someone has taken what Tim Clarke started with the simple spreadsheet trackers we had, and brought it to life.

Perhaps the most powerful indicator for me was the response from colleagues when introduced to the software. From understandable hesitancy at first glance, by the end of the first day of use I had colleagues coming to tell me that they’d used the data output from the programme to identify key areas of focus for teaching next term. And surely that’s what assessment should be all about?

Performance Descriptors on hold?

I’m not known for my generosity towards the department, but let me state straight off that I’m impressed that it has had the courage to do what’s right in respect of the Performance Descriptors.

After a lacklustre start, eventually 880 responses to the consultation were received, and the message was overwhelming. At least three-quarters of responses said the descriptors were unclear or confusing, inappropriately spaced and difficult to understand. Even the free-text response box – the one likely to be left empty – led to around 300 people complaining that they were not fit for purpose.

But we feared the worst. As Warwick Mansell reported in the Guardian last month, we knew that there were doubts about the descriptors, but the worry was that they might be pushed through anyway. So it should be cautiously welcomed that the descriptors will not be rolled out in their current form – at least, not now.

Of course, the matter remains of what is to be done. And there I still have doubts.

We had the announcement yesterday of a commission to support schools with assessment after levels. There are a few questions here. Firstly, it isn’t clear whether the commission is intended to look at primary assessment, or both primary and secondary. The press release title suggests the former, other comments the latter.

Secondly, who is to be on this commission? Nick Gibb described it as a teacher-led commission, but the only appointment so far publicised is a former secondary headteacher, and one who’s been retired for 8 years at that! I don’t hold with the view that headteachers are not teachers, but it’s certainly fair to say that very few headteachers deal with the day-to-day business of assessment in the classroom. If the commission is made up of apparent cronies – or worse, remains secretive, as so often in the past – it will be hard to persuade teachers that sensible decisions are being made.

Thirdly, how does the commission’s work fit in with the “assessment experts” who will advise the department on how to move forward with teacher assessment at the end of each primary Key Stage? And who are those experts to be? Will they be the same ones who wrote the flawed descriptors in the first place?

Alongside this, the government response appears to suggest in places that the problems with Performance Descriptors are due not to failings in the descriptors themselves, but to teachers’ understanding of them. That is unequivocally not the case. The descriptors were confused, unhelpful and a genuine obstruction to good assessment and teaching – going some way to contradict the government’s stated intention to avoid an excessive pace of change in schools. I would like to see confirmation that the performance descriptors in their current form will definitely not be implemented.

Another issue was raised today by @GiftedPhoenix, relating to the fairly recent proposal from the Workload Challenge project to ensure longer lead-in times for major changes:

[Embedded tweet from @GiftedPhoenix]

So, the department is not off the hook yet.

But let me say again: they ought to be congratulated for at least having the courage to take their foot off the pedal. Few things guarantee error and difficulty more than haste. We need something sorted as soon as possible – but no sooner!

The Gillette problem in Education

When Dave Gorman launched his second series of “Modern Life is Goodish”, he did so with a trailer mocking the increasing number of blades attached to our razors.

[Video: the Dave Gorman razor trailer]

The whole thing’s very amusing when it’s dealing with the humdrum of shaving life. But this same inflation appears to be infiltrating our education system as increasingly complex systems of assessment become available. And the DfE are at least in part to blame.

Its recommendations for an end-of-key-stage assessment system are to replace the simple system of 4 main levels of outcome (cunningly named level 3, level 4, level 5 and level 6) with 5 descriptors which seem to cover a narrower range of ability. But to what end? Why do we need to differentiate between children in this way at the age of 11?

The government’s preferred “mastery approach” to teaching suggests that we should be focussing on ensuring that almost all children meet the expected standard – so why the need for a further four categories of attainment (not to mention those that fall below those categories)?

The only explanation I can find is league tables. Just as I suspect that 5-bladed razors are not significantly more efficient than the old Mach 3, so I rather suspect that 5 descriptors will be no more useful to schools or students than the 3 we used to have just 3 years ago!

Of course, to create league tables you need measures that can produce a whole host of differentiation. And so, up and down the country, schools are losing interest once more in assessment, and returning their focus to tracking: how will we show progress? How will we show when children are making expected progress, and more than expected progress? Because for all their talk of freeing up teachers to focus on what matters, the reality is that the department is only interested in measurable outcomes that can produce graphs – graphs to blame predecessors, and more to claim improvements.

It’s simple to split children into 5 groups when you have a scaled score system. So what if the chances of scoring 100 or 110 on a test are more to do with the luck of the questions than the underlying ability of the student? It’s easy all the same to say that the child scoring 105 is doing better than the child scoring 100. To heck with the reality.
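To illustrate just how simple (and how crude) that splitting is, here’s a sketch in Python. The cut-points are invented for the purpose; only the expected-standard score of 100 reflects the published plans.

```python
# Hypothetical banding of scaled scores into 5 groups.
# The thresholds are invented for illustration; only the
# "expected standard" score of 100 comes from the DfE's plans.
BANDS = [(110, "well above expected"), (105, "above expected"),
         (100, "expected"), (95, "approaching expected")]

def band(scaled_score):
    """Trivial threshold lookup - the whole 'differentiation' machinery."""
    for threshold, label in BANDS:
        if scaled_score >= threshold:
            return label
    return "below expected"

print(band(105))  # 'above expected' - though 105 vs 100 may be mere luck of the questions
```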

Can we really honestly say that we can split 11-year-olds into more than 5 measurable groups of writers? Groups which are significantly narrower than our current L3/4/5 bands. The performance descriptors manage it through the use of weasel words. We are asked to differentiate between children who “make appropriate choices of grammar and vocabulary to clarify and enhance meaning” and those who “make deliberate choices of grammar and vocabulary to change and enhance meaning”, not to mention the separation of those who make “judicious choices”.

And if we do make such judgements… to what end?

The only possible reason for having so many descriptors, so many imagined levels, is to provide numerical data for league tables. It has nothing to do with teaching and learning (which after all needs a focus on assessment, rather than tracking). It is only to do with trying to judge schools, and providing room for children to “exceed expected progress”.

And as long as the DfE demands it at the end of Key Stages, tracking software companies will recreate the nonsense for all the intervening years. And so, all the benefits of removing levels are quickly replaced with an increasingly complex, increasingly unreliable and uninformative set of spreadsheets. No longer is the judgement about one level every 2 years, or even 2 sub-levels each year. No, now we can choose from one of 5 categories every year – or in some cases 6, to ensure that one can be measured each half-term.

And if that isn’t enough to persuade you that the Performance Descriptors are no good for anything, then there’s no hope!


If you’re reading this before 5pm on Thursday 18th December, you’ve still got time to log on to the DfE consultation on the descriptors and tell them how awful they are. Please do.

Primary Assessment: where are we so far?

In a little over a month, I’ll be speaking at a conference in London about primary assessment. I spoke at a similar event back in May/June of this year, and at the time shared the stage with a representative from the DfE who seemed to have as little clue as the rest of us about what was likely to happen.

Things move quickly in education, particularly under the current government, so much has emerged since then, yet there are still plenty of unknowns and plenty of areas of uncertainty. It’s for that reason that I’ll look forward to attending the same conference to hear from other experts in the field, and to see how other schools are tackling the challenges of our situation.

Since the summer, we have learned a good deal more about the nature of the test, as well as something more about teacher assessment. There remain many unanswered questions, particularly about how Teacher Assessment will work, and I’m hoping that the DfE representative might be able to shed some light on that matter. I’m also fascinated to hear from Ofsted about what they say they’ll be looking for in the systems that schools use.

What we do know, perhaps more clearly than ever, is that schools are being left to ‘go it alone’ when it comes to internal assessment. Of course, schools were always free to do so, but the levels system became all-but-universal. Now, schools are working individually, in partnerships, alliances and chains to create their own systems of tracking progress and recording assessments to support their judgements during each key stage.

What seems to matter more than ever is that schools collaborate on this. Whether that be with other schools in their locality, or through ‘buying in’ a shared system which provides a sense of moderation as back-up, schools need to be aware of what others are doing more than ever. Rather than looking to the DfE for a preferred model, or the required approach, schools should be looking at what is available in the ‘marketplace’, and making a choice that suits their requirements. As Dylan Wiliam said in his recent Teach Primary article – simple off-the-peg solutions may no longer be good enough.

Of course, that’s why at the conference I’ll be talking about my own, adaptable, free model of Key Objectives and the accompanying tracking documents. I’ll also be talking about how I think mastery approaches can support the combination of assessment with planning and teaching. That’s not because I think I have all the answers: I don’t think anybody does any more. I think all we can do is share what we know and find what works for us, within the confines of the system we have.

It’s a difficult time for school leaders to know where to turn and what to use, but it’s also an opportunity for us to really take a grip of how assessment works in our schools and to make it work for the benefit of our students, rather than for the producers of graphs.


The conference at which I will be speaking is the Optimus Effective Primary Assessment under the new National Curriculum conference in London on Thursday 29th January. More details are available at http://www.optimus-education.com/conferences/assessment15

Readers of my blog who would like to attend can receive 20% off the standard rate if they use the promotional code MT15 when booking online.

Designing an assessment model the right way

I’ve been prolific in my complaints about schools buying into systems for assessment which focus on tracking rather than assessment, that pander to the myths of levels, or re-introduce burdensome approaches like APP. Every time, quite reasonably, several people ask me via Twitter: What are you doing?

I do my best to reply, but the reality is that what works for my school is not necessarily right for everyone. That said, I have shared the Key Objectives on which our model is based. However, what I really want to advise people to do is to access the NAHT materials which set out how to build a really effective model. Unfortunately, while I think the materials themselves are excellent, the NAHT has neither promoted them nor made them particularly accessible. So here’s my attempt to do so.

The NAHT framework for assessment

The NAHT model is broadly the same as that which led to my Key Objectives, although notable for its brevity in terms of objectives. There are a few key principles that underpin it, which include:

  • The assessment should link closely to the taught curriculum
  • Not everything that is taught should be assessed (note Dylan Wiliam’s point about this)
  • Key Performance Indicators [KPIs] should be selected for each year group and subject, against which teachers can make assessments
  • End-of-year descriptors, based on the KPIs, can be used for more summative judgements
  • The whole process should include in-school, and where possible, inter-school moderation

All of these things strike me as very sensible principles. The NAHT team which put together the materials to support this model went to some lengths to point out that schools (or groups of schools) may want to adapt the specifics of what is recorded for tracking purposes, but to support schools in doing so they have also provided examples of Key Performance Indicators for each year group and core subject area. These can be downloaded (rather frustratingly only one at a time!) from the NAHT website – regardless of whether or not you are a member.

The theory, then, is that assessment can take place throughout the year against specific objectives, rather than simply allocating children to meaningless code groups (‘3c’, ‘developing’, ‘mastery’, ‘step 117’, etc.). Over the course of the year, teachers and pupils can see progress being made against specific criteria, and can clearly identify those which still need to be covered. Similarly, at the end of each year, it is possible to make a judgement in relation to the overall descriptor for the year group. Schools may even decide to have a choice of descriptors if they really wish.

Annual tracking of those who are, and are not, meeting the performance standard for the year group can be kept, with intervention targeted appropriately.
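As a rough sketch of how little machinery the tracking end of this needs, consider the following Python. The KPI wording and the ‘every KPI met’ rule are placeholders of my own; the NAHT materials deliberately leave such decisions to schools.

```python
# Hypothetical sketch of annual KPI tracking in the NAHT style.
# KPI wording below is invented; the real exemplars are on the NAHT website.
YEAR4_MATHS_KPIS = {
    "recall multiplication facts up to 12 x 12",
    "add and subtract using formal written methods",
    "round any number to the nearest 10, 100 or 1000",
}

# pupil -> set of KPIs judged (and moderated) as met this year
assessments = {
    "Dan":  {"recall multiplication facts up to 12 x 12"},
    "Ella": set(YEAR4_MATHS_KPIS),
}

def meets_standard(pupil):
    """Assumed end-of-year rule: the performance standard means meeting
    every KPI. The robustness comes from moderation, not from the rule."""
    return assessments[pupil] >= YEAR4_MATHS_KPIS

needs_intervention = [p for p in sorted(assessments) if not meets_standard(p)]
print(needs_intervention)  # ['Dan'] - and his missing KPIs show exactly where to target help
```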

There are several advantages of the NAHT system: firstly, it provides a sensible and manageable approach to assessment that can actually be used to support progress as well as meaningful tracking; secondly, it doesn’t create unnecessary – or unrealistic – subdivisions or stages to give the impression of progress where none can reasonably be measured. Perhaps most importantly, it also provides a ‘safety in numbers’ approach for schools who fear that Ofsted will judge them on their choice. As a reputable professional organisation, the NAHT is a good backbone for any system – much more so than relying on the creations of data experts who, while clearly invaluable in creating tracking and analysis software, are not necessarily themselves experts in education.

The aspect which seems to worry colleagues about approaches such as mine and the NAHT’s is that they don’t offer easily “measurable” (by which they usually mean track-able) steps all through the year. The fear is – I suspect – that it wouldn’t be possible to ‘prove’ to Ofsted that your assessments were robust if you didn’t have concrete figures to rely on at termly, or six-weekly, intervals. Of course, the reality is that such things were nonsense, and it’s important that we recognise this as a profession. The robustness comes from the assessment and moderation approaches, not the labelling. The easy-steps approach serves only to obfuscate the actual learning for the benefit of the spreadsheet. We need to move away from that model. Through use of internal and inter-school moderation, we can have confidence in our judgements part-way through a year and can improve our professional understanding of our children’s learning at the same time.

Of course, plenty of software companies will have come up with clever gadgets and numbers and graphs to wow school leaders and governors – that is their job. But the question school leaders should really be asking software companies is not “what are you offering?”, but “what are you building that will match our requirements?”

I notice this week that the latest release of Target Tracker includes an option for filtering to show the NAHT Key Performance Indicators. Infomentor offers a similar option, which also allows schools to link the objectives directly to planning. They also have a setup where schools can opt for my Key Objectives instead if they prefer (which offer slightly more detail). David Pott has already demonstrated how SIMS can be used to track such assessments.

The options are out there, and schools should be looking for tracking systems that fit with good educational principles, not trying to tack the latter on to fit with the tracking system they’ve got.


The NAHT does have a video available which summarises their approach rather well, if in a rather pedestrian manner. Available here: https://www.youtube.com/watch?v=M2aK3Rs2IJQ

The evil offspring of APP

It’s not often I quote the words of education ministers with anything other than disdain, but just occasionally they talk sense. Back in April, Liz Truss explained the ‘freedoms’ being given to schools to lead on assessment between key stages, and commented on the previous system of APP. She described it as an “enormous, cumbersome process” that led to teachers working excessive hours; a system that was “almost beyond satire, […] requiring hours of literal box-ticking”.

Not everybody agreed with the scrapping of levels, but the recent massive response to the Workload Challenge has shown that if there is one thing that teachers are in agreement about, it is the excessive workload in the profession. Now at least we had a chance to get rid of one of those onerous demands on our time.

And yet…

Just this evening I came across two tracking systems that have been produced by private companies and appear to mimic and recreate the administrative burden of APP. What’s more, they seem to have managed to take the previously complex system, and add further levels of detail. Of course, they attempt to argue that this will improve assessment, but our experience tells us that this is not the case.

As Dylan Wiliam quite rightly said in the first principle in his excellent article in Teach Primary magazine:

A school’s assessment system could assess everything students are learning, but then teachers would spend more time assessing than teaching. The important point here is that any assessment system needs to be selective about what gets assessed and what does not…

The problem with the new models which attempt to emulate APP is that they fail in this. They’re trying to add a measure to everything and so suggest that they are more detailed and more useful than ever before. But the reality is that this level of detail is unhelpful: the demands of time outweigh the benefits.

Once again, too many school leaders are confusing assessment with tracking. The idea that if we tick more boxes, then our conclusions will be more precise is foolish. If three sub-levels across a two-year cycle was nonsense, then 3 sub-levels every year can only be worse. Just because the old – now discredited – system allocated point scores each year, doesn’t mean that we should continue to do so.

Assessment is not a simple task. By increasing the volume of judgements required, we reduce teachers’ ability to do it well: we opt for quantity over quality. We end up with flow-charts of how to make judgements, rather than professional dialogue about how to assess learning. We end up with rules for the number of ticks required. As Wiliam also says:

Simplistic rules of thumb like requiring a child to demonstrate something three times to prove they have ‘got it’ are unlikely to be helpful. Here, there is no substitute for professional judgement – provided, of course, ‘professional’ means not just exercising one’s judgement, but also discussing one’s decisions with others

If you’re a headteacher who has brought in a system (or more likely, bought into a system) which implies that progress can be measured as a discrete level (or stage, or step) every term, that asks teachers to assess every single objective of the National Curriculum (or worse, tens of sub-objectives too!), or that prides itself on being akin to APP, then shame on you. There’s no excuse, when the department itself points out that teachers are being expected to do an unreasonable amount of work, for seizing the opportunity only to replace that load with a larger one.

If you’re a teacher in a school that has adopted one of these awful systems, then I can only commiserate. Might I suggest that you print off a copy of this blog and slide it under your headteacher’s door one night. I’d also highly recommend adding Dylan Wiliam’s article to it.

We need our school leaders to lead – not just repeat the mistakes of the past.



Teach Primary magazine

It’s only right that I confess that I write an article for each issue of Teach Primary and so couldn’t fairly be said to be completely impartial. That said, I do think it’s well worth subscribing, if only for gems like Wiliam’s article and others that come up each issue, along with resources, ideas and wisdom from actual teachers and leaders. http://www.teachprimary.com/