A consistent inconsistency

With thanks to my headteacher for inadvertently providing the blog title.

With Justine Greening’s announcement yesterday we discovered that the DfE has definitely understood that all is not rosy in the primary assessment garden. And yet, we find ourselves looking at two more years of the broken system before anything changes. My Twitter timeline today has been filled with people outraged at the fact that the “big announcement” turned out to be “no change”.

I understand the rage entirely. And I certainly don’t think I’ve been shy about criticising the department’s chaotic organisation of the tests and the errors made. But I’m also not ready to throw my toys out of the pram just yet. This might just be the first evidence that the department is really listening. Yes, perhaps too little too late. Yes, it would have been nice for it to have been accompanied by an acknowledgement that the problems were caused by the pace of change enforced by ministers. But maybe they’re learning that lesson?

For a start, there are many teachers nationally who are just glad of the consistency. As my headteacher said earlier today, it leaves us with a consistent inconsistency. But nevertheless, there will be many teachers who are relieved to see that the system is going to be familiar for the next couple of years.

It’s a desire I can understand, but just can’t go along with. There are too many problems with the current system – mostly those surrounding the Teacher Assessment frameworks and moderation. But I will hang fire, because there is the prospect of change on the horizon.

It’s tempting to see it as meaningless consultation, but until we see the detail I don’t want to rule anything out. I hope that the department is listening to advice, and is open to recommendations – including those that the NAHT Assessment Reform Group, of which I am a member, is drawing together this term.

If the DfE listens to the profession, and in the spring consults on a meaningful reform that brings about sensible assessment and accountability processes, then we may eventually come to see yesterday’s announcement as the least bad of the available options.

Of course, if they mess it up again, I’ll be on their case.

The potential of Comparative Judgement in primary

I have made no secret of my loathing of the Interim Assessment Frameworks, and the chaos surrounding primary assessment of late. I’ve also been quite open about a far less popular viewpoint: that we should give up on statutory Teacher Assessment. The chaos of the 2016 moderation process and outcomes was an extreme case, but it’s quite clear that the system cannot work.

It’s crazy that schools can be responsible for deciding the scores on which they will be judged. It has horrible effects on the reliability of that data, and also creates pressure which has an impact on the integrity of teachers’ and leaders’ decisions. What’s more, as much as we would like our judgements to be considered accurate, the evidence points to a sad truth: humans (including teachers) are fallible. As a result, Teacher Assessment judgements are biased – before we even take into account the pressures of needing the right results for the school. Tests tend to be more objective.

However, it’s also fair to say that tests have their limitations. I happen to think that the model of Reading and Maths tests is not unreasonable. True, there were problems with this year’s, but the basic principles seem sound to me, so long as we remember that the statutory tests are about the accountability cycle, not about formative information. But even here there is a gap: the old Writing test was scrapped because of its failings.

That’s where Comparative Judgement has a potential role to play. But there is some work to be done in the profession for it to find its right place. Firstly we have to be clear about a couple of things:

  1. Statutory Assessment at the end of Key Stages is – and indeed should be – separate from the rest of the assessment that happens in the classroom;
  2. What we do to judge work and how we report it to pupils and parents are – and should be – separate things.

Comparative Judgement is based on the broad idea of comparing lots of pieces of work until you have essentially sorted them into a rank order. That doesn’t mean that individuals’ ranks need be reported, any more than we routinely report raw scores to pupils and parents. It does, though, offer the potential of moving away from the hideous tick-box approach of the Interim Frameworks.

Teachers are understandably concerned by the idea of ranking, but it’s really not that different from how we previously judged writing. Most experienced Y2/Y6 teachers didn’t spend hours poring over the level descriptors, but rather used their knowledge of what they considered L2/L4 to look like, and judged whether they were looking at work that was better or worse. Comparative Judgement simply formalises this process.
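The core mechanic can be sketched in a few lines of Python. This is purely illustrative – the `noisy_judge` and the win-share ranking are my own toy stand-ins (real systems fit a statistical model such as Bradley-Terry to the judgements) – but it shows how repeated pairwise decisions settle into a rank order:

```python
import itertools
import random

def rank_by_comparisons(scripts, judge, rounds=3):
    """Rank pieces of work by repeated pairwise comparison.

    `scripts` is a list of identifiers; `judge(a, b)` returns whichever
    of the two the judge prefers. Each script's share of "wins" gives a
    simple rank order (real systems fit a Bradley-Terry model to the
    judgements, but win-share shows the idea).
    """
    wins = {s: 0 for s in scripts}
    seen = {s: 0 for s in scripts}
    for _ in range(rounds):
        for a, b in itertools.combinations(scripts, 2):
            wins[judge(a, b)] += 1
            seen[a] += 1
            seen[b] += 1
    return sorted(scripts, key=lambda s: wins[s] / seen[s], reverse=True)

# Toy demo: each script has a hidden quality; judges usually prefer
# the better piece, but occasionally disagree.
quality = {"A": 0.9, "B": 0.6, "C": 0.3}

def noisy_judge(a, b):
    better, worse = (a, b) if quality[a] > quality[b] else (b, a)
    return better if random.random() < 0.9 else worse

random.seed(1)
print(rank_by_comparisons(list(quality), noisy_judge, rounds=20))
```

The point of the repeated rounds is exactly the one above: no single judgement need be perfect, because the ranking emerges from the accumulation of many quick, fallible decisions.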

It also tackles an issue that is particularly prevalent under the current interim arrangements: excellent writing which scores poorly because of a lack of dashes or hyphens (and poor writing which scores highly because it’s littered with them!). If we really want good writing to be judged “in the round”, then we cannot rely on simplistic and narrow criteria. Rather, we have to look at work more holistically – and Comparative Judgement can achieve that.

Rather than teachers spending hours poring over tick-lists and building portfolios of evidence, we would simply submit a number of pieces of work towards the end of Year 6 and they would be compared to others nationally. If the DfE really wants to, once they had been ranked in order, they could apply scaled scores to the general pattern, so that pupils received a scaled score just like the tests for their writing. The difference would be that instead of collecting a few marks for punctuation, and a few for modal verbs, the whole score would be based on the overall effect of the piece of writing. Equally, the rankings could be turned into “bands” that matched pupils who were “Working Towards” or “Working at Greater Depth”. Frankly, we could choose quite what was reported to pupils and parents; the key point is that we would be more fairly comparing pupils based on how good they were at writing, rather than how good they were at ticking off features from a list.
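Turning a rank position into a reported band, as suggested above, needs nothing more than percentile arithmetic. A minimal sketch – the cut-off percentiles here are entirely invented for illustration; in reality they would be set nationally, just as test thresholds are now:

```python
def band_from_rank(position, cohort_size,
                   towards_cutoff=0.2, greater_depth_cutoff=0.9):
    """Map a rank position (1 = strongest piece) to a reporting band.

    The cut-off percentiles are invented for illustration; nationally
    agreed thresholds would take their place in any real system.
    """
    percentile = 1 - (position - 1) / cohort_size  # share of cohort at or below
    if percentile >= greater_depth_cutoff:
        return "Working at Greater Depth"
    if percentile >= towards_cutoff:
        return "Working at the Expected Standard"
    return "Working Towards"
```

The same mapping could just as easily hand back a scaled score instead of a band label; the choice of what to report is separate from the judging itself, which is precisely the second of the two distinctions drawn earlier.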

There are still issues to be resolved, such as exactly what pieces of writing schools would submit for judgement, and the tricky issue of quite how independent the work should be. Equally, the system doesn’t lend itself as easily to teachers being able to use the information formatively – but then, aren’t we always saying that we don’t want teachers to teach to the tests?

Certainly if we want children’s writing to be judged based on its broad effectiveness, and for our schools to be compared fairly for how well we have developed good writers, then it strikes me that it’s a lot better than what we have at the moment.

Dr Chris Wheadon and his team are carrying out a pilot project to look at how effective moderation could be in Year 6. Schools can find out more, and sign up to join the pilot (at a cost) at: https://www.sharingstandards.com/


A sinister turn at the DfE

I had an interesting discussion this week with a colleague who – very reasonably – questioned the merits of blogging and tweeting about issues at the DfE. Indeed, sometimes I have myself felt a pang of guilt about my posts, and frequently some sympathy for those who work in the department. Nevertheless, my argument in favour of such posts and tweets – not just my own – was one of holding government to account. That seems all the more important given the current state of the opposition parties. And even more so tonight.

The majority of my followers are probably primary school teachers, so at first glance this is a story that wouldn’t necessarily affect or bother them, but if that’s you, I want you to read this, because it matters.

People often thank me for saying what they – or their colleagues, or sometimes (somewhat hyperbolically) the whole profession – are thinking. I hope that in some small way my words might represent some views held within schools that the DfE ought to hear, and that they might sometimes reach those who need to hear them. But I also know that my input is limited.

For government to be properly held to account we rely on the opposition benches, the parliamentary system, and a free press. Except that the first is a disaster area at the moment, and the last is under threat.

It seems that the same governing party which felt it so important to defend the merits of a free press after the hacking scandals has decided that such freedom of scrutiny shouldn’t apply to those questioning the DfE. They have created new rules insisting that when organisations use DfE data, their findings must be sent to the department 48 hours before being published.

It may be the thin end of a very sinister wedge; it may just be a desperate attempt to cover up some of the disasters that seem to beset the department; either way, it isn’t a legitimate part of democratic governance. It isn’t acceptable that a department be allowed to prevent publication – for whatever period – of evidence and argument merely because it might seem inconvenient or unwelcome. It isn’t acceptable that a press which is free to investigate other organisations and publish details of individuals’ private lives should not also have the freedom to publish evaluations of government action.

Organisations like FFT and its research arm Education Datalab do invaluable work in informing the profession, providing context for national policy, and providing evidence to challenge and support government policy. Newspapers like the TES and Schools Week play a vital role in ensuring that the public is well-informed about hugely important issues that might otherwise be ignored. To try to hamper that work because it presents inconveniences for the politicians is unacceptable.

At best it seems like a childish tantrum that got out of hand; at worst, it has echoes of the very worst of governments that try to manage the media to suit their purposes. And as with so many things, if this is allowed to happen, then what comes next?

See Schools Week article:

Academics must show research to government two days before publishing, say new DfE rules

How not to sell things to me and my school

Maybe it was just a matter of time. As I enter my third year in the same school, it seems that both my name and email address have made it onto sales lists in various places… and I’m not pleased about it. More to the point, I’m not pleased with the time it’s taking up. Not only mine – in deleting emails and unsubscribing from mailing lists – but more importantly that of colleagues in my school office who are now faced with phone calls asking for me by name.

Worse, some of those phone calls are made pretending that we have some sort of prior relationship. I deal with a lot of people, and don’t always recall every detail, so I am highly frustrated when I take such a call only to find it is a generic sales call. The same result is achieved when I open another email trying to sell me something that I have never shown any interest in.

Now, I realise this is futile, but my frustration has no other outlet, so from today I’m going to keep a record of those companies who have somehow got hold of my name/school/email address and use it to “spam” me or my school office. I will tell each of them that it is not a good way to sell to me and, worse, that it actually puts me off buying from them at all – and maybe now I’ll put a few more people off too.

Buying my email address to send me spam isn’t acceptable, and neither is wasting the time of busy office staff in my school. And these companies are to blame (this week alone so far!):

  • National Schools Partnership
  • ParentPay
  • GL Assessment
  • Eureka for Schools
  • eCadets
  • National Schools Training
  • Think Global Schools



Getting started with FFT data for KS2

School leaders are used to dealing with change, not least when it comes to assessment data, but this year is in a league of its own. With changes to all the tests, teacher assessment, scaled scores and accountability measures, headteachers would be forgiven for despairing of any attempt to make sense of it.

Even when Raise becomes available, there’s no saying how easy it will be to interpret, not least because of all the changes this year. However, the FFT Summary Dashboard is available from today (Wednesday 14th), allowing you to make headway into that first stage of data analysis to evaluate your school’s strengths, and pick out areas for further development. In today’s climate, any help with that will be welcome!

A first glance at your dashboard will give you a very quick visual representation of your key headline figures – attainment and progress – related to those that will feature in performance tables and be published on your school website. In FFT these are represented in the form of comparison gauges:


Comparison gauges that show key figures at a glance

The beauty of these is the clarity they provide compared to the complexity of the published data and its confidence intervals. In short: the middle white zone shows that you’re broadly in line with national outcomes; the red and green bands at either end suggest significantly lower or higher results. This will be particularly helpful for governors who are either shocked by changes in numbers from the old system, or who are concerned about small negative values on the progress measures.
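The logic behind such a gauge is easy to sketch. To be clear, this is my own illustration rather than FFT’s actual calculation; the confidence-interval handling is an assumption:

```python
def gauge_band(school_value, national_value, ci_halfwidth):
    """Classify a school measure against the national figure, in the
    spirit of a comparison gauge. A sketch, not FFT's method: a result
    only counts as "significant" when its whole confidence interval
    sits above or below the national value.
    """
    lower = school_value - ci_halfwidth
    upper = school_value + ci_halfwidth
    if lower > national_value:
        return "green"  # significantly above national
    if upper < national_value:
        return "red"    # significantly below national
    return "white"      # broadly in line with national
```

Under this reading, a progress score of −1.2 with a confidence interval of ±1.5 still lands in the white zone – which is exactly the reassurance governors worried about small negative values need.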


The dashboard offers more clarity, too, about specific groups within your school. With a changing landscape it can be hard to know what to expect, but the pupil group analysis will quickly tell you which specific groups – girls, middle attainers, free school meals – have performed particularly well, and which seem not to be keeping up. It’s a simple overview that makes a good starting point for further investigation.


Quick identification of groups that have done particularly well, or poorly (green plus symbols show significant values)

It’s worth remembering, though, that some groups may be very small in your school: if you’ve only got a handful of girls, then don’t get too worked up over variations!

The dashboard also helps to pick out trends over time – another challenge when all the goalposts seem to have moved. By comparing the national results to previous years, FFT have been able to plot a trajectory that compares how attainment and progress might have looked in 2014 and 2015 under the current system. As a result, you can begin to see whether your school has improved by comparison to the national picture.


The time series shows your previous results adjusted to bring them more closely into line with the new frameworks. Not perfect, but a very telling ‘starter for ten’!

A caveat here: this is much more difficult with the writing judgements which are much less precise than the scaled scores. Take that alongside the evident variation in writing outcomes this year, and you may want to look deeper into those figures before making any quick judgements.


Groups analysed

Further into the summary dashboard itself, we get into the detail of vulnerable groups and of the separate subjects. Again, you get an overview that helps to pinpoint areas to look into further. Specific groups remain a clear focus for Ofsted and other inspections, so this information will be vital to leaders. The further breakdown of subjects will be of interest too, and of particular use in schools where writing has been affected by the national inconsistencies. Again these sections allow you to compare your attainment and progress to the national picture, and also to reflect on how your results may have changed over time.

No doubt, by the time school leaders and governors have begun to look at their summary overview, there will be many more questions asked. That’s where the FFT Aspire platform can help. Using your summary as a starting point, you can explore each element in greater detail, filtering your results for different groups, or subjects – even down to the level of individual pupils. It will help you to unpick the measures that are likely to feature on your Raise Online profile when it arrives, and with others too, including using contextual information about your pupils to compare to similar groups elsewhere. Alongside the target-setting and other elements of FFT, you have a wealth of information at your fingertips that can be used to focus your school improvement planning – the summary dashboard is just the start.


This post was written with the support of FFT in preparation for the launch of the new dashboards on 14th September 2016.

Writing for a Purpose (or 4!)

For some time now I have been working on a model of teaching Writing built around the idea of longer blocks focusing on fewer things. I have written before about the model I used in my previous school, and since then have had many requests for more information.

This year I have finally produced some notes about the model I use, based on 4 Writing Purposes. My view is that rather than trying to teach children 10 or more different ‘genres’ or ‘text types’, as we used to do in the days of the Writing Test, it is better to focus on what those types have in common. It means that at my school we use 4 main types of writing across KS1 and KS2: Writing to entertain; to inform; to persuade; and to discuss.*


The 4 main writing purposes, and some of the ‘text types’ that could fall under each.

Importantly, by the end of KS2 I’d hope to see children recognise things like the fact that newspaper articles could actually fall under any or all of the 4 headings: they’re not a distinct type in themselves, really.

As a very rough rule, I’d expect around half of curriculum time to be taken up by “Writing to entertain”, with the remaining non-fiction elements sharing the remaining time. Notably in KS1 the non-fiction focus is only on Writing to inform.


Example guidance note

To support structuring the curriculum in this way, I have now compiled some guidance notes for each category. I say compiled, rather than written, because much of the legwork on these notes was done by my wife – @TemplarWilson – as she rolls out a similar model in her own school.

The guidance notes attempt to offer some indications of National Curriculum content that might be covered in each section. This includes some elements of whole-text ideas, suggestions for sentences and grammar, notes on punctuation to include, and also some examples of conjunctions and adverbials.

They’re not exhaustive, nothing radical, but as ever, if they’re of use to people, then I’m happy to share:
4 Writing Purposes – guidance (click to download)

Alongside the guidance sheets, I also have the large versions of the 4 main roadsign images, and an example text for each of the four purposes. The example texts are probably of more use at the upper end of KS2, and could almost certainly be improved, but they are a starting point for teaching and analysis by the children to draw out key features, etc. Both can be downloaded here:

4 Writing Purposes – Roadsign Images

4 Writing Purposes – Example Texts

*Secondary English teachers may recognise these as being loosely linked to the old writing triplets at GCSE level.

Teachers aren’t that special

We’re a funny lot, teachers.

It’s different to most jobs I guess. For a start, we get 13 weeks’ holiday a year. We also work in strange circumstances that are simultaneously both very public and quite private.

We also seem to have an ongoing struggle with what it means to be a profession – one that doesn’t seem to affect other roles. Or rather, an ongoing clamour to be considered a profession, without being clear about what that means.

The College of Teaching has served to highlight some of those troubles, but also one other: we seem to have reached a point in the profession where “leaders” can be lumped together as a “them” who are not in any way connected to “us” at the chalkface. (Disclaimer: I don’t know which group I end up in according to those determined to divide in this way)

I suspect that this is based, in part, on a truth: some school leaders are awful. Some who reach the position of headteacher (or Executive Head for that matter, I suspect), probably weren’t very good classroom teachers, and aren’t very good leaders. They can damage schools, teachers and pupils in the process. But to presume that such negative experiences mean that all those who have a leadership responsibility are in opposition to those who teach in classrooms is childish. Not least because it fails to account for the huge number of people – particularly in primary schools – who manage both leadership roles and considerable classroom teaching commitments.

This has come to a head from the small group of vocal opponents to the College of Teaching, particularly since the appointment of a very experienced headteacher to the role of Chief Executive. For some, led by Andrew Smith (@oldandrewuk), only a practising classroom teacher would have been acceptable to lead an organisation that they don’t even think should exist.

The problem with that argument is clear: what experience does the average classroom teacher have that would equip them to lead a significant organisation? There will, of course, be a handful of classroom teachers who have prior experience in other roles that might match the job description, but they are rare. And often such people would quickly take on leadership roles within schools, hence disqualifying them from this very narrow field.

What’s more, I’d argue that being the CEO of a large organisation doesn’t require the skills of a classroom teacher, any more than running British Airways would require you to be a trained pilot. Running large organisations requires a specific skill-set, and if the College is to be a success, then it needs the right people with those skills at its head. The fact that within teaching we have excellent school leaders who have the appropriate skills means we are able to appoint someone who combines leadership and teaching experience.

Looking at other professional organisations, there is a mix when it comes to the CEO role: the CEO of the Law Society is a trained solicitor with considerable leadership experience; the CEO of the Royal College of GPs has a background in social work and charities and isn’t medically trained at all; the CEO of the Royal Institution of Chartered Surveyors has a background in marketing. I haven’t yet found a single professional body that has an entry-level professional at its head.

The reality is, teachers aren’t some superhuman species imbued with some professional brilliance that makes them better than GPs or Chartered Surveyors. We are trained for a job. While some teachers also acquire the skills to lead large organisations – and it is great that we can have a qualified and experienced teacher at the head of a professional body – let’s be serious: it’s not the talent for imparting phonics knowledge that is required to manage a large charity.

Of course, the real issue here is not the appointment of the CEO. Those who are wholeheartedly opposed to the College – or who object to the way it has been developed – would likely have opposed any appointment, just as those who object to the existence of the BBC would never welcome a new Director General.

For those of us who would like to see if this thing can work, it strikes me that you would struggle to find a better starting point as CEO than Dame Alison Peacock – an experienced teacher and headteacher, a strong figurehead who is widely supported by the profession, and someone who has publicly spoken in the past against proposals from government.

Some will always be happy to throw stones, just as there are those who continue to criticise the BBC. Personally I hope that both groups are proven to be in a minority.

A foolish consistency – the Primary School disease?

Let me start by saying that I think consistency is vital in schools. Pupils need to know that the behaviour policy will apply equally to everyone, and be applied equally by everyone. If a school has a uniform, then rules about it should be fairly and consistently applied to all. Children in Year 4 are entitled to just as good teaching as children in Year 6.

But there are limits. And it seems that too many primary headteachers cross them, to my mind. Not all, of course, but too many. On Twitter today a perfect example was shared by Rosie Watson (@Trundling17):

There is a headteacher – or senior leadership team – somewhere that thought it a good use of its time to come up with a list of 30 “must haves” that includes how the classroom door must be signed, and that pegs must be labelled in week 1.

I wasn’t even that surprised when I saw it, because I’ve known far too many schools get caught up in such nonsense. Display policies can sometimes be the most read in a primary school, and I’ve known them include things like:

  • drapes must be used to soften the edges of displays
  • all work should be double-mounted
  • topic boards must be changed at least every 2 weeks
  • all classrooms must display a hundred square
  • all staples must point in the same direction

The point is that none of these things is necessarily a bad thing. Indeed, the one about staples appeals to my slightly frenzied mind. But to dictate it to a staff of highly-trained professionals? To expect teachers to spend their time and energy on such things rather than planning and preparing for learning strikes me as crazy.

What surprised me most about Rosie’s post, though, was not the content – I fear that’s all too common – but the fact that some headteachers then tried to defend such approaches. The claims were that it was a useful reminder, or helpful for new teachers.

I have two issues with this. Firstly, the list is very clearly presented as a list of expectations to be met and judged against – not just helpful reminders. Secondly, these are not all good uses of someone’s time. If they were recommendations that I was free to ignore (and believe me, I would ignore a good number of them), then that’s fine, but that’s clearly not the case here.

If a school is insistent that its classroom doors have name labels in a certain style, then it should organise this administrative task, not simply demand it of teachers. Teachers’ time should be spent on things that directly impact teaching and learning, and precious few of these do.

Sadly, such “non-negotiables” seem to have become something of a norm in schools, with headteachers assuming that the way they ran their own classrooms is the way everyone else should run theirs. But it’s madness.

Headteachers are well aware of the strategic/operational divide between governors and heads, but they should consider a similar separation from the involvement in classrooms. Absolutely it is the place of the headteacher to lead on matters of curriculum and learning, and even to set the broad principles and expectations for the “learning environment” (oh, how I hate that term!), but that’s not the same as specifying the date by which your pegs are labelled.

The only other argument that was tentatively put forward was for schools which are in “a category”. Now here, I have some sympathy with heads who take on a school where things are a mess. Sometimes a clear list of expectations helps to bring things out of a pit – but that clearly isn’t the case here. If classrooms are untidy, it’s reasonable to expect that they be tidy; if disorganised cloakrooms are delaying learning, then it’s reasonable to expect something to be done about it. But no school was ever put in Special Measures because boards were backed with ‘inappropriate’ colours, or because a Year 6 classroom didn’t have a carpet area.

And if a school is in measures, then it probably shouldn’t be wasting its attention on how the classroom door is labelled! Both the leadership team and the teachers more widely should be focusing on the things that make the most difference to teaching and learning. Of course expectations should be raised, but that doesn’t need to be done through a foolish consistency.

Headteachers and Senior Leadership teams: you are busy enough – don’t sweat the small stuff, and certainly don’t make others sweat it for you!

(P.S. I’m a real rebel: I don’t label pegs at all!)

For an indication of some of the mad things that are dictated in primary schools, take a look at this Storify in response to my tweet:

Some thoughts on KS2 Progress

Caveats first: these conclusions, such as they are, are drawn from a small sample of a little over 50 schools. That sample of schools isn’t representative: indeed, it has slightly higher attainment than the national picture, both in terms of KS2 outcomes, and in KS1 starting points. However, with over 2000 pupils’ data, it shows some interesting initial patterns – particularly when comparing the three subject areas.

Firstly, on Maths – the least controversial of the three subjects. It seems that – in this sample – pupils who achieved Level 2c at KS1 had an approximately 40% chance of reaching the new expected standard (i.e. a scaled score of 100+). That leaps to around 66% for those achieving L2b at KS1 (i.e. just short of the national average).


The orange bar shows the average of this sample, which is slightly higher than the national average of 70%

It’s important to note, though, that progress measures will not be based on subject levels, but on the combined APS score at Key Stage 1. The graph for these comparisons follows a similar pattern, as you’d expect:


Where fewer than 10 pupils’ data was available for any given APS score, these have been omitted.

There is an interesting step here between pupils in this sample with an APS of 13 (or less), who have a 40% or lower chance of reaching the expected standard, and those scoring 13.5 or more, who have a greater than 50% chance of achieving it. (The dip at 12.5 APS points relates to pupils who scored Level 2s in Maths and one English subject, but a Level 1 in the other, highlighting the importance of good literacy for achievement in KS2 Maths.)
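For anyone wanting to repeat this analysis on their own data, the underlying calculation is straightforward. A sketch – the minimum group size of 10 mirrors the charts here; the data format is just an assumption for illustration:

```python
from collections import defaultdict

def conversion_rates(pupils, min_group=10):
    """Proportion of pupils reaching the expected standard (a scaled
    score of 100+) from each KS1 APS starting point.

    `pupils` is a list of (ks1_aps, ks2_scaled_score) pairs; groups
    with fewer than `min_group` pupils are omitted, as in the charts.
    """
    groups = defaultdict(list)
    for aps, score in pupils:
        groups[aps].append(score >= 100)
    return {aps: sum(hits) / len(hits)
            for aps, hits in sorted(groups.items())
            if len(hits) >= min_group}
```

Plotting the resulting dictionary as a bar chart reproduces the shape of the graphs shown here, stepped pattern, dips and all.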

For Reading, the graphs look broadly similar in shape.


Blue bar shows average of this sample at 67%, which is slightly higher than national average of 66%

Interestingly, Level 2c scorers here still have only a 40% chance of meeting the expected standard, but those achieving 2b have a lower chance of reaching it than in Maths (58% compared to 66%).

When looking at the APS starting points, there is something of a plateau at the right-hand end of the graph. The numbers of pupils involved are relatively few (as few as 31 pupils in some columns). Interestingly, the dip at 18.5 APS points represents the smallest sample group shown, made up of pupils who scored 2a/2b in the two English subjects, but a Level 3 in Maths at KS1. This will be of comfort to teachers who have been concerned about the negative effect of such patterns on progress measures: it seems likely that we will still be comparing like with like in this respect.


It is in Writing that the differences become more notable – perhaps an artefact of the unusual use of Teacher Assessment to measure progress. Compared to the 40% of pupils attaining L2c in Reading or Maths who achieved the new expected standard, some 50% of those in Writing managed to make the conversion – and this against a backdrop of teachers concerned that the expected standard in English was too high. Similarly, over 3/4 of those achieving Level 2b managed to reach the standard (cf. 58% for Reading, 66% for Maths).


In contrast to the other subjects, attainment in this sample appears notably lower in Writing than the national average (at 70% compared to 74% nationally)

With the APS comparisons, there are again slight dips at certain APS points, including 18.5 and 19.5 points. In the latter case, this reflects the groups of pupils who achieved Level 3s in both Reading and Maths, but only a 2b in Writing at KS1, suggesting again that the progress measure does a good job of separating out different abilities, even using combined APS scores.


Of course, this is all of interest (if you’re interested in such things), but the real progress measures will be based on the average score of each pupil with each KS1 APS score. I’d really like to collect some more data to try to get a more reliable estimate of those figures, so if you would be willing to contribute your school’s KS1 and KS2 data, please see my previous blog here.

Spread of data

Following a request in the comments, below, I’ve also attached a table showing the proportions of pupils achieving each scaled score for the two tests. This is now based on around 2800-2900 pupils, and again it’s important to note that this is not a representative sample.


A few words on the 65% floor standard

There’s been much discussion about this in the last few days, so I thought I’d summarise a few thoughts.

Firstly, many people seem to think that the government will be forced to review the use of a 65% floor standard in light of the fact that only 53% of pupils nationally met the combined requirements. In fact, I’d argue the opposite: the fact that so few schools exceed the attainment element of the floor standard is no bad thing. Indeed, I’d prefer it if no such attainment element existed.

There will be schools for whom reaching 65% combined Reading, Writing & Maths attainment did not require an inordinate amount of work – and won’t necessarily represent great progress. Why should those schools escape further scrutiny just because they had well-prepared intakes? Of course, there will be others who have met the standard through outstanding teaching and learning… but they will have great progress measures too. The 65% threshold is inherently unfair on those schools working with the most challenging intakes and has no good purpose.

That’s why I welcomed the new progress measures. Yes it’s technical, and yes it’s annoying that we won’t have it for another couple of months, but it is a fairer representation of how well a school has achieved in educating its pupils – regardless of their prior attainment.

That said, there will be schools fretting about their low combined Reading, Writing & Maths scores. I carried out a survey immediately after results were released, and so far 548 schools have responded, sharing their combined RWM scores. From that (entirely unscientific self-selecting) group, just 28% of schools had reached the 65% attainment threshold. And the spread of results is quite broad – including schools at both 0% and 100%.

The graph below shows the spread of results with each colour showing a band of 1/5th of schools in the survey. Half of schools fell between 44% and 66%.

Combined attainment

Click to see full-size version
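Reproducing that banding from your own figures takes only a line or two of Python – the numbers below are made up for illustration, not the survey data:

```python
import statistics

# Made-up combined RWM percentages for a handful of schools - the real
# survey had 548 responses; these figures are illustrative only.
results = [0, 32, 44, 47, 52, 55, 58, 61, 66, 70, 78, 84, 100]

# The four cut points (20th, 40th, 60th and 80th percentiles) that
# split the schools into five equal-sized colour bands.
bands = statistics.quantiles(results, n=5, method="inclusive")
print(bands)
```

The middle three of those five bands then cover the central 60% of schools, which is how a statement like “half of schools fell between 44% and 66%” falls straight out of the data.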

As I said on the day the results were published – for a huge number of schools, the progress measure will become all important this year. And for that, we just have to wait.


Since posting, a few people have quite rightly raised the issue of junior/middle schools, who have far less control over the KS1 judgements (and indeed in middle schools, don’t even have control over the whole Key Stage). There are significant issues here about the comparability of KS1 data between infant/first schools and through primary schools (although not necessarily with the obvious conclusions). I do think that it’s a real problem that needs addressing: but I don’t think that the attainment floor standard does anything to address it, so it’s a separate – albeit important – issue.