Our Ofsted experience

I’m reliably assured that mentioning Ofsted is bound to get a spike in visits to one’s blog page, so let’s see.

About a month ago, we were thrilled to receive that lunchtime phone call that meant the wait was finally over. As any school with a ‘Requires Improvement’ label (or worse) will know, although perhaps never quite ‘welcome’, there comes a point where the Ofsted call is desired, if only to end the waiting. We wanted to get rid of the label, and so this was our chance.

We’d been “due” for a few months, but knew that it could be as late as the summer, so in the end, the second week after Easter didn’t seem so bad (particularly as it left us with a long weekend in the aftermath).

So how did it go? Well, for those of you interested in grades, I am now the deputy headteacher of an officially GOOD school. It’s funny how that matters. Six weeks ago, I was just deputy of an unofficially good one.

But those of you still awaiting the call will be more interested in the process than the outcome, so let me start by saying that having spent the past 18 months building up my collection of “But Sean Harford says…” comments, I didn’t have to call upon it once. The team who visited us were exemplary in their execution of the process according to the new guidance and myth-busting in the handbook.

In the conversation on the day of the phone call, we covered practicalities, and provided some additional details to the lead inspector: timetables, a copy of our latest SEF (4 pages of brief notes – not War and Peace) and the like. And then we set about preparing. We had only just that week been collating teachers’ judgements of children’s current attainment into a new MIS, so it was a good opportunity for us to find out how it worked in practice!

We don’t keep reams of data, we don’t use “points of progress”, and we’ve gone to some length to avoid recreating levels. All for good reasons, but always aware that a ‘rogue’ team could find it hard to make snap judgements, and so make bad ones. The data we provided to the team was simple: the proportion of children in each year group whom teachers considered to be “on track” to meet, or exceed, end-of-Key-Stage expectations. We compared some key groups (gender, Pupil Premium, SEN) and that’s it. It could all fit on a piece of A4. So when it came to the inspection itself, there was a risk.

Day One

It may be a cliché to say it, but the inspection was definitely done with rather than to us. The first day included joint observations and feedback with the headteacher, as well as separate observations (we had a 3-person team). An inspector met with the SENCo, and the lead also met with the English and Maths subject leaders (the former of whom happens to be me!) and our EYFS leader.

The main question we were asked as subject leaders was entirely sensible and reasonable: what had we done to improve our subjects in the school? I think we both managed to answer the “why?” and “what impact?” in our responses, so further detail wasn’t sought there, but it was clear that impact was key.

Book Scrutiny

The afternoon of the first day was given over to book scrutiny. We provided books from across the ability range in the core subjects, as well as ‘theme’ books for each team. The scrutiny focused most closely on Years 2, 4 and 6, which fits both with the way we structure our classes and our curriculum and assessment approach. Alongside books, we provided print-outs for some children that showed our judgements on our internal tracking system. I’m not sure whether the focus was set out as clearly as this, but my perception of the scrutiny (with which both my headteacher and I were involved) was that the team were looking at:

  • Was the work of an appropriate standard for the age of the children? (including content, presentation, etc.)
  • Was there marking that was in line with the school’s policy? (one inspector described our marking – positively – as “no frills”, which I quite liked)
  • Was there evidence that children were making progress at an appropriate rate for their starting points?

They asked for the feedback policy in advance, and referred to it briefly, but the focus on marking was mainly on checking that it matched what we said we did, and that where it was used, it helped lead to progress. Some pages in books were unmarked. Some comments were brief. Not all had direct responses – but there was evidence that feedback was supporting progression.

Being involved in the process meant that we could provide context (‘Yes, this piece does look amazing but was quite heavily structured; here’s the independent follow-up’; ‘Yes, there is a heavy focus on number, but that’s how our curriculum is deliberately structured’, etc.). But it also meant a lot of awkward watching and wondering  – particularly when one inspector was looking closely at the books from my class!

The meeting at the end of the first day was a reasoned wander through the framework to identify where judgements were heading and what additional information might be needed. We were aware of one lower-attaining cohort, which was identified, so offered some further evidence from their peers to support our judgements. There was more teaching to be seen to complete the evidence needed for that. And there was one important question about assessment.

Assessment without levels

I had expected it. Assessment is so much more difficult for inspectors to keep on top of in the new world, and so I fully expected to have to explain things in more detail than in the past. But I was also slightly fearful of how it might be received. I needn’t have been this time. The question was perfectly sensible: our key metric is about children being “on track”, so how do we ensure that those who are not on-track (and not even close) are also making good progress?

That’s a good question; indeed it might even have been remiss not to have asked it! We were happily able to provide examples of books for specific children, along with our assessments recorded in our tracker, to show exactly what they were able to do now that they couldn’t do at the end of last academic year. It gave us a good opportunity to show how we focus classroom assessment on what children can and can’t do and adapt our teaching accordingly – far more important than the big-picture figures.

Day Two

On the second day I observed a teacher alongside the lead inspector, and was again pleased by the experience. As in all lessons, not everything went perfectly to plan, but when I reported my thoughts afterwards, we had a sensible discussion about the intentions of the lesson and what had been achieved, recognising that the deviation from the initial plan was good and proper in the circumstances. There was no sense of inspectors trying to catch anyone out.

Many of the other activities were as you’d expect: conversations with children and listening to readers (neither of which we were involved in, but I presume the children acquitted themselves well); a meeting with a group of governors (which I also wasn’t involved in, but they seem to have acquitted themselves well too); a conversation about SMSC and British Values (with a brief tour to look at examples of evidence around the school); watching assembly, etc.

Then, on the afternoon of day two we sat with the inspection team as they went through their deliberation about the final judgements. In some ways it’s both fascinating and torturous to be a witness in the process – but surely better than the alternative of not being!

As with any good outcome, we got the result we felt we were due (and deserved), and areas for feedback that aligned with what was already identified on our development plan for the forthcoming year. The feedback was constructive, formative, and didn’t attempt to solve problems that didn’t exist.

And then we went to the pub!

Is it time to bring Ofsted in from the cold?

There’s been a clear rehabilitation process over many months now, and both Mike Cladingbowl and Sean Harford have done a great deal to try to win over the support of the profession. But there is still a lot of damage to repair. Ofsted is not yet a friend of the profession at large.

And arguably, nor should it be. An inspectorate should not be too cosy with those whom it inspects. But it must garner the trust and respect of the profession if it is to achieve its best in raising standards within it.

In the past it definitely failed. Too often schools and teachers found themselves doing things for Ofsted which did not help children to make progress, and sometimes even distracted teachers from that all-important role. Ofsted was seen too often as a punitive scrutiny of the minutiae, rather than a healthcheck on the quality of provision in schools.

Hopefully things are changing. Three years ago, Tom Sherrington told us all:

Overlooking his scandalous failure to use the subjunctive form – clearly not secondary ready – it seemed a fair point. More recently, it’s a point that has often been echoed by Sean Harford:

https://twitter.com/jetpack/status/647453264666042369

So is it time to let go of a difficult past and try to re-integrate Ofsted into our society?

Because the alternatives may be worse.

Nobody welcomes being inspected. It’s a necessarily high-stakes event, and in some ways the fact that it’s carried out by real people can make it feel worse. But surely the evaluation of our work by real people has got to be better than evaluation by data?

We all know of plenty of stories of schools with good results using worrying practices, or schools with easy intakes failing to challenge their pupils. Equally, we can all see examples of schools struggling to crawl their way up league tables who are nevertheless achieving great things with the pupils in their care. An appropriate and well-managed inspection process can appreciate these variations, can discuss situations with schools, and can still offer the necessary challenge. True, it may not be perfect – indeed, for too long it hasn’t been.

But the alternatives may be worse.

Regional Schools Commissioners have an unmanageable number of schools to monitor and so have already shown themselves to be dependent on overly-simplistic numerical data with too little thought for the detail behind it. Performance tables can only ever show the narrowest of views of what a school achieves. And if we’re honest, as much as a mutual support structure would be a delight, we are very unlikely to see such a diminishment of central accountability.

It’s tempting to say “Better the devil you know…”, but if we think that Ofsted is the devil, that doesn’t give us a lot of metaphorical scope for the alternatives.

Be careful what you wish for.

Unfair?

[Image: DfEOfsted.png]

For whom do we toil?

When I was a young(er) teacher, I learnt a few tricks for observations.

As an NQT, I knew that my mentor wanted to see a calm environment; she saw it as an indicator of the all-important behaviour management. And so I obliged.

In later years I had a Head of Year for whom I always included something that engendered good engagement – all the better if it was on coloured paper. Another subject leader rated talk partners, and so they always appeared in lessons in which I was observed.

When marking became the thing, I’d always ensure that I grouped children in my observed lessons according to the work I’d marked the night before. Rarely did I do it at any other time, but it ticked the box.

And then it was progress in the lesson. So in every observed lesson, I ensured that I asked children to do something at the start (often giving them rather too little time or guidance), before teaching them some new skill and asking them to try the task again, with the improvement clear for the observer to see.

A cynic might suggest that these things didn’t help children make progress, but rather created the illusion of progress for the observer.

And now it’s progress over time. But I’m a cynic.

The latest craze seems to be for hot and cold tasks and the like. Now I’m sure there are many arguments for this approach in some cases, but it seems that the main reason put forward is for its ability to “demonstrate progress over time”.

It’s the drawn-out version of my “progress in a lesson” trick, showing progress over a period of days or weeks instead. It offers the evidence on a plate to our external judges; it stops them from ‘catching us out’ on that tickbox in the Ofsted framework.

But frankly, if an inspector can’t see progress over time by looking in books, then either there is something very wrong with the books… or with the inspector!

Progress over time is when children go from using simple multiplication facts to being able to use the standard written method.

Progress over time is when children who use repetitive sentence structures in September are showing more variety by January.

Progress over time is a well-planned curriculum that builds on prior learning and extends pupils’ experiences.

We shouldn’t be finding ways of making progress over time evident; we need to be finding ways to make progress over time happen. The evidence will come. And if that means dragging the inspector to see it, then so be it.

The guide to preparing for Ofsted

So we’ve got a new handbook, but it’s a lot to wade through, so how’s this for a shortlist?

Things to do to prepare for Ofsted:

  • Steal a British Values policy from someone else’s website. Change the school name.
  • Get a British Values display up somewhere. Flags compulsory. A picture of the Houses of Parliament a bonus.
  • Teach the kids the British Values. (Not necessarily in any depth – after all, they’re meaningless… but make sure they can crowbar “the rule of law” into pretty much any answer they give to an inspector)
  • Teach the governors the British Values (see above)
  • Teach the governors everything in Raise. Everything.
  • Create some assessment without levels data. (You can achieve this by taking your old levelled data and changing the levels into some new code; they don’t need to understand it)
  • Teach the governors the new code
  • Make sure you prevent at least a couple of people moving up the payscale. This shows rigour.
  • Buy in enough tippex to anonymise the appraisal data for the last three years, but not so much that you can’t see that you prevented someone moving up the payscale.
  • Scour every book – ensure that every other page has a detailed comment, with pupil response (left-handed writing may help here)
  • Look closely at the marking quality in your school; re-write your policy so that it matches what inspectors will see. They can’t get you on that one, then.
  • Upload your curriculum, pupil premium policy, SEN policy, behaviour policy, sports funding report, governor checklist, QTS qualification, birth certificate and last will & testament to the school website.
  • Stick labels on pupil premium pupils’ books, trays, chairs, tables and ear lobes.
  • Print off your attendance data. All of it. At least thrice weekly, just in case.
  • Gather a shortlist of supportive parents. You may want to call them on the morning to ensure that they are available to loiter on the playground and say the right things.

Oh… and if you can teach the kids well, all the better!

Herding cats would be easier.

When I posted yesterday, I genuinely wanted my post to be about the important aspects of feedback as opposed to marking. I only made a mention of the challenges of Ofsted because I knew that otherwise that would be the response I’d get.

However, as so often is the case, far from being the hidden elephant in the room, Ofsted is fully on display and always up for discussion.

It’s only fair to say that I really appreciate the direction in which the inspectorate appears to be moving, and I genuinely believe in the intentions of those at the helm. But it’s the troops that worry me. For just as there are bad teachers (and probably more than many of us would care to admit), so there are bad inspectors who wield more power than any of us would care to have to deal with. And there are many, many inspectors.

Following the post, Sean Harford pointed out the clarification document (which I think is great), and Paul Garvey tried to persuade me that I should have the courage of my convictions. But as I said yesterday, I just don’t. I can’t. And I’m in the fortunate position of being only deputy – imagine how hard it might be for a Head to have such courage.

For the reality is, for all Ofsted’s claims about not having preferred marking styles, not expecting detailed dialogue, and many other things, it’s hard to see change on the ground. I wrote only recently about one inspector who proudly tells audiences that they should stick with levels. Any school inspected by him can hope that he follows the guidance about not prescribing a system – but he made equally clear what he’d be looking for. Why would any head of an improving school take the risk of not providing it?

Of course, the reality is that the majority (probably the vast majority) of inspectors are excellent professionals. But we ask them to do a mammoth task. Not only must they judge the current success of a school, they must also ascertain whether or not it is improving, and if not, what is holding it back. All in the space of a few hours. Inevitably there will be obvious places to look, but they’re not always the right places.

I’ve been at my current school 9 months and I’m now confident that I have a grasp of what its needs are as part of its school improvement journey. They’re not the same things that I thought back in September, and they’re certainly not the same things the Ofsted team recommended when they visited before I arrived. Of course, now my colleagues and I have to strike the right balance between doing what is right, and doing what the last team said we must do.

The trouble for inspection teams looking for areas for development is that 2 days is no time at all. It inevitably leads to the obvious conclusions. As I explained to Paul Garvey, in my experience curriculum development and structure are massively undervalued in schools, which means that children don’t make the progress they might. But when was the last time anyone ever saw that in an inspection report? It’s much harder to pin down. Much easier to stick in a statement that says marking could be improved. Who could ever argue with that?

Paul’s argument was that it would have to be supported by evidence; mine is that such evidence doesn’t exist. We cannot possibly point to a causal link between any form of marking and resultant learning/progress. Of course you can ask children, or look at books, or speak to teachers, or a whole host of other things. But the reality is, unpicking why children do or don’t make progress is hard. As David Didau says: we’re using a metaphor to map a mystery whenever we make such judgements.

So when a lead inspector decides that marking should be blamed for poor progress – or, in an excellent school, that it is a good reason for withholding the Outstanding grade – how can anyone argue? If the inspector chooses to state that infrequent, imprecise or brief feedback is to blame, what possible evidence can you provide to contradict it?

It is easy for a lead inspector to draw the conclusion that if marking were “improved” (of course, without stating how), that progress would also improve. Yet we actually know next to nothing about the link here. We talk about feedback as a great intervention, yet we know little of the detail of what that should look like. I have never seen any report, review or research state how often written feedback should be given, or in what style. We simply cannot be that precise.

But Ofsted reports imply that we can. If an inspector has a preconceived idea of what marking should look like, and doesn’t see it, then there is nothing stopping him from putting that as a recommendation. It would be easy to find evidence to support it, since any such evidence is inevitably subjective; it’s much harder to prove that it’s wrong.

It’s worth stating again: I think Ofsted is trying to change. I also think they usually get inspection judgements right. But the recommendations – that’s much trickier, and yet it has such an impact on the profession as a whole. So our whole system is built upon recommendations made with the best of intentions by inspectors who all have their own opinions – and years of experience – of what marking (and many other things like it) should look like.

And trying to change that? I’ll stick to herding cats.

Top dog? No, thanks!

This morning, Sean Harford posted a fascinating question on Twitter:
https://twitter.com/jetpack/status/603422209185570816
And so I wrote this:

When I was looking for a deputy post, I couldn’t help but notice how few there were compared to the number of headships being advertised. I came to the conclusion that many people were reaching the position of deputy… And then sitting tight.

I deliberately sought out schools that Ofsted deemed to Require Improvement. Having been on the journey to Good as a middle leader, I’d eventually enjoyed the challenge and the pleasure of reaching that goal (if not necessarily the whole journey). So now I am deputy in an RI-graded school, trying to do everything I can to help the school to improve.

I’m prepared to put in the hours. I’m certainly open to new evidence and approaches. I’m trying as hard as I can to strike the right balance between challenge and support of my colleagues in school.

But you can be sure that if my school’s headteacher decided to pack it all in tomorrow, I wouldn’t be putting my name in the hat!

That’s not to say that I’d never want to be a Head: my mind changes on that pretty much weekly. But who in their right mind would take on that challenge in the knowledge of what fate might befall you if things take a badly-timed turn?

Consider an example RI school. It’s not on a rough inner city sink estate or anything of the sort, but it has its challenges. Attendance is definitely a tougher challenge than in many schools in leafy suburbs. Attainment is definitely lower on intake. Parents naturally want the best for their children, but are not always able to provide it. Recruitment is hugely challenging.

Raising standards in these schools takes the work of the whole school community. But the buck stops in one place.

Imagine such a school gets an unexpectedly bad set of results one year. We know it happens.
And imagine it then gets a badly-led inspection team visit that year. We know it happens.

What then, the consequences for a headteacher who has perhaps been in post for 20 months? The stakes now are massive.

Of course, I’m not arguing that leading ‘Good’ schools is easy. But look at the data on Ofsted outcomes compared to intakes and you can see why the risks might at least be lessened. And true, there’s the risk of being deemed to be coasting now, so perhaps all headships will become equally unappealing in due course, which I guess certainly alters, if not solves, the problem.

But there is a reality to face about schools in challenging circumstances. Firstly they’re not rare. The catastrophic environments that make the press might be, but there are plenty of schools dealing with challenges in their communities and trying to do the best by the families they serve. Secondly, there’s no over-supply of excellent leaders ready to leap in and save them.
And high stakes inspection isn’t always helping.

So what should Ofsted do?

Firstly, I’d like to see new leaders given time. Not unfettered freedom to fail, but time to make the changes that will lead to visible impact before inspectors are forced to nail colours to the mast, and leaders to the cross.

Ideally, Ofsted would still have an involvement with the school. I think the link between an RI school and its HMI should be strengthened. In fact, ideally, I’d like to see all inspections led by an HMI who then remains responsible for any schools put into a category or RI. And that responsibility should be greater than a single check-up after twelve weeks. I’d like to see HMIs visiting at least termly to provide the robust challenge and guidance that may well be needed. That way, the same inspector who made the initial recommendations can also follow up on progress. There is still an issue of HMI having to judge progress against recommendations which they might not really agree with. And perhaps still a case of too many lead inspectors writing reports offering spurious targets for improvement, safe in the knowledge that they’ll be somebody else’s problem.

If inspectors stayed with a school on its journey to Good, then they could offer both challenge and support to leaders – particularly new ones – for up to 2 years before a new inspection takes place.

Of course, schools shouldn’t be allowed to avoid ever being inspected by repeatedly replacing the headteacher. But a linked HMI could recommend further inspection at any time if s/he felt it were needed or appropriate. If a school can be turned around in 12 months then early confirmation could be welcomed; if an HMI recognises progress towards Good is being made at an appropriate rate, then delaying an inspection to allow the school to focus on the task at hand ought not to be feared.

Of course, that means having enough high quality HMI available, and I don’t know if Ofsted yet has that capacity. But if not, perhaps that should be a priority?

Do I think that these changes alone will magic away the recruitment challenge, and encourage all those sitting deputies to step up? Probably not – there’s a lot more that needs to be done by DfE ministers to change their tone in that respect… But it would certainly go some way towards reducing the risk that we might one day end up with a nation of sitting deputies!

The danger of the ‘expert’

Back in 1998, Andrew Wakefield caused a stir when he produced a long-since discredited article suggesting a link between the MMR vaccine and autism. What resulted included a long-running debate, the retraction of the article by the publishing journal, and the striking off of the surgeon behind it.

But the damage was done. The fact that the original doctor behind the report has been struck off has not removed the problem from history. The reality is that when an ‘expert’ speaks, the average listener does not question their credentials, or carry out their own research: we rely on those offered up as experts to do this work for us and to guide the rest of us.

Now, my point is far less serious than that posed by the MMR controversy, and the mention of it merely illustrative. But there is an issue with ‘experts’ in our profession offering solutions which may actually cause more harm than good.

This week I was attending a conference in London hosted by The Key, at which I was speaking about my approach to assessment without levels. There were other schools represented, also sharing their own models, some of which I thought brilliant, others of which were – to my mind – awful. If nothing else, such events serve to ground me and make clear that I am no oracle.

However, at the same event, David Driscoll – an education consultant who also works as an additional inspector and as an “associate education expert” for The Key – was asked to speak on the topic of “Inspection of Assessment”. He was listed in the brochure as an expert, and the intention was clearly to provide leaders with guidance on managing new assessment systems in an Ofsted-friendly way.

Now, the fact that such reassurance is needed suggests that Ofsted’s messages about not having a pre-determined view on what assessment systems should look like are not yet trusted by the profession; the presentation given by this particular lead inspector demonstrates exactly why that is the case!

To credit Mr Driscoll, he did at least once state the official Ofsted view. However, he then proceeded to explain to delegates that data ought to be presented in standard forms, and that schools would be best advised to keep levels and simply re-write the descriptors.

I was astounded.

He continued to explain that schools needed to choose a starting and end measure and define a fixed measure of expected progress, saying that “you need a number”. Now, perhaps this was evidence of his own limited concept of how assessment and progress works, but it certainly isn’t a message that fits in with the direction of travel in education at the moment. Or at least, it oughtn’t be. But, of course, the problem is that he is “the expert”. And every headteacher there will have had at the back of their mind the realisation that he could be the lead inspector on their next Ofsted visit.

Worryingly, he also stated with some aplomb that there was only one statutory requirement for reporting to parents (namely that we report on the progress a child has made). Clearly he isn’t familiar with The Education (Pupil Information) (England) Regulations 2005.

This was different to the usual doubts I might have about another school’s approach to assessment without levels. This was not some practising school leader musing on his current thinking (in fact, it appears from his website that Mr Driscoll hasn’t taught for over 25 years). This was someone presented as an expert, offering guidance on how data ought to be managed and presented for the purposes of Ofsted. It was advice that was likely to take priority over much of the other content of the day (including excellent presentations from people such as Katharine Bailey of the CEM at Durham).

It’s true, the problem is not a patch on the risks of poor advice about vaccinations or such things. But the root of the problem is the same: ‘experts’ with a poor message can present more danger than no message at all.

I live in hope that the Assessment Commission set up before the election soon helps to bring some guidance to the profession that quashes the nonsense spouted by ‘experts’ such as this, and ensures that Ofsted is supported to keep its inspectors in line!


For teachers unsure of how best to move forward with assessment, I cannot recommend strongly enough the article by Dylan Wiliam in Teach Primary magazine from last autumn:

Planning Assessment without levels – Dylan Wiliam

You can have too much of an Outstanding thing

I’ve never been a fan of the “Outstanding” label. I’m generally of the view that Ofsted would be much better focusing its energies on simply whether or not schools meet a required standard.

But in recent years the reverence afforded to schools which have at some point been graded as Outstanding has begun to far outstrip that which they necessarily deserve. And perhaps more importantly, the scrutiny which they are given does not match the freedoms they are afforded.

The decision to exempt Outstanding schools from inspection was always a mistake. We know from inspections forced upon previously Outstanding schools that they can slip from the pedestal – some dropping directly into RI or a category. Yet we continue to allow some schools to work for years unexamined. That’s particularly surprising considering the changes due to come in from September for ‘Good’ schools. The new ‘light-touch’ one-day review process could – indeed should – have been extended to all Outstanding schools too. Currently, schools can trade for too long on an Outstanding label undeservedly. How soon would a desk analysis pick up weakness? When the school had slipped so far as to Require Improvement? Only when things are more serious?

And perhaps none of this would be so problematic if it weren’t for the power we afford these schools. Teaching Schools must be Outstanding; when Ofsted looks for new additional inspectors, it turns only to Good and Outstanding schools; headteacher representatives on regional boards are drawn exclusively from Outstanding academies. If you’re fortunate enough to inherit a strong school, then barring disaster, the world is your oyster.

But notice, none of these rules requires Outstanding individuals. Rather, it is those associated with Outstanding schools who are lauded. What of the excellent headteacher who has turned around three failing schools to make them consistently Good? Or the headteacher who leads his school through astounding challenges from external influences? Do these deserve less influence than the fortunate individual who inherits a school that happened to be Outstanding in 2007? How long can we keep this up? Is a ten-year-old Outstanding grade under completely different leadership still valid? Fifteen years?

That decision by Ofsted – drawing new inspectors only from Good and Outstanding schools – raises particular doubts. Most colleagues welcome the inclusion of more practising school leaders in inspection teams, but are leaders who are themselves exempt from inspection really the best candidates for the role? As the inspectorate attempts to salvage its reputation from the nonsense of preferred methodology and approaches, are the headteachers who profited under the older, increasingly discredited, system really likely to be the drivers of change?

Of course, there will be many headteachers who have turned schools around in difficult circumstances, and whose wisdom and experience we ought to use at the system level. But let’s not confuse outstanding headteachers with Outstanding schools; the two are not always synonymous.

As an aside, it’s worth noting that the threshold to reach Outstanding may just become a little more challenging from September, in terms of inspection if not quality. Good schools will face a single day’s inspection to check they are still good before being hit with a further two-day visit to complete a full Section 5 inspection to consider whether they are outstanding. One might wonder if there aren’t incentives there for school leaders who want to be left alone to do their job well to ensure that they aren’t at risk of being thought outstanding. Increasingly we see good heads aiming for their own excellence rather than that of the Ofsted ilk. Might we miss out on more good systems leaders simply because they refuse to play the Ofsted game?

Dear Assessment Commissioners…

Today we found out who would be on the new Assessment Commission to support schools, and I’ll confess to being slightly disappointed. Not just because I wanted to be on it (any opportunity to make people listen to me!), but because, having been promised a teacher-led commission, there isn’t a single practising teacher on it. Headteachers are all well and good, but the reality is that very few headteachers have direct responsibility for assessment; I’m not suggesting for a second that they don’t have a place on the commission, but the absence of someone who has actual daily responsibility for working with the ins-and-outs of assessment in the classroom seems to me a glaring omission.

I’m also disappointed that, despite the primary sector making up two-thirds of the year groups affected by assessment without levels, and the fact that levels were the basis of our statutory end-of-school assessment, the sector seems rather poorly represented.

That said, I do not suppose that the commission will be a bad one. I have a lot of time for many of its members, and know that representatives such as Dame Alison Peacock will serve the primary sector well. I am sure that they will all treat the role with the importance it deserves and attempt to do their best to support and enable schools. And it is with that in mind that I offer unto them my thoughts on what they should and should not do as part of the commission.

The biggest issue

I’ve done lots of work up and down the country this year looking at assessment without levels. In every case I’ve pointed out to schools, family groups, local authorities and others that the capacity already exists within the system for high quality assessment systems. Schools are already filled with qualified and experienced teachers who well know how to assess learning.

Yet there is still a sense of paralysis. School leaders are not yet taking the bull by the horns either to buy in or create assessment approaches. In almost every case, that paralysis is caused by the fear of accountability. Not because school leaders are fearful of being held accountable, but because they fear making educationally-sound decisions which then don’t marry up with the statistical demands of local authorities, the department and Ofsted. But at local authority level, the fear is the same: they daren’t advise schools about models or approaches, because they too are in the dark about what Ofsted will expect.

The issue that comes up time and again is measuring progress. Schools largely feel confident about assessing children’s learning. By and large they are happy about making judgements about attainment – although the changing goalposts of the Performance Descriptors haven’t helped. What worries schools is the need to demonstrate progress between and within year groups across primary schools.

Note, the issue is not about children making progress, but about demonstrating it to outside agencies. We are so caught up in the APS model, and so used to being held to account on the basis of the tiniest blip in tiny samples of data, that everyone fears what might come next. It is all well and good for the DfE to say that schools should have free rein, and for Ofsted to say that they’ll work with what schools have, but until the first inspection reports appear, school leaders will feel paralysed.

Of course, the reality is that part of the reason for removing levels was exactly the crazy system of trying to measure progress through steps and bands. But schools did not impose that system upon themselves; that was driven by Ofsted and the department. The most pressing and urgent matter for the commission to turn its attention to is clarity from both overseers about what will be expected from schools.

School leaders are still fully expecting to be asked for half-termly data, showing steps of progress. They know that this is not how learning works – they always have – but the system has been in place so long that they cannot believe that it won’t be demanded again.

The commission should be honest about this. We know from publications this week from the new Education Datalab that progress is simply not linear. If that is true over the long scale of several key stages, then it is undoubtedly more so over short periods such as a single term or half-term. The idea that progress can be measured in steps every six weeks should quickly be debunked, and more rounded approaches put in place. That certainly makes things harder for schools in categories to show rapid improvement, and for external agencies to spot potential weaknesses… but the old system allowed those judgements to be made on the basis of very poor evidence.

Once that’s sorted…

If the issue of what will be expected in terms of demonstrating progress cannot be dealt with, then little else that the commission achieves will be worthwhile. The power of Ofsted to dictate practice – however unwittingly – should not be underestimated.

So let’s presume that we get some clarity on this front. What else needs to be addressed?

Firstly, I’d like to see a set of meaningful principles set out that will help to guide school leaders. Naturally, I offer my own 7 questions as a good starting point, but would welcome any improvement and addition to them that supported schools. Suffice it to say, I also think the DfE’s own effort could be improved upon considerably.

What else?

Importantly, the commission needs to recognise the weakness it suffers from in not having any practising teachers on board. We know only too well how work done in isolation can lead to a well-intentioned method becoming an unmanageable nightmare in practice. This could not be truer than with assessment. There are already plenty of good examples around of unwieldy systems that claim to be based on best practice, but which give little consideration to the workload of a typical teacher.

One of the mercies of the existing system was that – with experience – a teacher could bypass all the admin and simply use their professional knowledge to estimate a reasonable sub-level assessment. If some of the systems that provide endless ticklist items and minuscule ‘steps of progress’ are encouraged, then doubtless the nightmare of APP that Liz Truss was so happy to condemn will soon become a fond memory for teachers.

Dylan Wiliam explains this very well in his excellent article in Teach Primary magazine:

A school’s assessment system could assess everything students are learning, but then teachers would spend more time assessing than teaching. The important point here is that any assessment system needs to be selective about what gets assessed and what does not

This principle is particularly important because of the haphazard way in which the National Curriculum has been constructed. There is repetition in the English curriculum, and a lack of clarity between statutory and non-statutory content in the Maths. These two central elements of the primary curriculum are too jumbled in their statutory form to simply form the basis of all assessment.

The ‘Don’t do’ list

The commission must not recommend solutions. Guiding principles are great for supporting schools. However, if a DfE-commissioned group is seen to endorse a particular product, or group of products, then these will become the de facto expectations in schools. Schools which have good systems in place will feel compelled to throw out valuable work to conform to the recommendations. Good examples may well illustrate effective approaches, but the commission’s reports must be careful to emphasise where these are examples, not exemplars.

Furthermore, the commission must not flinch from criticising approaches and methods which are unsound. I say this knowing that I have made my own suggested approach widely known, and yet recognise that it is flawed. The commission must not let its support be tempered by kindness to effort or concern for commerce. If a system is bad in some way, then it must be pointed out. If an approach has flaws, then we must be honest about them. The reality is that probably all assessment systems are flawed; it is only in recognising these flaws that we come to understand the strengths and weaknesses of what we’re working with.

The commission must not act as the DfE’s agent, or be fearful of encroaching on the work of Ofsted. If advice needs to be given to schools – or directly to the department or its agencies – then the commission must do so, and do so clearly. It will not have escaped the attention of the teaching community that many of the members of the commission are already well-known for working with the government and/or sharing its aims. The commission – if not teacher-led – must at least be seen to have the interests of the profession and its students as its first priority.

Finally, it is important that the commission is clear about the role of assessment. I have repeatedly revisited the mantra that Tracking is not the same as Assessment. The difference here is particularly significant given the role of headteachers on the commission. School leaders understandably have a desire and need for tracking data: their careers rest on the next set of results, and being able to predict them is important. However, the bread-and-butter of good teaching and learning is based much more on assessment – knowing what a child can and can’t do. The commission should primarily concern itself with assessment, before identifying how such assessments can be used to support tracking.

Is that it?

One final disappointment of today’s announcement is that we still don’t have sight of the terms of reference of the commission. This is significant, because the areas in which they could work are expansive. I hope that terms will be published imminently.

Doubtless, once they are, I will find new things that need raising, and similarly, as any reports emerge from the commission, I am sure there will be more to say.

Of course, one advantage of not being on the commission is the ability to say these things with impunity, and to pronounce publicly on the outcomes of any reports. I wish the commissioners good luck with this mammoth task – and look forward to challenging them thoroughly and constantly to ensure that their work brings benefits for the education of children in our schools – and hopefully their teachers!