Tag Archives: ofsted

What if some inspectors… are wrong?!

Just recently I got into a brief discussion with a headteacher who happened also to be an Ofsted inspector (and had been re-trained under the new in-house arrangements). I was suggesting that we know relatively little about what constitutes effective marking, and therefore it’s hard to make judgements about what a good policy might look like.

The disagreement was outright. This headteacher maintained, with some considerable confidence, that they could tell whether marking was effective just by looking at a few books.

And I couldn’t disagree more.

For as far as I can tell, there is relatively little (if any) research easily available out there about what constitutes effective marking. The EEF toolkit offers very strong indications that feedback is an effective tool for increasing progress, but feedback and marking are not necessarily synonymous.

The toolkit itself sets out a definition of feedback:

Feedback is information given to the learner and/or the teacher about the learner’s performance relative to learning goals. It should aim to (and be capable of) producing improvement in students’ learning. Feedback redirects or refocuses either the teacher’s or the learner’s actions to achieve a goal, by aligning effort and activity with an outcome.

none of which requires that feedback be given in the form of written marking.

The problem is, a shared wisdom has grown up around marking that can easily be explained, but not necessarily justified. The explanation is simple: people quite rightly pointed out that if marking didn’t lead to some ‘redirection’ or ‘refocusing’ of the learner’s actions, then it was probably wasted. But rather than concluding that much written marking was useless, the presumption became that all marking would be effective if it led to some sort of action. So the marking load continued to increase, and the complexity with it.

I can’t see any evidence that that was the right conclusion to reach, but it has now become so widely held a view that it’s hard to argue against. What’s worse: it’s very easy to go from that received wisdom to thinking one can spot effective marking. I can certainly identify marking that meets the expected norms of dialogue and “DIRT” and the like. But that’s not necessarily the same as it being effective.

My personal view – equally unsupported by evidence – is that the vast majority of marking is wasteful. As I’ve said before, there’s a real diminishing return after more than a few seconds of looking at work, and by the time it has been marked in detail and acted upon, that return may well be negated by the effort expended. (See Is marking the enemy of feedback?) A whole host of feedback can occur (both to teacher and student) without a pen ever touching the page.

I don’t suggest scrapping marking, but merely point out that whether I’m right or not is frankly academic.

Because for all the clarification documents in the world from Ofsted, nothing will give me the freedom to demonstrate that I can be equally effective without excess marking all the time there are inspectors who believe that they can tell effective marking just by looking at it. The argument from authority of inspectors is impossible to fight against.

And it’s not the first time it’s happened. I’ve had more than one public ‘spat’ in recent months with inspectors who argue that they know better. And it’s easy for them to claim they know better because they’ve been trained. Or they’re well-qualified. Or they’ve been inspecting for x years.

But what if we don’t know? What if that confidence is false? What if what hundreds of inspectors think is ‘effective marking’ is actually just wasteful annotation, but there’s no evidence to show otherwise?

I don’t know if I’m right. I may well not be. But I’m not yet convinced that people like the inspectors I’ve talked to recently are either. Unfortunately, all the time they have authority on their side, they can maintain their false confidence in what they believe they should see.

And no amount of ‘clarification’ from the top will make the slightest difference, all the time inspectors are free to state without a hint of doubt that they know what effective marking looks like. So woe betide any of us who doesn’t conform to their expectations.

Herding cats would be easier.

When I posted yesterday, I genuinely wanted my post to be about the important aspects of feedback as opposed to marking. I only made a mention of the challenges of Ofsted because I knew that otherwise that would be the response I’d get.

However, as so often is the case, far from being the hidden elephant in the room, Ofsted is fully on display and always up for discussion.

It’s only fair to say that I really appreciate the direction in which the inspectorate appears to be moving, and I genuinely believe the intentions of those at the helm. But it’s the troops that worry me. For just as there are bad teachers (and probably more than many of us would care to admit), so there are bad inspectors who wield more power than any of us would care to have to deal with. And there are many, many inspectors.

Following the post, Sean Harford pointed out the clarification document (which I think is great), and Paul Garvey tried to persuade me that I should have the courage of my convictions. But as I said yesterday, I just don’t. I can’t. And I’m in the fortunate position of being only deputy – imagine how hard it might be for a Head to have such courage.

For the reality is, for all Ofsted’s claims about not having preferred marking styles, not expecting detailed dialogue, and many other things, it’s hard to see change on the ground. I wrote only recently about one inspector who proudly tells audiences that they should stick with levels. Any school inspected by him can hope that he follows the guidance about not prescribing a system – but he made equally clear what he’d be looking for. Why would any head of an improving school take the risk of not providing it?

Of course, the reality is that the majority (probably the vast majority) of inspectors are excellent professionals. But we ask them to do a mammoth task. Not only must they judge the current success of a school, they must also ascertain whether or not it is improving, and if not, what is holding it back. All in the space of a few hours. Inevitably there will be obvious places to look, but they’re not always the right places.

I’ve been at my current school 9 months and I’m now confident that I have a grasp of what its needs are as part of its school improvement journey. They’re not the same things that I thought back in September, and they’re certainly not the same things the Ofsted team recommended when they visited before I arrived. Of course, now my colleagues and I have to strike the right balance between doing what is right, and doing what the last team said we must do.

The trouble for inspection teams looking for areas for development is that 2 days is no time at all. It inevitably leads to the obvious conclusions. As I explained to Paul Garvey, in my experience curriculum development and structure is massively undervalued in schools, which means that children don’t make the progress they might. But when was the last time anyone ever saw that in an inspection report? It’s much harder to pin down. Much easier to stick in a statement that says marking could be improved. Who could ever argue with that?

Paul’s argument was that it would have to be supported by evidence; mine is that such evidence doesn’t exist. We cannot possibly point to a causal link between any form of marking and resultant learning/progress. Of course you can ask children, or look at books, or speak to teachers, or a whole host of other things. But the reality is, unpicking why children do or don’t make progress is hard. As David Didau says: we’re using a metaphor to map a mystery whenever we make such judgements.

So when a lead inspector decides that marking should be blamed for poor progress – or, in an excellent school, is a good reason for withholding the Outstanding grade – how can anyone argue? If the inspector chooses to state that infrequent, imprecise or brief feedback is to blame, what possible evidence can you provide to contradict it?

It is easy for a lead inspector to draw the conclusion that if marking were “improved” (of course, without stating how), progress would also improve. Yet we actually know next to nothing about the link here. We talk about feedback as a great intervention, yet we know little of the detail of what it should look like. I have never seen any report, review or research state how often written feedback should be given, or in what style. We simply cannot be that precise.

But Ofsted reports imply that we can. If an inspector has a preconceived idea of what marking should look like, and doesn’t see it, then there is nothing stopping him from putting that as a recommendation. It would be easy to find evidence to support it, since any such evidence is inevitably subjective; it’s much harder to prove that it’s wrong.

It’s worth stating again: I think Ofsted is trying to change. I also think they usually get inspection judgements right. But the recommendations – those are much trickier, and yet have such an impact on the profession as a whole. So our whole system is built upon the well-intentioned recommendations of inspectors who all have their own opinions – and years of experience – of what marking (and many other things like it) should look like.

And trying to change that? I’ll stick to herding cats.

Top dog? No, thanks!

This morning, Sean Harford posted a fascinating question on Twitter:

[Embedded tweet]
And so I wrote this:

When I was looking for a deputy post, I couldn’t help but notice how few there were compared to the number of headships being advertised. I came to the conclusion that many people were reaching the position of deputy… And then sitting tight.

I deliberately sought out schools that Ofsted deemed to Require Improvement. Having been on the journey to Good as a middle leader, I had enjoyed the challenge and, eventually, the pleasure of reaching that goal (if not necessarily the whole journey). So now I am deputy in an RI-graded school, trying to do everything I can to help the school to improve.

I’m prepared to put in the hours. I’m certainly open to new evidence and approaches. I’m trying as hard as I can to strike the right balance between challenge and support of my colleagues in school.

But you can be sure that if my school’s headteacher decided to pack it all in tomorrow, I wouldn’t be putting my name in the hat!

That’s not to say that I’d never want to be a Head: my mind changes on that pretty much weekly. But who in their right mind would take on that challenge in the knowledge of what fate might befall you if things take a badly-timed turn?

Consider an example RI school. It’s not on a rough inner city sink estate or anything of the sort, but it has its challenges. Attendance is definitely a tougher challenge than in many schools in leafy suburbs. Attainment is definitely lower on intake. Parents naturally want the best for their children, but are not always able to provide it. Recruitment is hugely challenging.

Raising standards in these schools takes the work of the whole school community. But the buck stops in one place.

Imagine such a school gets an unexpectedly bad set of results one year. We know it happens.
And imagine it then gets a badly-led inspection team visit that year. We know it happens.

What then, the consequences for a headteacher who has perhaps been in post for 20 months? The stakes now are massive.

Of course, I’m not arguing that leading ‘Good’ schools is easy. But look at the data on Ofsted outcomes compared to intakes and you can see why the risks might at least be lessened. And true, there’s the risk of being deemed to be coasting now, so perhaps all headships will become equally unappealing in due course, which I guess certainly alters, if not solves, the problem.

But there is a reality to face about schools in challenging circumstances. Firstly they’re not rare. The catastrophic environments that make the press might be, but there are plenty of schools dealing with challenges in their communities and trying to do the best by the families they serve. Secondly, there’s no over-supply of excellent leaders ready to leap in and save them.
And high stakes inspection isn’t always helping.

So what should Ofsted do?

Firstly, I’d like to see new leaders given time. Not unfettered freedom to fail, but time to make the changes that will lead to visible impact before inspectors are forced to nail colours to the mast, and leaders to the cross.

Ideally, Ofsted would still have an involvement with the school. I think the link between an RI school and its HMI should be strengthened. In fact, ideally, I’d like to see all inspections led by an HMI who then remains responsible for any school placed in a category or graded RI. And that responsibility should be greater than a single check-up after twelve weeks. I’d like to see HMIs visiting at least termly to provide the robust challenge and guidance that may well be needed. That way, the same inspector who made the initial recommendations can also follow up on progress. Under the current arrangements, there is still the issue of an HMI having to judge progress against recommendations with which they might not really agree – and perhaps still too many lead inspectors writing reports offering spurious targets for improvement, safe in the knowledge that they’ll be somebody else’s problem.

If inspectors stayed with a school on its journey to Good, then they could offer both challenge and support to leaders – particularly new ones – for up to 2 years before a new inspection takes place.

Of course, schools shouldn’t be allowed to avoid ever being inspected by repeatedly replacing the headteacher. But a linked HMI could recommend further inspection at any time if s/he felt it were needed or appropriate. If a school can be turned around in 12 months then early confirmation could be welcomed; if an HMI recognises progress towards Good is being made at an appropriate rate, then delaying an inspection to allow the school to focus on the task at hand ought not to be feared.

Of course, that means having enough high quality HMI available, and I don’t know if Ofsted yet has that capacity. But if not, perhaps that should be a priority?

Do I think that these changes alone will magic away the recruitment challenge, and encourage all those sitting deputies to step up? Probably not – there’s a lot more that needs to be done by DfE ministers to change their tone in that respect… But it would certainly go some way to reducing the risk that we might one day end up with a nation of sitting deputies!

The danger of the ‘expert’

Back in 1998, Andrew Wakefield caused a stir when he produced a long-since-discredited article suggesting a link between the MMR vaccine and autism. What followed included a long-running debate, the retraction of the article by the publishing journal, and the striking off of the surgeon behind it.

But the damage was done. The fact that the doctor behind the report has since been struck off has not removed the problem from history. The reality is that when an ‘expert’ speaks, the average listener does not question their credentials or carry out their own research: we rely on those offered up as experts to do that work for us and to guide the rest of us.

Now, my point is far less serious than that posed by the MMR controversy, and the mention of it merely illustrative. But there is an issue with ‘experts’ in our profession offering solutions which may actually cause more harm than good.

This week I was attending a conference in London hosted by The Key, at which I was speaking about my approach to assessment without levels. There were other schools represented, also sharing their own models, some of which I thought brilliant, others of which were – to my mind – awful. If nothing else, such events serve to ground me and make clear that I am no oracle.

However, at the same event, David Driscoll – an education consultant who also works as an additional inspector and as an “associate education expert” for The Key – was asked to speak on the topic of “Inspection of Assessment”. He was listed in the brochure as an expert, and the intention was clearly to provide leaders with guidance on managing new assessment systems in an Ofsted-friendly way.

Now, the fact that such reassurance is needed suggests that Ofsted’s messages about not having a pre-determined view on what assessment systems should look like are not yet trusted by the profession; the presentation given by this particular lead inspector demonstrates exactly why that is the case!

To his credit, Mr Driscoll did at least once state the official Ofsted view. However, he then proceeded to explain to delegates that data ought to be presented in standard forms, and that schools would be best advised to keep levels and simply re-write the descriptors.

I was astounded.

He continued to explain that schools needed to choose a starting and end measure and define a fixed measure of expected progress, saying that “you need a number”. Now, perhaps this was evidence of his own limited concept of how assessment and progress works, but it certainly isn’t a message that fits in with the direction of travel in education at the moment. Or at least, it oughtn’t be. But, of course, the problem is that he is “the expert”. And every headteacher there will have had at the back of their mind the realisation that he could have been the lead inspector of their next Ofsted visit.

Worryingly, he also stated with some aplomb that there was only one statutory requirement for reporting to parents (namely that we report on the progress a child has made). Clearly he isn’t familiar with The Education (Pupil Information) (England) Regulations 2005.

This was different to the usual doubts I might have about another school’s approach to assessment without levels. This was not some practising school leader musing on his current thinking (in fact, it appears from his website that Mr Driscoll hasn’t taught for over 25 years). This was someone presented as an expert, offering guidance on how data ought to be managed and presented for the purposes of Ofsted. It was advice that was likely to take priority over much of the other content of the day (including excellent presentations from people such as Katharine Bailey of the CEM at Durham).

It’s true, the problem is not a patch on the risks of poor advice about vaccinations or such things. But the root of the problem is the same: ‘experts’ with a poor message can present more danger than no message at all.

I live in hope that the Assessment Commission set up before the election soon helps to bring some guidance to the profession that quashes the nonsense spouted by ‘experts’ such as this, and ensures that Ofsted is supported to keep its inspectors in line!


For teachers unsure of how best to move forward with assessment, I cannot recommend strongly enough the article by Dylan Wiliam in Teach Primary magazine from last autumn:

Planning Assessment without levels – Dylan Wiliam

Is Ofsted leading schools to misdirect their energies?

There is much to be said for Ofsted’s willingness to change over recent years, and for its recognition of the limitations of its capability. Its decision to bring all inspectors in-house should probably be welcomed; its abandonment of lesson gradings has been widely praised… but is it actually achieving its purpose of raising standards?

As both inspections and reports become briefer, there is a risk that the guidance schools are given on improvements may, rather than raising standards, actually serve to distract a school from the work of improving its provision. After all, 10 hours is barely long enough to get any idea of what a school is like, let alone to work out accurately what it needs to do to improve. Yet, for some reason, inspection reports now insist on setting out what needs to be done.

This is a relatively recent phenomenon, and one that seems only to have arisen as inspections have shortened. Take one school as an example – a primary school in my hometown. When inspected in 2004 it was satisfactory; ten years later it requires improvement. Reading the reports suggests that the reasons are similar in both cases: progress in core subjects was not good enough (and hence outcomes not high enough, given the favourable intake).

In 2004 it was inspected by 5 inspectors over 3 days (15 inspector-days in total, itself a reduction from earlier inspections); in 2014 it had just 3 inspectors for 2 days – less than half the time. In 2004, inspectors limited themselves to indicating what needed to be improved, based on that more thorough inspection: it was for the governors (supported by the professionals who knew the school well) to set out a plan of how this was to be achieved:

[Excerpt from the 2004 inspection report]

Compare this to the 2014 inspection, where after just 6 inspector-days of work it seems that Ofsted feels that it can tell exactly what needs to be done:

[Excerpt from the 2014 inspection report]

Notice that the essential problem was the same: children were deemed to be making insufficient progress from their starting points. In the former case, it was for the school to set about improving that: Ofsted merely reported what it found. By 2014 Ofsted seems to see its role as directing those improvements.

This is almost certainly an understandable reaction to claims that Ofsted merely sat in judgement and failed to support schools to improve. However, does this really achieve that?

It strikes me that if children are not making enough progress during their primary years then the issues may well run deeper than making sure they’ve understood tasks in lessons and responding to marking. In fact, I’d argue that the first bullet point would be a ridiculous claim to make on the basis of a few lesson observations over 2 days. But isn’t that exactly the problem? That’s all the inspectors had to go on.

And so, no doubt, that school will now be investing its time and efforts into the bullet points put forward by Ofsted. When inspectors next return, tasks will be well-explained (although not necessarily well-chosen or used), mini-plenaries will abound to check that children know what they’re doing (although not necessarily learning), a new marking policy will have been developed (with the resulting dialogue, despite the recent clarification) and leaders will be checking on the quality of teaching and learning… by checking that tasks are being explained and mini-plenaries used.

Nowhere is there any advice that the school might look at the quality of its curriculum provision, or evaluate the relative strengths and weaknesses of its teaching and set out a plan accordingly. No: Ofsted has made its judgements on the basis of a few drop-ins, and that will now direct the school’s efforts for the next 2 years.

The fact is, two days is not long enough for an inspection team to ascertain what needs to be done to improve provision in a school. If it were, being a headteacher would be easy; consultants would be redundant; school improvement would be a picnic. By imagining that an inspection team has the knowledge or understanding of a school’s situation to effect improvements, we are fooling ourselves. And by letting inspectors dictate the direction of school improvement, how much time is being wasted in schools up and down the country making changes to meet the bullet points, rather than to improve provision?

Increasingly it is becoming clear that flying inspection visits are not adequate for the real detail of school improvement; they can provide but a snapshot – even over a week. That’s not to say that the snapshot might not be useful; merely to note that an identification of the issues is not necessarily enough to propose a cure.

Maybe a medical model is worth considering? Inspectors can do a fair job as General Practitioners: brief check-ups and dealing with minor ailments, but where a school really needs improvement, perhaps it should be referred to the appropriate specialist for further examination and treatment. Otherwise we risk simply issuing the same simplistic treatments to everyone for everything.


Doubtless in many other schools there are teachers who know that they’re focussing on the wrong things because of Ofsted ‘bullet points’ – I’d welcome your comments telling me about them (anonymous comments welcome)

Primary Education: a year in review

No-one could deny that the past few years have been years of momentous change in education policy, and 2014 continued that trend. Any attempt to review it is bound to miss things, but I thought I’d give it a go, all the same. It was the year in which book scrutinies became the new lesson observations, and pay went up just as Gove went down.

January
The year began (if, dear teacher, you can conceive of a year beginning in January) with plenty of turmoil to deal with. Teachers were trying to get to grips with the new National Curriculum in preparation for September, while leaders were battling with the proposed introduction of Free School Meals for infants. Things were not great for the department either – it was confirmed that one of the first primary free schools was to be shut down.
The obligatory Ofsted changes revolved largely around Behaviour & Safety.

February
The NAHT released its report into the new landscape for assessment in schools. Having known for some time that levels were to be dropped, the department still hadn’t come up with any indication of what was to replace them, leaving schools somewhat in the lurch.
The STRB also published a report which led us to breathe a sigh of relief. The body had turned down Gove’s suggestions that limits on working hours be scrapped along with specified PPA and cover limits. It almost felt like a turning point.
Finally, the DfE released the long-awaited survey into teachers’ working hours showing that primary school teachers work an average of a 59-hour week. No surprises there then.

March
As teachers from the NUT were on strike again, the department finally got around to telling us what assessment would look like at the end of KS1 and KS2 – although as ever it raised more questions than answers, some of which still remain! We also heard from Ofsted the first of its proposals to reduce the stakes in inspections of existing Good/Outstanding schools.

April
Liz Truss used a speech to proclaim the new freedoms being given to schools, including on assessment, seemingly ignorant of the restrictions that the latest release would bring. She also reignited the debate about textbooks with a simplistic call in support. The QTS debate rose once again – rather tiresomely.
The obligatory Ofsted changes revolved largely around tweaks to subsidiary guidance.

May
As Year 6s sat down for tests with misprints on the instructions and not a calculator in sight, panic started to set in for all schools about the September to come. It also became clear that it wasn’t only classroom teachers who lacked answers: the DfE didn’t seem that sure of many things, either.

June
As the rumblings in Birmingham continued, we started to hear the first mootings of the need for teaching ‘British values’ in our schools. We also heard that from 2019 there would be new infant league tables! It was also the month in which we heard that we would get a 1% pay increase this year – with some exceptions!

July
Where were you?
There can’t be many teaching colleagues who don’t recall the day that the news came that Gove was gone. As if July weren’t sweet enough in schools, the news led to shouts of joy, messages on staffroom whiteboards and even the occasional cake, I gather.
Interestingly, the department chose the same day to release the sample questions for the new KS1 and KS2 tests, perhaps in the hope we wouldn’t notice how mean they seemed!

August
Traditionally a quiet month for news, and not least in education. For some reason, this is when I discovered that the NAHT had quietly released its excellent model framework for assessment without levels.
The obligatory Ofsted changes were a more significant overhaul, including the scrapping of lesson observation grades.

September
The real new year began with a new curriculum, new SEND code of practice, the free school meals fiasco and plenty of other changes to contend with. Ofsted started launching no-notice inspections for a host of reasons, including not having the right information on your school website! It also became increasingly clear that the pace of change regarding the curriculum had been too quick even for the DfE, leading to several cock-ups.

October
An election must be looming. October saw the launch of the DfE’s Workload Challenge survey, and Ofsted released its clarification document (along with a consultation on more major changes to inspection). The DfE also announced the Early Years Pupil Premium.

November
The release of funding information showed that secondary school funding continues to be massively higher than primary funding. More pre-election promises included the continuation of higher rates of Pupil Premium funding for primary schools.

December
It’s hard to know what’s worst: the new-found need for teaching ‘character’ in school, or the need for government to get involved in the government-free College of Teaching, or the fact that this month brought yet another list of changes to inspection published on the last day of term. Roll on 2015 and an election year for more fun and games?

Perhaps we should be grateful that Ofsted at least thinks we’re not as bad as secondary schools.

When will someone at Ofsted say “Stop”?

Those of a broadly similar age to me may well remember the fake ads in the middle of episodes of the Fast Show.

Do you like cheese?
Do you like peas?
Then you’ll love… Cheesy Peas!

A classic case of having too much of a good thing – or at least, the wrong combinations of “good things”. The parallels with Ofsted may not be immediately clear – but let me eke out an analogy all the same.

Just this week on Twitter, @cazzypot shared her excellent blog on the latest nonsense of a tick-box for ‘British Values’. I asked the DfE to consider it as evidence for their Workload Challenge, which, to their credit, they did. I did so because it is yet another example of schools adding to workload and systems for the sake of evidence.

But how does this link to cheesy peas? Bear with me.

To be fair to the inspectorate, they are often not as responsible for ‘expecting’ schools to do things as some might think or claim. Indeed, they have gone so far as to release a clarification of what they don’t expect. But that will never be enough. Because all the time schools are being praised for what they do do, and criticised for what they don’t do, there is no incentive for schools to reduce requirements. Indeed, every time an Ofsted report praises something, it is likely that such a task or approach will be added to the workload of teachers in other nearby schools. And when a report criticises another school for failing to do something – lo and behold, every other nearby school will add another new task to its list.

The problem is, like cheese and peas, simply adding more and more ‘Good Things’ doesn’t automatically produce a better outcome. Many schools are doing good things, and rightly that gets recognised. Many schools are wasting time doing pointless things: expecting detailed lesson plans, unwieldy evaluation pro formas, ridiculous pseudo-scientific ideas, and so on. But until an Ofsted report points such things out as being unnecessary, or even burdensome, what incentive or direction is there for leadership teams to reduce the demands?

Of course, as I have said before, school leaders should take some of the blame. But the system doesn’t help them to differentiate between what is necessary, and what is gimmicky, but might garner a tick on the Ofsted form.

Just because a school where they happen to use cheese is doing well, and another where they happen to use peas is also doing well, doesn’t automatically imply that all schools ought to be using Cheesy Peas.

But who will be the first Ofsted inspector brave enough to tell a school to stop doing something?

The trouble with Ofsted and marking…

Alongside other news on education research in the press today comes an article in the TES about marking. According to the TES blog, Alex Quigley (@HuntingEnglish) argues in it that we cannot wholly blame Ofsted for the current workload demands of marking and feedback in schools. I’ll confess that I’ve not yet read the article in the paper, so I don’t intend to challenge this directly, but I do want to explain why I think Ofsted continues to be a driver of workload in this area, and perhaps how this reflects some of its deeper flaws.

Firstly, there can be no doubt that Ofsted reports have increasingly identified marking or feedback as an area for improvement in their recommendations. In fact, it’s quite hard to track down an Ofsted report which doesn’t recommend an improvement in marking and/or feedback, and harder still to find one which praises the quality of marking. Even among Outstanding school inspections, feedback on feedback is mixed at best. Of the 18 schools currently listed on Watchsted as having a recent Outstanding grade (including, therefore, Outstanding Teaching), just four list marking/feedback as a strength, with a fifth indicating that it is “not Outstanding”.

The limitations of Watchsted meant I could only look at the 10 most recent reports for the other grades, but in every case all 10 showed feedback as a recommendation rather than a strength. It seems that even where schools are graded as Good or Outstanding, it’s difficult to get inspectors to praise marking.

One Outstanding school is hit with both praise and criticism on the matter:

Pupils are given clear guidance on how to improve their work or are set additional challenges in literacy and mathematics. This high quality feedback is not always evident in other subjects.

Ofsted report for Acresfield Community Primary School, Chester

The school is challenged to raise the standards of marking in other subjects to meet the high quality in the core areas.

Another school’s report, which praises the quality of marking in the recommendations, also contains a sting in its tail:

Marking, although not outstanding, promotes an increasingly consistent, and improving high-quality dialogue between teachers and pupils.

Ofsted report for Waddington All Saints Primary School, Lincoln

Later in the report comes the recommendation that the school “Accelerate pupils’ progress even more by ensuring that the marking of pupils’ work consistently promotes even higher quality dialogue between teachers and pupils in all classes.” And this is not an old report; the inspection took place this month!

Is it perhaps the case that marking and feedback has become the ‘go-to’ recommendation for inspectors needing to justify an outcome, or to find a recommendation to make? Can it really be the case that only 4 of the last 200 primary and secondary schools inspected have marking and feedback of sufficiently high quality to note it as a strength? Or that it is near impossible to find a school that doesn’t need to significantly improve its marking and feedback?

Here lies the problem with the recent clarification document from Ofsted: it’s all very well saying that inspectors won’t expect to see “unnecessary or extensive written dialogue”, but how does that sit with the recommendation that a school needs to promote “an increasingly consistent, and improving high-quality dialogue between teachers and pupils”? Where do we draw the line between the two?

The reality here lies perhaps somewhere deeper. Are we asking too much of our Ofsted teams? It’s very easy to spot that a school is not achieving results in line with predictions or expectations; it’s surely much harder to diagnose the causes and recommend a cure.

My own most recent experience of Ofsted was an inspection in which I recognised the outcomes (i.e. area grades) as accurate, but the recommendations as way off the mark. As has become commonplace, alongside our overall grade of Good, marking and feedback was raised as an area to improve, despite the fact that I – and colleagues – felt that other things were more pressing. Nevertheless, the nature of the system meant that feedback then became a focus of the school, perhaps at the cost of other more important matters.

The problem is exacerbated for schools which are in need of improvement. The race to complete a report in 2 days doesn’t allow thorough diagnosis of the needs of the school, and even then the needs are seemingly reduced to a few bullet points. Any nuance or detail is lost, and it is left to a completely separate HMI to review progress against the targets set. And what better way to show an HMI that marking is improving than to ramp up the quantity?

In discussing this today, Bill Lord (@joga5) quite rightly pointed out that the EEF Toolkit emphasises feedback as one of the key areas to support progress (particularly in relation to Pupil Premium funding, one presumes), and yet even their page quite clearly states on the matter that:

Research suggests that [feedback] should be specific, accurate and clear; encourage and support further effort and be given sparingly so that it is meaningful; provide specific guidance on how to improve and not just tell students when they are wrong; and be supported with effective professional development for teachers.

In his article, Alex Quigley mentions “stories of teachers being forced to undertake weekly marking, regardless of the stage of learning or the usefulness of feedback”. In primary schools it is now common to expect that books are marked daily, and in many cases feedback given as often. The focus here is clearly on the expectations of Ofsted, rather than on the value of the process.

Alex Quigley might be right: we can’t blame Ofsted entirely for this; school leaders do need to take some responsibility and be brave enough to stand up to inspectors who get this wrong. But at the moment, the power is all rather on one side and the consequences fall rather heavily on the other.

It’s a brave school leader who sticks his head above the parapet.

What that Ofsted clarification should have said!

There was much to welcome in the recent note of clarification from Ofsted, and may it be publicised widely. However, to my mind there is still much that wasn’t said that ought to be. Of course, whether the chiefs at Ofsted agree with me remains to be seen.

Here’s what I’d have liked to have seen:

Teaching

  • Ofsted should not expect to see lessons differentiated a set number of ways. Inspectors are interested in whether or not the work is an appropriate challenge for all pupils; the number of groups within this will depend on the circumstances. Not all lessons require differentiation.
  • Ofsted should not expect to see children writing learning objectives. While it is often important that objectives are shared with children, nothing is added by forcing them to copy the objectives out at length.
  • Ofsted should not expect to see written evidence of all lessons in exercise books. Some lessons do not require written evidence; writing in learning objectives, or explanations of what was undertaken in a lesson, is an unnecessary waste of time.

Marking & Target-setting

  • Ofsted should not expect to see evidence of marking of every piece of work. It is for schools to decide appropriate policies for marking and feedback, and the focus should be on impact, rather than evidence for outside bodies.
  • Ofsted should not expect to see written marking in the books of children for whom reading is at a very early stage. If it cannot directly impact on a child’s learning then it is time and effort poorly-spent.
  • Ofsted should not expect children to be able to recite their targets in every subject. While it is important that children know how to improve their work, there are many ways in which this can be achieved.
  • Ofsted should not expect children to know their ‘level’ in any subject.
  • Ofsted should not expect schools to update tracking data every six weeks (or other fixed interval). Tracking is not the same as assessment, and while on-going assessment is essential for effective teaching, tracking is only an administrative tool for leaders. Tracking should be as frequent as needed for effective leadership of the school, and no more frequent.

Classrooms

  • Ofsted should not expect to see identical consistency across all classrooms in a school. Departments and year teams quite rightly adapt school approaches to suit the needs of their subjects or pupils.
  • Ofsted should not expect pupils in measured groups to be identified in any way in the classroom. Students eligible for the Pupil Premium, or in local authority care, should not be publicly singled out.

It’s not an unreasonable list, is it? I will, naturally, waive all copyright demands should Ofsted wish to copy my ideas and add them to their document!

Whose data is it anyway?

I caused a bit of an upset today. As too easily happens, I saw a conversation via Twitter that raised concerns with me, and I rushed in with 140 characters of ill-thought-through response.

Some very knowledgeable experts in the field of school data management were trying – quite understandably – to get their heads round how a life after levels will look in terms of managing data and tracking in schools. As David Pott (@NoMoreLevels) put it: “trying to translate complex ideas into useable systems”.

My concern is that in too many cases, data experts are being forced to try to find their own way through all this, without the expert guidance of school leaders (or perhaps more importantly, system leaders) to highlight the pitfalls, and guide the direction of future developments. That’s not to say that the experts are working blind, but rather that they are being forced to try to work out a whole system of which they are only a part.

Of course, the problem is that without first-hand knowledge of some of those areas, the data experts are forced to rely on their knowledge of what went before. And as seems to be the case in so many situations at the moment, we run the risk of creating a system that simply mirrors the old one, flaws and all. We need to step back and look at the systems we actually need to help our schools to work better in the future. And as with all good design projects, it pays to consider the needs of the end user. Inevitably, with school data, there are always too many users!

Therefore, here is my attempt – very much from a primary perspective, although I daresay there are many parallels in secondary – to consider who the users are of data and tracking, and what their needs might be in our brave new world.

The Classroom Teacher

This is the person who should be at the centre of all discussions about data collection. If it doesn’t end up linking back to action in the classroom, then it is merely graph-plotters plotting graphs.

In the past, the sub-level has been the lot of the classroom teacher. Those meaningless subdivisions which tell us virtually nothing about the progress of students, but everything about the way in which data has come to drive the system.

As a classroom teacher, I need to know two things: which children in my class can do ‘X’, and which cannot? Everything else I deal with is about teaching and learning, be that curriculum, lesson planning, marking & feedback, everything. My involvement in the data system should be about assessment, not tracking. I have spoken many times about this: Tracking ≠ Assessment

Of course, at key points, my assessment should feed into the tracking system, otherwise we will find ourselves creating more work. But whether that be termly, half-termly or every fortnight, the collection of data for tracking should be based on my existing assessment records, not required in addition to them.

We have been fed a myth that teachers need to “know their data” to help their students make progress. This is, of course, nonsense. Knowing your data is meaningless if you don’t know the assessments that underpin it. Knowing that James is a 4b tells you nothing about what he needs to do to reach a 4a. A teacher needs to know their assessments: whether or not James knows his tables, or can carry out column subtraction, or understands how to use speech marks. None of this is encapsulated in the data; it is obscured by it.

My proposal is that classroom teachers use a Key Objectives model for assessing against specific objectives. Pleasingly, the NAHT appear to agree with me.
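To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration) of what a Key Objectives record might hold. The objectives, names and structure are all hypothetical – this is not the NAHT model or any real school system:

```python
# A sketch only: the objectives and names below are invented for illustration.
class_record = {
    "James": {
        "knows multiplication tables to 12 x 12": True,
        "carries out column subtraction reliably": False,
        "uses speech marks correctly": True,
    },
    "Amina": {
        "knows multiplication tables to 12 x 12": True,
        "carries out column subtraction reliably": True,
        "uses speech marks correctly": True,
    },
}

def next_steps(child):
    """The information a teacher actually acts on: which objectives
    this child has not yet secured."""
    return [obj for obj, secure in class_record[child].items() if not secure]

print(next_steps("James"))
# -> ['carries out column subtraction reliably']
```

The point is that the teacher’s record answers the classroom question directly – who can do ‘X’ and who cannot – rather than collapsing it into a label like “4b”.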

Students

Children do not need to know where they are on a relative scale compared to their peers, or to other schools nationally. What matters to children in classrooms is that they know what they can do, what they need to do next, and how to do that. All of that comes directly from teachers’ assessments, and should have no bearing on data and tracking (or perhaps, more importantly, the methods of tracking should have no bearing on a child’s understanding of their own attainment).

Too many schools have taken the message about students knowing where they are and what to do next as an indication that they should be told their sub-level. This doesn’t tell children anything about where they are, and much less about what to do next.

The School Leader

For a department, year team or senior leader, it is very rarely feasible to have a handle on the assessment outcomes for individual students; that is not their role.

This is the level at which regular tracking becomes important. It makes sense for a tracking system to highlight the numbers of children in any class group who are on-track – however that might be measured. It might also highlight those who are below expectations, those who are above, or those who have made slower progress. It should be possible, again, for all of this to come from the original assessments made by teachers in collated form.

For example, if using the Key Objectives approach, collation might indicate that in one class after half a term, 85% of students have achieved at least 20% of the key objectives, while a further 10% have achieved only 15% of the objectives, and some 5% are showing as achieving less than that. This would highlight the groups of children who are falling behind. It might be appropriate to “label” groups who are meeting, exceeding, or falling below the expected level, but this is not a publication matter. It is for school tracking. There is nothing uncovered here that a classroom teacher doesn’t already know from his/her assessments. There is nothing demonstrated here that impacts on teaching and learning in classrooms. It may, however, highlight system concerns, for example where one class is underperforming, or where sub-groups such as those receiving the Pupil Premium are underperforming. Once these are identified, the focus should move back to the assessment.
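Purely as a sketch of how that collation might work, the summary below is computed directly from teachers’ records of the kind sketched earlier. The 20%/15% cut-offs simply echo the illustrative figures above – they are not recommended values, and the function names are my own invention:

```python
# Illustrative only: thresholds echo the worked example above, and the records
# mirror the class_record structure sketched earlier - not a real system.

def proportion_met(record):
    """Fraction of the key objectives a child has secured so far."""
    return sum(record.values()) / len(record)

def collate(class_records, on_track=0.20, concern=0.15):
    """Summarise a class by proportion of objectives met after half a term."""
    buckets = {"on track": 0, "slightly behind": 0, "falling behind": 0}
    for record in class_records.values():
        p = proportion_met(record)
        if p >= on_track:
            buckets["on track"] += 1
        elif p >= concern:
            buckets["slightly behind"] += 1
        else:
            buckets["falling behind"] += 1
    total = len(class_records)
    return {label: f"{100 * count / total:.0f}%"
            for label, count in buckets.items()}
```

The design point of the sketch is that nothing new is collected: the leader’s summary is computed entirely from the assessment records the teacher already keeps.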

In the past, the temptation was to highlight the percentage of children achieving, say, L4, and then to set a target to increase that percentage, without any consideration of why those children were not yet achieving the level. All of these targets and statements must come back to the assessment and the classroom teacher.

Of course, senior leaders will also want to know the number of children who are “on-track” to meet end-of-key-stage expectations. Again, it should be possible to collate this based on the assessment processes undertaken in the classroom.

What is *not* required, is a new levelling system. There is no advantage to new labels to replace the old levels. There is no need for a “3b” or “3.5” or any other indicator to show that a student is working at the expected level for Year 3. Nobody needs this information. We have seen how meaningless such subdivisions become.

Of course, the devil is in the detail. What percentage of objectives would need to be met to consider a child to be “on track” or working at “age-related expectations”? Those are professional questions, and it is for that reason that it is all the more important that school and system leaders are driving these discussions, rather than waiting for data experts to provide ready-made solutions.

Ofsted

Frankly, we shouldn’t really need to consider Ofsted as a user of data, but the reality is that we currently still do. That said, their needs should be no different from those of school leaders. They will already have the headline data for end-of-key-stage assessments. All they should need to know from internal tracking and assessment is:

  1. Is the school appropriately assessing progress to further guide teaching and learning?
  2. Is the school appropriately tracking progress to identify students who need further support or challenge?

The details of the systems should be of no concern to Ofsted, so long as schools can satisfy those two needs. There should be no requirement to produce the data in any set form or at any specific frequency. The demands in the past that schools produce half-termly (or more frequent!) tracking spreadsheets of levels cannot be allowed to return under the new post-levels systems.

Parents

Parents were clearly always the forgotten party in the old system, and whether or not you agree with the DfE’s assessment that parents found levels confusing, the reality is that the old system was obscure at best. It told parents only roughly where their child stood in comparison to other students. It gave no indication of the skills their child had, or their gaps in learning.

For the most part, the information a parent needs about their child’s learning is much the same as that which their child needs: the knowledge of what they can and can’t do, and what their next steps are. Of course, parents may be interested in a child’s attainment relative to his/her age, and that ought to be evident from the assessment. Equally, they may like to see how their child has progressed, and again, assessment against key objectives demonstrates that amply.


So where next?

We are fortunate in English schools to be supported by so many data experts with experience of the school system. However, they should not – indeed they must not – be left to try to sort out this sorry mess alone. School leaders and system leaders need to take the lead here: taking control of the professional discussions about what we measure when we’re assessing, and about what we consider to be appropriate attainment based on those assessments. Only then can the data experts who support our schools really create the systems we need to deliver on those intentions.