
Sins of the classroom revisited…

Personalisation and differentiation are the order of the day, aren't they?

Imagine a year five classroom, where the teacher has identified the progress every child has made, and tailored the lesson for each of them. When the main input begins, some are straight off to a task, while others stay on for extra support after the teaching. In the main session some are working in small groups, others work independently but on a common task, while some have a specific task just for them to meet their needs, whether more or less able. Some form of self-assessment lets the children identify their own success, and when children struggle they can call on professional support for near-immediate personalised feedback; when work is progressing well, a new challenge is set.

Sounds almost too good to be true, doesn't it? An outstanding lesson, maybe?

It describes some of my year five maths lessons pretty well. But I don’t mean my, naturally, excellent teaching… I’m talking about me, as a second-year in middle school under the tutelage of Miss Ashworth. With the aid of Peak Maths. The form of self-assessment we used was checking the answer book. And the personalised feedback? We could line up at the desk for help when we were going wrong.

We wouldn’t countenance it, would we? It’d be lucky to scrape RI, let alone outstanding. Can you imagine offering such an experience up for your appraisal observation? Although I can’t quite put my finger on why.

Now, I’m not proposing an “it never did me any harm” line here. I was only 9, so maybe I missed some aspect that would make it unacceptable. But doesn’t it show something about where we’ve ended up? There are plenty of schools that would happily demand 5 different activities for different abilities – why not 6… or 12? You could do that with Peak. The kids who got it could get on; those who needed more support got it. It sounds almost idyllic.

Of course, reality is that times change, and every system has its flaws. But is it just possible that there were some positives here?

Would a couple of minutes waiting for personalised help in a queue (a sight you’d never see in today’s Ofsted-ready classrooms) be a better use of time than children working on a low-level task to ensure that enough levels of differentiation were evident?

Would a bit of time working from a text book be more constructive than rolling dice to make questions, the answers to which your teacher won’t check until the next day?

Might the opportunity for immediate feedback for all help to reduce the marking workload of teachers while supporting the learning of children?

I’m not suggesting we all run every lesson like this – Miss Ashworth certainly didn’t. But might we do well to stop and question whether or not our “improvements” are quite as great as we imagine? Maybe the occasional queue at the teacher’s desk might not be such a bad thing?

Is Ofsted leading schools to mis-direct their energies?

There is much to be said for Ofsted’s willingness to change over recent years, and for its recognition of the limitations of its capability. Its decision to bring all inspectors in-house should probably be welcomed; its abandonment of lesson gradings has been widely praised… but is it actually achieving its purpose of raising standards?

As both inspections and reports become briefer, there is a risk that the guidance schools are given on improvements may, rather than raising standards, actually serve to distract a school from the work of improving its provision. After all, 10 hours is barely long enough to get any idea of what a school is like, let alone to accurately work out what it needs to do to improve. Yet, for some reason, inspection reports now insist on setting out what needs to be done.

This is a relatively recent phenomenon, and one that seems only to have arisen as inspections have shortened. Take one school as an example – a primary school in my hometown. When inspected in 2004 it was satisfactory; ten years later it requires improvement. Reading the reports suggests that the reasons are similar in both cases: progress in core subjects was not good enough (and hence outcomes not high enough given the favoured intake).

In 2004 it was inspected by 5 inspectors over 3 days (15 inspector-days in total, still a reduction from earlier inspections); in 2014 it had just 3 inspectors for 2 days – less than half the time. In 2004, inspectors limited themselves to indicating what needed to be improved, based on their more thorough inspection: it was for the governors (supported by the professionals who knew the school well) to set out a plan of how this was to be achieved:


Compare this to the 2014 inspection, where after just 6 inspector-days of work it seems that Ofsted feels that it can tell exactly what needs to be done:


Notice that the essential problem was the same: children were deemed to be making insufficient progress from their starting points. In the former case, it was for the school to set about improving that: Ofsted merely reported what it found. By 2014 Ofsted seems to see its role as directing those improvements.

This is almost certainly an understandable reaction to claims that Ofsted merely sat in judgement and failed to support schools to improve. However, does this really achieve that?

It strikes me that if children are not making enough progress during their primary years then the issues may well run deeper than making sure they’ve understood tasks in lessons and responding to marking. In fact, I’d argue that the first bullet point would be a ridiculous claim to make on the basis of a few lesson observations over 2 days. But isn’t that exactly the problem? That’s all the inspectors had to go on.

And so, no doubt, that school will now be investing its time and efforts into the bullet points put forward by Ofsted. When inspectors next return, tasks will be well-explained (although not necessarily well-chosen or used), mini-plenaries will abound to check that children know what they’re doing (although not necessarily learning), a new marking policy will have been developed (with the resulting dialogue, despite the recent clarification) and leaders will be checking on the quality of teaching and learning… by checking that tasks are being explained and mini-plenaries used.

Nowhere is there any advice that the school might look at the quality of its curriculum provision, or evaluate the relative strengths and weaknesses of its teaching and set out a plan accordingly. No: Ofsted has made its judgements on the basis of a few drop-ins, and that will now direct the school’s efforts for the next 2 years.

The fact is, two days is not long enough for an inspection team to ascertain what needs to be done to improve provision in a school. If it were, being a headteacher would be easy; consultants would be redundant; school improvement would be a picnic. By imagining that an inspection team has the knowledge or understanding of a school’s situation to effect improvements, we are being fooled. And by letting inspectors dictate the direction of school improvement, how much time is being wasted in schools up and down the country in making changes to meet the bullet points, rather than to improve provision?

Increasingly it is becoming clear that flying inspection visits are not adequate for the real detail of school improvement; they can provide but a snapshot – even over a week. That’s not to say that the snapshot might not be useful; merely to note that an identification of the issues is not necessarily enough to propose a cure.

Maybe a medical model is worth considering? Inspectors can do a fair job as General Practitioners: brief check-ups and dealing with minor ailments, but where a school really needs improvement, perhaps it should be referred to the appropriate specialist for further examination and treatment. Otherwise we risk simply issuing the same simplistic treatments to everyone for everything.

Doubtless in many other schools there are teachers who know that they’re focussing on the wrong things because of Ofsted ‘bullet points’ – I’d welcome your comments telling me about them (anonymous comments welcome).

When will someone at Ofsted say “Stop”?

Those of a broadly similar age to me may well remember the fake ads in the middle of episodes of the Fast Show.

Do you like cheese?
Do you like peas?
Then you’ll love… Cheesy Peas!

A classic case of having too much of a good thing – or at least, the wrong combinations of “good things”. The parallels with Ofsted may not be immediately clear – but let me eke out an analogy all the same.

Just this week on Twitter, @cazzypot shared her excellent blog on the latest nonsense of a tick-box for ‘British Values’. I asked the DfE to consider it as evidence for their Workload Challenge, which to their credit, they did. I did so, because it is yet another example of schools adding to workload and systems for the sake of evidence.

But how does this link to cheesy peas? Bear with me.

To be fair to the inspectorate, they are often not as responsible for ‘expecting’ schools to do things as some might think or claim. Indeed, they have gone so far as to release a clarification of what they don’t expect. But that will never be enough. Because all the time schools are being praised for what they do do, and criticised for what they don’t do, there is no incentive for schools to reduce requirements. Indeed, every time an Ofsted report praises something, it is likely that such a task or approach will be added to the workload of teachers in other nearby schools. And when a report criticises another school for failing to do something – lo and behold, every other nearby school will add another new task to its list.

The problem is, like cheese and peas, simply adding more and more ‘Good Things’ doesn’t automatically produce a better outcome. Many schools are doing good things, and rightly that gets recognised. Many schools are wasting time doing pointless things: expecting detailed lesson plans, unwieldy evaluation pro formas, ridiculous pseudo-scientific ideas, and so on.  But until an Ofsted report ever points such things out as being unnecessary, or even burdensome, what incentive or direction is there for leadership teams to reduce the demands?

Of course, as I have said before, school leaders should take some of the blame. But the system doesn’t help them to differentiate between what is necessary, and what is gimmicky, but might garner a tick on the Ofsted form.

Just because a school where they happen to use cheese is doing well, and another where they happen to use peas is also doing well, doesn’t automatically imply that all schools ought to be using Cheesy Peas.

But who will be the first Ofsted inspector brave enough to tell a school to stop doing something?

The trouble with Ofsted and marking…

Alongside other news on education research in the press today, comes an article in the TES about marking. According to the TES blog, in it Alex Quigley (@HuntingEnglish) argues that we cannot wholly blame Ofsted for the current demands on workload of marking and feedback in schools. I’ll confess that I’ve not yet read the article in the paper, so I don’t intend to challenge this directly, but I do want to explain why I think Ofsted continues to be a driver of workload in this area, and perhaps how this reflects some of its deeper flaws.

Firstly, there can be no doubt that increasingly Ofsted reports have identified marking or feedback as an area for improvement in their recommendations. In fact, it’s quite hard to track down an Ofsted report which doesn’t recommend an improvement in marking and/or feedback, harder still to find one which praises quality of marking. Even among Outstanding school inspections, feedback on feedback is mixed at best. Of the 18 schools currently listed on Watchsted as having a recent Outstanding grade (including, therefore, Outstanding Teaching), just four list marking/feedback as a strength, with a fifth indicating that it is “not Outstanding”.

The limitations of Watchsted meant I could only look at the 10 most recent reports for other categories, but in every case all 10 examples showed that feedback was a recommendation, rather than a strength. It seems that even where schools are graded as Good or Outstanding, it’s difficult to get inspectors to praise marking.

One Outstanding school is hit with both praise and criticism on the matter:

Pupils are given clear guidance on how to improve their work or are set additional challenges in literacy and mathematics. This high quality feedback is not always evident in other subjects.

Ofsted report for Acresfield Community Primary School, Chester

The school is challenged to raise the standards of marking in other subjects to meet the high quality in the core areas.

Another school’s report, which praises the quality of marking in the recommendations, also contains a sting in its tail:

Marking, although not outstanding, promotes an increasingly consistent, and improving high-quality dialogue between teachers and pupils.

Ofsted report for Waddington All Saints Primary School, Lincoln

Later in the report comes the recommendation that the school “Accelerate pupils’ progress even more by ensuring that the marking of pupils’ work consistently promotes even higher quality dialogue between teachers and pupils in all classes.” And this is not an old report; the inspection took place this month!

Is it perhaps the case that marking and feedback have become the ‘go-to’ recommendation for inspectors needing to justify an outcome, or to find a recommendation to make? Can it really be the case that only 4 of the last 200 primary and secondary schools inspected have sufficiently high-quality marking and feedback to note it as a strength? Or that it is near impossible to find a school that doesn’t need to significantly improve its marking and feedback?

Here lies the problem with the recent clarification document from Ofsted: it’s all very well saying that inspectors won’t expect to see “unnecessary or extensive written dialogue”, but how does that sit with the recommendation that a school needs to promote “an increasingly consistent, and improving high-quality dialogue between teachers and pupils”? Where do we draw the line between the two?

The reality here lies perhaps somewhere deeper. Are we asking too much of our Ofsted teams? It’s very easy to spot that a school is not achieving results in line with predictions or expectations; it’s surely much harder to diagnose the causes and recommend a cure.

My own most recent experience of Ofsted was an inspection in which I recognised the outcomes (i.e. area grades) as accurate, but the recommendations as way off the mark. As has become commonplace, alongside our overall grade of Good, marking and feedback was raised as an area to improve, despite the fact that I – and colleagues – felt that other things should have been more pressing. Nevertheless, the nature of the system demands that feedback then became a focus of the school, perhaps at the cost of other more important matters.

The problem is exacerbated for schools which are in need of improvement. The race to complete a report in 2 days doesn’t allow thorough diagnosis of the needs of the school, and even then the needs are seemingly reduced to a few bullet points. Any nuance or detail is lost, and it is left to a completely separate HMI to review progress against the targets set. And what better way to show an HMI that marking is improving than to ramp up the quantity?

In discussing this today, Bill Lord (@joga5) quite rightly pointed out that the EEF Toolkit emphasises feedback as one of the key areas to support progress (particularly in relation to Pupil Premium funding, one presumes), and yet even their page quite clearly states on the matter that:

Research suggests that [feedback] should be specific, accurate and clear; encourage and support further effort and be given sparingly so that it is meaningful; provide specific guidance on how to improve and not just tell students when they are wrong; and be supported with effective professional development for teachers.

In his article, Alex Quigley mentions “stories of teachers being forced to undertake weekly marking, regardless of the stage of learning or the usefulness of feedback”. In primary schools it is now common to expect that books are marked daily, and in many cases feedback given as often. The focus here is clearly on the expectations of Ofsted, rather than on the value of the process.

Alex Quigley might be right: we can’t blame Ofsted entirely for this; school leaders do need to take some responsibility and be brave enough to stand up to inspectors who get this wrong. But at the moment, the power is all rather on one side and the consequences fall rather heavily on the other.

It’s a brave school leader who sticks his head above the parapet.

What that Ofsted clarification should have said!

There was much to welcome in the recent note of clarification from Ofsted, and may it be publicised widely. However, to my mind there is still much that wasn’t said that ought to be. Of course, whether the chiefs at Ofsted agree with me remains to be seen.

Here’s what I’d have liked to have seen:


  • Ofsted should not expect to see lessons differentiated a set number of ways. Inspectors are interested in whether or not the work is an appropriate challenge for all pupils; the number of groups within this will depend on the circumstances. Not all lessons require differentiation.
  • Ofsted should not expect to see children writing learning objectives. While it is often important that objectives are shared with children, nothing is added by forcing the copying of them at length.
  • Ofsted should not expect to see written evidence of all lessons in exercise books. Some lessons do not require written evidence; writing in learning objectives, or explanations of what was undertaken in a lesson is an unnecessary waste of time.

Marking & Target-setting

  • Ofsted should not expect to see evidence of marking of every piece of work. It is for schools to decide appropriate policies for marking and feedback, and the focus should be on impact, rather than evidence for outside bodies.
  • Ofsted should not expect to see written marking in the books of children for whom reading is at a very early stage. If it cannot directly impact on a child’s learning then it is time and effort poorly-spent.
  • Ofsted should not expect children to be able to recite their targets in every subject. While it is important that children know how to improve their work, there are many ways in which this can be achieved.
  • Ofsted should not expect children to know their ‘level’ in any subject.
  • Ofsted should not expect schools to update tracking data every six weeks (or other fixed interval). Tracking is not the same as assessment, and while on-going assessment is essential for effective teaching, tracking is only an administrative tool for leaders. Tracking should be as frequent as needed for effective leadership of the school, and no more frequent.


  • Ofsted should not expect to see identical consistency across all classrooms in a school. Departments and year teams quite rightly adapt school approaches to suit the needs of their subjects or pupils.
  • Ofsted should not expect pupils in measured groups to be identified in any way in the classroom. Students eligible for the Pupil Premium, or in local authority care should not be differentiated publicly.

It’s not an unreasonable list, is it? I will, naturally, waive all copyright demands should Ofsted wish to copy my ideas and add them to their document!

What purpose marking?


Marking – for the love of it?

I had a conversation with Mark Gilbranch (@mgilbranch) today about book scrutinies, and particularly considering the approach to monitoring marking in school. It brought to the fore, in my mind, some of the many issues with marking policies in schools, and particularly the problems with the ways in which they are both implemented and monitored – including by Ofsted!

When I commented over the weekend that I’d happily always plan and never mark, several people commented that they thought marking was an integral part of planning. I’d disagree. I’m not arguing that marking is pointless, but rather that it is not the act of marking work that helps me to know where to go next; it is merely the act of reviewing it. The actual marking should be creating dialogue with students, to allow them to make next steps without my direct presence.

And here lies the rub. Marking isn’t for the teacher, ever. And so we confuse ‘marking’ and ‘feedback’ at a cost. Some of the most important feedback that comes from reviewing work is not in the written comments, or even in the verbal feedback given to students. The most significant feedback from reviewing work should be to the teacher, indicating to him/her where the teaching ought to go next.

Critically, marking policies often overlook this vital element – even when marking policies are renamed feedback policies. The focus is always on the approaches for giving written comments (or verbal) to students. And while this is undoubtedly an important part of the work of feedback, it isn’t the most important.

Many policies now emphasise the need to give children opportunities to follow up on marking comments – and rightly so. But that isn’t always the most important part of the process either. Sometimes a piece of work shows that more drastic intervention is required, either individually or as part of a class. Sometimes the work is completed to such a high standard that a new challenge needs to be offered that can only be delivered in person, or as part of a group in the follow-up lesson. Sometimes the feedback a teacher garners from a selection of books is entirely unrelated to the learning objective of that lesson, but highlights an unconnected issue. In all these cases, a comment – in whatever colour pen the policy dictates – won’t achieve what is really needed. In these cases the feedback to the teacher, providing indications of where to take the teaching next, will be far more important than any cursory work a child could do in response to the mighty red pen.

But if policies don’t recognise this – and many don’t – then how much energy will be expended by both teachers and students on evidencing marking and responding to marking in order to demonstrate that the policy is being implemented, at the cost of real learning opportunities in the next lesson?

Re-naming marking policies as feedback policies isn’t enough. We need to be explicit in the aims of our marking & feedback policies (and, yes, they should have aims!) that feedback is provided both to teachers and students through the reviewing of work completed, and that the professional judgement of the teacher should guide the response, which may be individual comments, may be group interventions, or may be whole-class teaching to tackle a wider misconception. Not all of these can be evidenced in red pen and follow-ups, and nor should they be.

It means that when scrutinising marking – as Mark Gilbranch was talking about – we need to be explicit about what is being looked for. In some cases it may be appropriate highlighting; in others it will be specific red pen comments; in others it will be action taken by students. But most importantly, we should be looking for evidence that an intervention by the teacher, based on the review of the work completed, has had a formative and positive impact on learning. And that might not be so easy to spot – especially to an Ofsted inspector taking a quick flick through the books. We need to be clear in our policies about our approaches, and ready to demonstrate their effectiveness to all comers.

Marking is an essential part of the job… but it shouldn’t be so essential as to get in the way of teaching and learning.

Free Schools, Ofsted and Twitter (The Good, the Bad and the Ugly*)

*not necessarily in that order

Talk on Twitter tonight is of the newly-released Ofsted report which indicates that Greenwich Free School requires improvement. I don’t know the school at all, and don’t, therefore, intend to argue the rights and wrongs of the situation. Nevertheless, a few things spring to mind.

1. Free Schools have a tough audience

This is not their fault. Unfortunately, the way in which the Secretary of State for Education and his colleagues spoke about Free Schools before they were even up-and-running implied that they would, by the very virtue of their existence, be better than “ordinary” schools, raising standards all round and suchlike. Unfortunately, this inevitably upset and alienated many in the state sector, who interpreted it as a denigration of the work they did.

Reality has, rather unsurprisingly, indicated that free schools are – like all schools – liable to come in all forms and to enjoy varying degrees of success. The unfortunate consequence of the government’s claims for its schools is that any indication of this normality is seized upon by opponents. It isn’t fair, but I’m afraid the blame lies squarely in the government’s court on this one. They started it.

Some of the ‘gloating’ that has been described on Twitter is a shame, but it is also wholly predictable. Many of those people will see themselves simply as calling out the Emperor as he stands in his “new clothes”.

2. Internal data is always tricky

Many of those who support the work of the GFS are keen to point out the challenges presented by having only two year groups in school, and a lack, therefore, of any external data. I’m afraid my sympathy here is limited. I am a middle-school teacher by training (and heart), and so have only ever worked in schools where internal data has been key in determining progress and outcomes, and where Ofsted judgements could depend heavily on an inspector’s interpretation (or even notice taken) of that data.

There are middle schools in the country which are judged on data from KS2 tests after they’ve had their children for just over 2 terms. Everything for their remaining 6-10 terms is necessarily internal. It means middle school leaders have to work hard to ensure that their data is reliable. It means the National Middle Schools’ Forum has had to collate its own data to support schools on outcomes. It means that as a leader I scoured all manner of sources to try desperately to find data against which we could reliably compare our school. It means I sought out supporting evidence from partner schools about moderation and other work we’d done to demonstrate the robustness of our internal systems when Ofsted came to call.

So it’s quite possible that GFS were caused unreasonable damage because of the lack of national systems to account for schools that only go up to Y8. But it certainly isn’t the first: every middle school in the land faces that battle.

3. Year 7 (and 8) data is tricky too (with or without levels)

One of the documents I picked apart as a middle school middle-leader was the thrillingly-entitled DfE Research Report “How do pupils progress during Key Stages 2 and 3?” (DFE-RR096 if you’re interested). When final outcomes for a school are your Y7 pupils, then national comparisons are hard. There is lots of evidence about a ‘Year 7 dip’, but much less detail about how it plays out in schools and classrooms. But those comparisons were vital for us. It was essential that I knew that progress is significantly more ‘dippy’ in Reading than in Writing or Maths. I had to scan every table and chart trying to interpret data in ways that were meaningful for comparison within just KS3. I also spent a great deal of time looking at assessment structures, discussing with other schools and finding out as much as I could about the progress children make in reality during Year 7, as opposed to the straight line imagined from KS2 to KS4.

Whether GFS had used National Curriculum levels or not, the challenge for any school using non-standard outcome points (i.e. not KS1, KS2 or KS4) is to be able to know the story of your students *and* to know the comparison with others nationally. It’s much harder than the (relatively) simple task of comparing national results, but it is no less important. Perhaps it is even more so?

4. Playing the long game can backfire short-term (Be warned about PRP!)

Perhaps it’s inevitable that a new school looks to build itself over the longer term. Perhaps at the start of the GFS the focus was so much on setting the groundwork for outstanding learning and progress over the five-year period up to GCSE and beyond, that some short-term actions didn’t necessarily lead to short-term gains. There are plenty of examples of things teachers and schools can do to boost their results in the short-term, that don’t necessarily pay off over a period. Equally, there are good actions that could be taken for which rewards might not be reaped for some years. Maybe when their first cohort reaches GCSE, the evidence will show that the judgements made were right, and Ofsted’s interpretation was wrong. Perhaps that should be a warning to all of us of the risks of performance-related pay amongst other things?

5. Some things are universal

Greenwich will not be the only school to have had progress of particular groups highlighted as an issue. In this case it seems both internal data and the Ofsted judgement identify weaknesses in progress for various groups. The most recent frameworks have been very hot on this, and all schools – no matter how small their cohorts – face the same challenges. It doesn’t make this judgement any more or less fair than any other. It’s just the nature of the beast at the moment and isn’t unique to (or absent from) free schools.

6. Parental support counts for a lot

This is generally true in any case, but perhaps particularly when Ofsted come calling. If the parents are supportive of the direction of the school, then an RI judgement will be far less problematic than otherwise. Many of those parents will consider that Ofsted has its own failings and will continue to work with the school. If parents feel that Ofsted has confirmed their fears about a school and its leaders failing, then you’ve really got your work cut out.

7. All in all, a school’s a school

This is just a personal opinion, but no label, no status, no structure, no leader even, is enough on its own to create an exceptional school. And it’s even harder to do so overnight. Time will tell, but it’s quite clear that free schools are not some panacea to the problems of state education. They’re just schools.


More haste: more chaos!

I am finding it increasingly frustrating lately that so much of what is being changed in education at the moment is being rushed. It is too easy, sometimes, to complain about the politicisation of education, but these matters cannot be ignored: the haste with which policies are being changed is leading to confusion and disruption in the education system beyond that which is necessary.

Inevitably all change brings some upsets, but perhaps the worst risk of change is that a change – no matter how positive in theory – becomes a negative in and of itself because of the way it is driven.

And the examples are becoming plentiful.

Back last summer I pointed out the looming issue of children being taught and tested on different curricula because of the rush to push through the new National Curriculum.

Just this week, as I was working on some progression materials for the new primary curriculum, I stumbled across these two consecutive objectives from the new Programme of Study for Years 5 and 6 in English:

Surely any amount of simple proof-reading or editing would have picked up that these two objectives, while very differently worded, have essentially the same meaning?

Alongside the curriculum changes, we have proposed changes to assessment at the end of the Key Stage. Schools have been tasked with the creation of curriculum and assessment frameworks to meet the new requirements by September. However, as the NAHT report released a week ago points out, that leaves very little time for schools. That’s all the more significant given that the DfE has now had 4 months to respond to the consultation on primary assessment and has failed to do so. If the department, which presumably has staff specifically tasked with such things, cannot manage such speed, on what grounds does it expect schools to do so?

But the rush is not limited to the implementation of the new curriculum.

Blogger Andy Jolley has been relentless in his efforts (via his excellent blog) to hold the department (and the Deputy PM) to account for their actions in implementing the proposed free school meals for infants programme. It has been dogged by alterations (the “hot” seems to have disappeared) and rushed decisions. That was further highlighted today when he asked a question of the department:

Once again, it is clear that in the rush to be seen to implement a politically-decided policy, the department itself cannot keep up with the pace required. It has been forced to rush through arrangements for academies to apply for funding, and yet is unable to indicate to schools – which will be in the throes of preparing budgets – exactly how the funding will work for this hasty plan!

It seems that the haste extends beyond the bounds of the department, too. Much has been said in the last couple of days about the meeting of some well-known bloggers with officials from Ofsted. As David Didau posted in his blog on the meeting:

On the subject of lesson grading, he said, boldly, categorically and unequivocally that inspectors should not be grading individual lessons, and they should not be arriving at a judgment for teaching and learning by aggregating lesson grades.

At first this seems to have been music to the ears of many: official word that lesson grading shouldn’t be happening in inspections, and the implication that schools should cease the practice immediately.

Except the words of the official from that meeting don’t seem to match with the evidence from the documentation that guides inspectors in their work.

It is true that the handbook for school inspection is clear that aggregated grades from lesson observations should not be used to reach the overall teaching judgement:


However, it is clearly noteworthy that the implication is that such grades would exist. This implication is further supported by a statement earlier in the handbook about reaching such judgements:

Once again, the guidance is clear that grades of some sort will be recorded to support judgements. The note indicating that short observations might not be graded clearly implies that longer observations (i.e. those of 25 minutes or more) will be.

The implication is further supported in the subsidiary guidance:

Once again, the suggestion is clear that grades should be shared, and the statement is repeated in paragraph 67 that such grades should not be aggregated. (Once again, the spectre of poor proof-reading appears to raise its head in paragraph 67, too!)

It seems that the rush for change – even if it is supported by many teachers – seems to be causing confusion. How are Additional Inspectors in schools meant to act? In accordance with the guidance in writing, or the broader messages that seem to be emanating from the centre?

And so continues the same problem. This isn’t an argument for the de-politicisation of education. Far from it. However, it is a plea for a little less haste in making change: it’s clearly becoming unmanageable!

If Ofsted inspected government departments…?

Not entirely serious. Probably.

DfE Ofsted Report?