Monthly Archives: May 2016

Our Ofsted experience

I’m reliably assured that mentioning Ofsted is bound to get a spike in visits to one’s blog page, so let’s see.

About a month ago, we were thrilled to receive that lunchtime phone call that meant the wait was finally over. As any school with a ‘Requires Improvement’ label (or worse) will know, although perhaps never quite ‘welcome’, there comes a point where the Ofsted call is desired, if only to end the waiting. We wanted to get rid of the label, and so this was our chance.

We’d been “due” for a few months, but knew that it could be as late as the summer, so in the end, the second week after Easter didn’t seem so bad (particularly as it left us with a long weekend in the aftermath).

So how did it go? Well, for those of you interested in grades, I am now the deputy headteacher of an officially GOOD school. It’s funny how that matters. Six weeks ago, I was just deputy of an unofficially good one.

But those of you still awaiting the call will be more interested in the process than the outcome, so let me start by saying that having spent the past 18 months building up my collection of “But Sean Harford says…” comments, I didn’t have to call upon it once. The team who visited us were exemplary in their execution of the process according to the new guidance and myth-busting in the handbook.

In the conversation on the day of the phone call, we covered practicalities, and provided some additional details to the lead inspector: timetables, a copy of our latest SEF (4 pages of brief notes – not War and Peace) and the like. And then we set about preparing. We had only just that week been collating teachers’ judgements of children’s current attainment into a new MIS, so it was a good opportunity for us to find out how it worked in practice!

We don’t keep reams of data, we don’t use “points of progress”, and we’ve gone to some length to avoid recreating levels. All for good reasons, but always aware that a ‘rogue’ team could find it hard to make snap judgements, and so make bad ones. The data we provided to the team was simple: the proportion of children in each year group who teachers considered were “on track” to meet, or exceed, end-of-Key-Stage expectations. We compared some key groups (gender, Pupil Premium, SEN) and that’s it. It could all fit on a piece of A4. So when it came to the inspection itself, there was a risk.
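For anyone interested in the mechanics, the summary really was that small. A minimal sketch of how such a one-page summary might be produced from teacher judgements could look like the Python below – the field names and example records are hypothetical, not our actual MIS.

```python
from collections import defaultdict

# Each record is one child's teacher judgement of whether they are "on track"
# to meet, or exceed, end-of-Key-Stage expectations (hypothetical data).
judgements = [
    {"year_group": 2, "pupil_premium": True,  "status": "on track"},
    {"year_group": 2, "pupil_premium": False, "status": "exceeding"},
    {"year_group": 4, "pupil_premium": True,  "status": "not on track"},
    {"year_group": 4, "pupil_premium": False, "status": "on track"},
    # ... one record per child
]

def proportion_on_track(records, key):
    """Proportion judged at least 'on track', split by the given field."""
    totals, on_track = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[key]] += 1
        if record["status"] in ("on track", "exceeding"):
            on_track[record[key]] += 1
    return {group: on_track[group] / totals[group] for group in totals}

print(proportion_on_track(judgements, "year_group"))     # by year group
print(proportion_on_track(judgements, "pupil_premium"))  # Pupil Premium vs not
```

That’s the whole extent of the headline data: a handful of proportions per year group and per key group.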

Day One

It may be a cliché to say it, but the inspection was definitely done with rather than to us. The first day included joint observations and feedback with the headteacher, as well as separate observations (we had a 3-person team). An inspector met with the SENCo, and the lead also met with English and Maths subject leaders (the former of which happens to be me!) and our EYFS leader.

The main question we were asked as subject leaders was entirely sensible and reasonable: what had we done to improve our subjects in the school? I think we both managed to answer the “why?” and “what impact?” in our responses, so further detail wasn’t sought there, but it was clear that impact was key.

Book Scrutiny

The afternoon of the first day was given over to book scrutiny. We provided books from across the ability range in the core subjects, as well as ‘theme’ books for each team. The scrutiny focused most closely on Years 2, 4 and 6, which fits both with the way we structure our classes and our curriculum and assessment approach. Alongside books, we provided print-outs for some children that showed our judgements on our internal tracking system. I’m not sure whether the focus was set out as clearly as this, but my perception of the scrutiny (with which both my headteacher and I were involved) was that the team were looking at:

  • Was the work of an appropriate standard for the age of the children? (including content, presentation, etc.)
  • Was there marking that was in line with the school’s policy? (one inspector described our marking – positively – as “no frills”, which I quite liked)
  • Was there evidence that children were making progress at an appropriate rate for their starting points?

They asked for the feedback policy in advance, and referred to it briefly, but the focus on marking was mainly on checking that our practice matched what we said we did, and that where marking was used, it helped lead to progress. Some pages in books were unmarked. Some comments were brief. Not all had direct responses – but there was evidence that feedback was supporting progression.

Being involved in the process meant that we could provide context (‘Yes, this piece does look amazing but was quite heavily structured; here’s the independent follow-up’; ‘Yes, there is a heavy focus on number, but that’s how our curriculum is deliberately structured’, etc.). But it also meant a lot of awkward watching and wondering – particularly when one inspector was looking closely at the books from my class!

The meeting at the end of the first day was a reasoned wander through the framework to identify where judgements were heading and what additional information might be needed. We were aware of one lower-attaining cohort, which was identified, so offered some further evidence from their peers to support our judgements. There was more teaching to be seen to complete the evidence needed for that. And there was one important question about assessment.

Assessment without levels

I had expected it. Assessment is so much more difficult for inspectors to keep on top of in the new world, and so I fully expected to have to explain things in more detail than in the past. But I was also slightly fearful of how it might be received. I needn’t have been this time. The question was perfectly sensible: our key metric is about children being “on track”, so how do we ensure that those who are not on-track (and not even close) are also making good progress?

That’s a good question; indeed it might even have been remiss not to have asked it! We were happily able to provide examples of books for specific children, along with our assessments recorded in our tracker to show exactly what they were able to do now that they couldn’t do at the end of last academic year. It gave a good opportunity to show how we focus classroom assessment on what children can and can’t do and adapt our teaching accordingly; far more important than the big picture figures.

Day Two

On the second day I observed a teacher alongside the lead inspector, and was again pleased by the experience. Like all lessons, not everything went perfectly to plan, but when I reported my thoughts afterwards, we had a sensible discussion about the intentions of the lesson and what had been achieved, recognising that the deviation from the initial plan was good and proper in the circumstances. There was no sense of inspectors trying to catch anyone out.

Many of the other activities were as you’d expect: conversations with children and listening to readers (neither of which we were involved in, but I presume they acquitted themselves well); meeting with a group of governors (which I also wasn’t involved in, but they seem to have acquitted themselves well too 🙂); a conversation about SMSC and British Values (with a brief tour to look at examples of evidence around the school); watching assembly, etc.

Then, on the afternoon of day two we sat with the inspection team as they went through their deliberation about the final judgements. In some ways it’s both fascinating and torturous to be a witness in the process – but surely better than the alternative of not being!

As with any good outcome, we got the result we felt we were due (and deserved), and areas for feedback that aligned with what was already identified on our development plan for the forthcoming year. The feedback was constructive, formative, and didn’t attempt to solve problems that didn’t exist.

And then we went to the pub!

A policy for feedback, not marking

Less than two years ago I worked with a colleague on an update of our marking policy. Part of the change was a shift to calling it a feedback policy.

Unfortunately, what we did was give it “Feedback policy” as a title, and then write a marking policy. Old habits die hard. But this is one that I’m determined to kill off. So over the past term and a half I have worked with staff across my school (mainly those who are full-time classroom teachers) to develop a new approach which is properly rooted in Feedback.

In doing so, I had a few aims, but the most pressing was to shift the focus from marking for evidence to a policy which identified evidence of feedback. (There is, after all, a reality that someone will want to scrutinise it at some point!) Part of the reason for that was my determination to try to reduce marking workload. To allow us all to Do Less, But Better.

As yet the policy has not been finalised and approved by governors, but as I have spoken about it, and many have asked about it, I thought it would be useful to set out some key points here.

Key Principles

The policy deliberately starts from some key evidence drawn from the EEF toolkit summary of research into Feedback:

[Image: extract from the EEF Toolkit summary of research into feedback]

It’s notable that none of this requires written marking. Therefore, upon this evidence is built our outline of the key principles that underpin the policy. I would argue that these are the most important elements for teachers:

[Image: the key principles that underpin the policy]

Perhaps the first bullet point is the most important. I have had endless conversations with teachers who tell me that they are marking for someone other than the children. We want to put a stop to that.

The fifth point is also significant: perhaps the most valuable feedback that happens in schools has nothing to do with marking: it is the feedback a teacher gathers as a lesson progresses. That is where real immediate action can have immediate impact.

Feedback in Practice

Building on the work of the Assessment Commission, we have set out how feedback is given in three ways (in order of decreasing importance):

  1. Immediate feedback – at the point of teaching
  2. Summary feedback – at the end of a lesson/task
  3. Review feedback – away from the point of teaching (including written comments)

Again, it’s deliberately written to imply that written marking should be an approach of last resort. Often other methods are more appropriate, whether that be individual pointers in the lesson, follow-up tasks, or lesson adaptation based on reviews of work.

I’ve written before about the law of diminishing returns when it comes to marking books. Put simply, the most valuable feedback that comes from marking a book occurs in the first few seconds of looking at it. Teachers can make a lot more use of that quick feedback than children ever will of written comments.

Consequently, our policy deliberately aims to give teachers the room to use the most effective forms of feedback, without insisting on the demands of written marking where it is unnecessary.

What about evidence?

The main reason most schools seem reluctant to move away from written marking is the need for evidence. How will Ofsted / the Local Authority / the RSC know that we’re giving good feedback, if it’s not written in the book?

The answer to this is obvious really, when we think about it. How does any school leader know where good effective teaching habits are being used? They watch the teaching.

So rather than trying to make our approach fit the need for evidence, we’ve taken the opposite approach: we’ll use the best methods available to us, and signpost inspectors and others to the evidence they will find (which often won’t be written):

[Image: where evidence of effective feedback can be found]

We’ve tried to make it obvious. We could have gone simpler: I was tempted simply to write “Go and look in the classroom!”, but I think we’ve found something a little clearer.

My hope is that with this clarity, teachers will feel able to lay off the red pen a bit, and start instead making decisions about how best to spend their time. Often, in my view, a cursory glance at the books, followed by a re-think of the next lesson is far more effective than any amount of comments. Even better if that re-think can be part of a collaborative discussion with other teachers in the team.

Other details

There’s more to the policy than these broad strokes. We do still have a written marking approach, and will still make use of highlighters to pick out key points and stampers to indicate common messages (particularly in KS1). We will still set targets based on our Key Objective framework.

But hopefully, we will also see teaching teams using more of their time next year to collaborate on planning the most effective lessons, and finding common solutions to common problems. And they may even get an evening or two back to spend with their families.

Do Less, But Better

Thoughts on the latest fiasco

If I’m honest, I feel a bit sorry for the DfE today. But Nick Gibb did his best to temper any sympathy I felt. So here are a few thoughts on the latest in what seems to have been a long run of cock-ups – the accidental release of GPS test papers and mark schemes to markers a day early.

The DfE are off the hook (well, almost)

It seems that this particular mistake was entirely the fault of the private contractor, Pearson. The department has outsourced the marking arrangements, fairly reasonably, and the organisation has let them down. Notably, the Chief Executive of that company admitted their mistakes immediately. Perhaps something the minister could learn from here?

Tests weren’t compromised

In reality, fewer than 100 people actually accessed the tests, all of whom were under contractual obligation to keep confidential any knowledge they acquired in the course of their work. Nick Gibb was right to say that many markers already have such access and have to be trusted to keep it to themselves. I see no evidence that a test paper was actually shared, so no reason to cancel the tests particularly.

No evidence of a ‘rogue’ marker

I also see no evidence that any test paper was “passed to a journalist”. The fact that a journalist came to know of the error is not the same thing. As yet, we don’t know how that came about, and so Nick Gibb had no business making such claims. This smacks of desperation, and as Tony Parkin commented on the Schools Week article earlier today, it may be that had the marker not alerted the press, we would never have known about it. Personally, I prefer that the department be held to account, particularly given its wholly incompetent handling of the whole assessment debacle elsewhere. If the marker who shared the information had really intended to undermine the tests, it could have been done far more efficiently by many other means than reporting it to a reputable journalist.

Sources must be protected

Journalists have every right (and hopefully every intention) to protect their sources. The 93 markers who downloaded the document should not be harassed or accused in any way. For a start, at least 92 of them have done nothing wrong, and should not be hampered by the errors caused by their employer. Indeed, they are owed an apology for being put in this very difficult position.

In the case of the 93rd, Pearson should absolutely investigate how it became possible for this incident to occur. I cannot see how any further action could be taken to identify who shared the information.

No harm done today

In the grand scheme of things, this ought to be a minor sideline at the end of a news bulletin. Mistakes happen every year with exams; it’s inevitable. It’s a massive operation running on very tight timescales. There was no real harm done to students or teachers today, only to reputations at Pearson and the DfE.

The only reason that it has become such a big story is because it comes on the back of error, after error, after error. And all in a rush which was widely predicted to be catastrophic by the profession.

I feel for the civil servants who have been forced to rush through the rapid changes in unmanageable time frames. But I have no sympathy at all for ministers who have time and time again claimed that they “make no apology” for their actions. It’s time they recognised that this year’s assessment process has been a disgrace, and that they – to use a word of Nick Gibb’s choice today – are the culprits.

Is it time to bring Ofsted in from the cold?

There’s been a clear rehabilitation process over many months now, and both Mike Cladingbowl and Sean Harford have done a great deal to try to win over the support of the profession. But there is still a lot of damage to repair. Ofsted is not yet a friend of the profession at large.

And arguably, nor should it be. An inspectorate should not be too cosy with those whom it inspects. But it must garner the trust and respect of the profession if it is to achieve its best in raising standards within it.

In the past it definitely failed. Too often schools and teachers found themselves doing things for Ofsted which did not help children to make progress, and sometimes even distracted teachers from that all-important role. Ofsted was seen too often as a punitive scrutiny of the minutiae, rather than a healthcheck on the quality of provision in schools.

Hopefully things are changing. Three years ago, Tom Sherrington told us all:

Overlooking his scandalous failure to use the subjunctive form – clearly not secondary ready – it seemed a fair point. More recently, it’s a point that has often been echoed by Sean Harford:

So is it time to let go of a difficult past and try to re-integrate Ofsted into our society?

Because the alternatives may be worse.

Nobody welcomes being inspected. It’s a necessarily high stakes event, and in some ways the fact that it’s carried out by real people can make it feel worse. But surely the evaluation of our work by real people has got to be better than evaluation by data?

We all know of plenty of stories of schools with good results using worrying practices, or schools with easy intakes failing to challenge their pupils. Equally, we can all see examples of schools struggling to crawl their way up league tables who are nevertheless achieving great things with the pupils in their care. An appropriate and well-managed inspection process can appreciate these variations, can discuss situations with schools, and can still offer the necessary challenge. True, it may not be perfect – indeed, for too long it hasn’t been.

But the alternatives may be worse.

Regional Schools Commissioners have an unmanageable number of schools to monitor and so have already shown themselves to be dependent on overly-simplistic numerical data with too little thought for the detail behind it. Performance tables can only ever show the narrowest of views of what a school achieves. And if we’re honest, as much as a mutual support structure would be a delight, we are very unlikely to see such a diminishment of central accountability.

It’s tempting to say “Better the devil you know…”, but if we think that Ofsted is the devil, that doesn’t give us a lot of metaphorical scope for the alternatives.

Be careful what you wish for.

Floor Standards: who’s pulling the strings?

As announcements go, it could be argued to be both momentous and unsurprising. When Nicky Morgan announced at the NAHT conference that the number of schools falling below the floor standard would rise by no more than 1%, it was the first time that it was officially confirmed that ministers will set the floor standards at a level that allows them to manipulate the numbers to suit their needs.

First, let me just clear up some confusion about how this can work. Many people are now concerned that test thresholds for the mythical 100 score will be manipulated. But there is no need for ministers to do this. The test thresholds can still be set using the performance descriptors set out in the test frameworks – a perfectly legitimate process which will probably also lead to test outcomes roughly equating to the old Level 4b standard.

The behind-the-scenes frippery can then happen with the progress measure. While the attainment outcomes are fixed against a common benchmark, the progress measure is based each year on the performance of children compared to others with similar starting points. For those unfamiliar with the calculations, the process is explained in this video:

You’ll notice towards the end of the video that it hasn’t been decided yet what score will count as “sufficient progress”. Saturday’s announcement confirms that it will be set, presumably on ministerial instruction, to ensure that approximately 680 schools end up below the floor standard.

It’s worth noting that this won’t necessarily be the 680 schools with the worst progress scores: any school that has a high-attaining intake and so makes poor progress but still meets the 65% attainment standard will be off the hook.
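To make the mechanism concrete, here is a rough sketch of that logic – with entirely made-up numbers and my own assumptions about the mechanics, since the real calculation is the DfE’s to publish. A school only falls below the floor if it misses the 65% attainment standard and falls below the “sufficient progress” threshold; the threshold itself can then be chosen to leave roughly a chosen number of schools below.

```python
import random

random.seed(1)

# Hypothetical schools: attainment = proportion of pupils meeting the expected
# standard; progress = a value-added style score (pupils compared with others
# who had similar starting points). The count of 15,000 is an assumption.
schools = [
    {"attainment": random.uniform(0.3, 1.0),
     "progress": random.gauss(0.0, 3.0)}
    for _ in range(15000)
]

def below_floor(school, progress_threshold):
    # A school escapes the floor by meeting the 65% attainment standard,
    # however weak its progress ("off the hook", as above).
    return school["attainment"] < 0.65 and school["progress"] <= progress_threshold

def threshold_for_target(schools, target=680):
    # Pick the "sufficient progress" cut-off that leaves approximately
    # `target` schools below the floor. Only schools already missing the
    # attainment standard can be caught by it.
    eligible = sorted(s["progress"] for s in schools if s["attainment"] < 0.65)
    return eligible[min(target, len(eligible)) - 1]

threshold = threshold_for_target(schools)
count = sum(below_floor(s, threshold) for s in schools)
print(f"progress threshold = {threshold:.2f}, schools below floor = {count}")
```

Run forwards, a fixed threshold tells you which schools are below the floor; run backwards, as in the sketch, a target number of schools tells you where to set the threshold. Saturday’s announcement suggests the calculation is being run backwards.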

More detail about the floor and coasting standards is explained here:

The fact that the power is in the Secretary of State’s hands is no real surprise, but it’s a worry. It’s all very well us welcoming the news that numbers of schools below floor won’t soar this year, but what about next year? And the next?

If it is for the minister to decide how many schools are deemed to be failing each year, then can we look forward to an inexorable rise in success just before the next general election? And doubtless plummeting rates every time a new government needs to prove its mettle?

It seems there’s no pretence about educational reasons any more. Just ministerial need.