I’m reliably assured that mentioning Ofsted is bound to get a spike in visits to one’s blog page, so let’s see.
About a month ago, we were thrilled to receive that lunchtime phone call that meant the wait was finally over. As any school with a ‘Requires Improvement’ label (or worse) will know, although perhaps never quite ‘welcome’, there comes a point where the Ofsted call is desired, if only to end the waiting. We wanted to get rid of the label, and so this was our chance.
We’d been “due” for a few months, but knew that it could be as late as the summer, so in the end, the second week after Easter didn’t seem so bad (particularly as it left us with a long weekend in the aftermath).
So how did it go? Well, for those of you interested in grades, I am now the deputy headteacher of an officially GOOD school. It’s funny how that matters. Six weeks ago, I was just deputy of an unofficially good one.
But those of you still awaiting the call will be more interested in the process than the outcome, so let me start by saying that having spent the past 18 months building up my collection of “But Sean Harford says…” comments, I didn’t have to call upon it once. The team who visited us were exemplary in their execution of the process according to the new guidance and myth-busting in the handbook.
In the conversation on the day of the phone call, we covered practicalities, and provided some additional details to the lead inspector: timetables, a copy of our latest SEF (4 pages of brief notes – not War and Peace) and the like. And then we set about preparing. We had only just that week been collating teachers’ judgements of children’s current attainment into a new MIS, so it was a good opportunity for us to find out how it worked in practice!
We don’t keep reams of data, we don’t use “points of progress”, and we’ve gone to some length to avoid recreating levels. All for good reasons, but always aware that a ‘rogue’ team could find it hard to make snap judgements, and so make bad ones. The data we provided to the team was simple: the proportion of children in each year group whom teachers considered “on track” to meet, or exceed, end-of-Key-Stage expectations. We compared some key groups (gender, Pupil Premium, SEN) and that’s it. It could all fit on a piece of A4. So when it came to the inspection itself, there was a risk.
Day One
It may be a cliché to say it, but the inspection was definitely done with rather than to us. The first day included joint observations and feedback with the headteacher, as well as separate observations (we had a 3-person team). An inspector met with the SENCo, and the lead also met with English and Maths subject leaders (the former of which happens to be me!) and our EYFS leader.
The main question we were asked as subject leaders was entirely sensible and reasonable: what had we done to improve our subjects in the school? I think we both managed to answer the “why?” and “what impact?” in our responses, so further detail wasn’t sought there, but it was clear that impact was key.
Book Scrutiny
The afternoon of the first day was given over to book scrutiny. We provided books from across the ability range in the core subjects, as well as ‘theme’ books for each team. The scrutiny focused most closely on Years 2, 4 and 6, which fits with both the way we structure our classes and our curriculum and assessment approach. Alongside books, we provided print-outs for some children that showed our judgements on our internal tracking system. I’m not sure whether the focus was set out as clearly as this, but my perception of the scrutiny (with which both my headteacher and I were involved) was that the team were looking at:
- Was the work of an appropriate standard for the age of the children? (including content, presentation, etc.)
- Was there marking that was in line with the school’s policy? (one inspector described our marking – positively – as “no frills”, which I quite liked)
- Was there evidence that children were making progress at an appropriate rate for their starting points?
They asked for the feedback policy in advance, and referred to it briefly, but the focus was mainly on checking that marking matched what we said we did, and that where it was used, it helped lead to progress. Some pages in books were unmarked. Some comments were brief. Not all had direct responses – but there was evidence that feedback was supporting progression.
Being involved in the process meant that we could provide context (‘Yes, this piece does look amazing but was quite heavily structured; here’s the independent follow-up’; ‘Yes, there is a heavy focus on number, but that’s how our curriculum is deliberately structured’, etc.). But it also meant a lot of awkward watching and wondering – particularly when one inspector was looking closely at the books from my class!
The meeting at the end of the first day was a reasoned wander through the framework to identify where judgements were heading and what additional information might be needed. We were aware of one lower-attaining cohort, which was identified, so offered some further evidence from their peers to support our judgements. There was more teaching to be seen to complete the evidence needed for that. And there was one important question about assessment.
Assessment without levels
I had expected it. Assessment is so much more difficult for inspectors to keep on top of in the new world, and so I fully expected to have to explain things in more detail than in the past. But I was also slightly fearful of how it might be received. I needn’t have been this time. The question was perfectly sensible: our key metric is about children being “on track”, so how do we ensure that those who are not on-track (and not even close) are also making good progress?
That’s a good question; indeed it might even have been remiss not to have asked it! We were happily able to provide examples of books for specific children, along with our assessments recorded in our tracker to show exactly what they were able to do now that they couldn’t do at the end of last academic year. It gave a good opportunity to show how we focus classroom assessment on what children can and can’t do and adapt our teaching accordingly; far more important than the big picture figures.
Day Two
On the second day I observed a teacher alongside the lead inspector, and was again pleased by the experience. Like all lessons, not everything went perfectly to plan, but when I reported my thoughts afterwards, we had a sensible discussion about the intentions of the lesson and what had been achieved, recognising that the deviation from the initial plan was good and proper in the circumstances. There was no sense of inspectors trying to catch anyone out.
Many of the other activities were as you’d expect: conversations with children and listening to readers (neither of which we were involved in, but I presume they acquitted themselves well); meeting with a group of governors (which I also wasn’t involved in, but they seemed to acquit themselves well too); a conversation about SMSC and British Values (with a brief tour to look at examples of evidence around the school); watching assembly, etc.
Then, on the afternoon of day two we sat with the inspection team as they went through their deliberation about the final judgements. In some ways it’s both fascinating and torturous to be a witness in the process – but surely better than the alternative of not being!
As with any good outcome, we got the result we felt we were due (and deserved), and areas for feedback that aligned with what was already identified on our development plan for the forthcoming year. The feedback was constructive, formative, and didn’t attempt to solve problems that didn’t exist.
And then we went to the pub!