Monthly Archives: October 2014

The trouble with Ofsted and marking…

Alongside other news on education research in the press today comes an article in the TES about marking. According to the TES blog, Alex Quigley (@HuntingEnglish) argues in it that we cannot wholly blame Ofsted for the current marking and feedback workload in schools. I'll confess that I've not yet read the article in the paper, so I don't intend to challenge this directly, but I do want to explain why I think Ofsted continues to be a driver of workload in this area, and perhaps how this reflects some of its deeper flaws.

Firstly, there can be no doubt that increasingly Ofsted reports have identified marking or feedback as an area for improvement in their recommendations. In fact, it’s quite hard to track down an Ofsted report which doesn’t recommend an improvement in marking and/or feedback, harder still to find one which praises quality of marking. Even among Outstanding school inspections, feedback on feedback is mixed at best. Of the 18 schools currently listed on Watchsted as having a recent Outstanding grade (including, therefore, Outstanding Teaching), just four list marking/feedback as a strength, with a fifth indicating that it is “not Outstanding”.

The limitations of Watchsted meant I could only look at the 10 most recent reports for other categories, but in every case, all 10 examples showed that feedback was a recommendation, rather than a strength. It seems that even where schools are graded as Good or Outstanding, it's difficult to get inspectors to praise marking.

One Outstanding school is hit with both praise and criticism on the matter:

Pupils are given clear guidance on how to improve their work or are set additional challenges in literacy and mathematics. This high quality feedback is not always evident in other subjects.

Ofsted report for Acresfield Community Primary School, Chester

The school is challenged to raise the standards of marking in other subjects to meet the high quality in the core areas.

Another school’s report, which praises the quality of marking in the recommendations, also contains a sting in its tail:

Marking, although not outstanding, promotes an increasingly consistent, and improving high-quality dialogue between teachers and pupils.

Ofsted report for Waddington All Saints Primary School, Lincoln

Later in the report comes the recommendation that the school "Accelerate pupils’ progress even more by ensuring that the marking of pupils’ work consistently promotes even higher quality dialogue between teachers and pupils in all classes.” And this is not an old report; the inspection took place this month!

Is it perhaps the case that marking and feedback has become the ‘go-to’ recommendation for inspectors when they need to justify an outcome, or to find a recommendation to make? Can it really be the case that only 4 of the last 200 primary and secondary schools inspected have marking and feedback of sufficiently high quality to note as a strength? Or that it is near impossible to find a school that doesn’t need to significantly improve its marking & feedback?

Here lies the problem with the recent clarification document from Ofsted: it’s all very well saying that inspectors won’t expect to see “unnecessary or extensive written dialogue”, but how does that sit with the recommendation that a school needs to promote “an increasingly consistent, and improving high-quality dialogue between teachers and pupils”? Where do we draw the line between the two?

The reality here lies perhaps somewhere deeper. Are we asking too much of our Ofsted teams? It’s very easy to spot that a school is not achieving results in line with predictions or expectations; it’s surely much harder to diagnose the causes and recommend a cure.

My own most recent experience of Ofsted was an inspection in which I recognised the outcomes (i.e. area grades) as accurate, but the recommendations as way off the mark. As has become commonplace, alongside our overall grade of Good, marking and feedback was raised as an area to improve, despite the fact that I – and colleagues – felt that other things should have been more pressing. Nevertheless, the nature of the system demanded that feedback then became a focus of the school, perhaps at the cost of other more important matters.

The problem is exacerbated for schools which are in need of improvement. The race to complete a report in 2 days doesn’t allow thorough diagnosis of the needs of the school, and even then the needs are seemingly reduced to a few bullet points. Any nuance or detail is lost, and it is left to a completely separate HMI to review progress against the targets set. And what better way to show an HMI that marking is improving than to ramp up the quantity?

In discussing this today, Bill Lord (@joga5) quite rightly pointed out that the EEF Toolkit emphasises feedback as one of the key areas to support progress (particularly in relation to Pupil Premium funding, one presumes), and yet even their page quite clearly states on the matter that:

Research suggests that [feedback] should be specific, accurate and clear; encourage and support further effort and be given sparingly so that it is meaningful; provide specific guidance on how to improve and not just tell students when they are wrong; and be supported with effective professional development for teachers.

In his article, Alex Quigley mentions “stories of teachers being forced to undertake weekly marking, regardless of the stage of learning or the usefulness of feedback”. In primary schools it is now common to expect that books are marked daily, and in many cases feedback given as often. The focus here is clearly on the expectations of Ofsted, rather than on the value of the process.

Alex Quigley might be right: we can’t blame Ofsted entirely for this; school leaders do need to take some responsibility and be brave enough to stand up to inspectors who get this wrong. But at the moment, the power is all rather on one side and the consequences fall rather heavily on the other.

It’s a brave school leader who sticks his head above the parapet.


Whose History curriculum is it anyway?

After months of secrecy – for no clear reason – at the DfE, I got a surprising response to my FOI request this month. I had expected to be told that the names of the people whose advice was sought about the re-drafting of the curriculum would be withheld, so it was quite a shock to see them set out before me.

Since the list was published, others have taken a great interest in it, and our enquiries are now greatly supported by the efforts of Marina Robb (@MarinaRobb) who has taken the time to try to find out some brief details about each of the panel members. The work below is all hers (save for the formatting):

1. Scott Baker: Head of History at the Robert Clack School in Dagenham and History rep on the Academic Steering Group of The Prince’s Teaching Institute (Secondary Education)
2. Lord Bew: Professor of Irish Politics (Higher Education) [Politics/Stance:NeoCon Henry Jackson Society]
3. Professor Jeremy Black:  Professor of History at the University of Exeter (Higher Education) [Politics/Stance: Conservative]
4. Professor Arthur Burns: Professor of History at KCL and  Vice President of Royal Historical Society – specialist in the History of the Church of England (Higher Education/History Advocacy)
5. Jamie Byrom: Schools History Project (Schools Consultant/History Advocacy)[Politics/Stance: Thematic Enquiry Based Learning]
6. Daisy Christodoulou: Briefly an English Teacher, now an Education Consultant (Secondary Background: English) [Politics/Stance: Traditional Knowledge Curriculum]
7. Christine Counsell: Senior Lecturer PGCE History Cambridge, former Secondary School teacher (Higher Education/Secondary Education)
8. Jackie Eales: Professor of early modern history at Canterbury Christ Church University and president of the Historical Association (Higher Education/History Advocacy)
9. Rebecca Fraser (?): Author “A People’s History of Britain” (History Author/Writer) [Politics/Stance: Conservative]
10. Dr David Green (?): Head of Civitas [Politics/Stance: Right of Centre]
11. Elizabeth Hutchinson: Former head of history, Parkstone Grammar School, Poole; contracted by the DfE to draw up GCSE History subject content (Secondary Background)
12. Matthew Inniss: Subject Leader for History and an Economics Teacher at Paddington Academy in Westminster. (Secondary Education)
13. Dr Seán Lang: Senior Lecturer in History, specialising in the history of the British Empire, Chair of the Better History Group (Higher Education/History Advocacy) [Politics/Stance: Traditional Knowledge Curriculum]
14: Jennifer Livesey (?): Primary Teach First (Primary Education)
15: Chris McGovern: Campaign for Real Education, former History teacher, Prep School Head (Secondary Background/Education Advocacy)  [Politics/Stance: Traditional Knowledge Curriculum]
16: Dr Michael Maddison: Ofsted Lead Inspector for History (Schools Consultant/History Advocacy)
17. Andrew Payne:  Head of Education & Outreach at The National Archives
18: Robert Peal: Former Secondary School History Teacher (2 years), then Research Fellow at right-of-centre Civitas (Secondary Background/Education Consultant) [Politics/Stance: Traditional Knowledge Curriculum]
19: Katherine Rowley Conwy: Head of Sixth Form Highbury Fields School (Secondary Education) [Politics/Stance: Seems to be British History focus]
20: Rebecca Sullivan: Chief Executive at The Historical Association previously Senior Humanities Publisher at Pearson Education (History Advocacy/Education Consultancy)
21: Professor Robert Tombs: History fellow at St John’s College, Cambridge and Politeia (Higher Education/Political Think Tank) [Politics/Stance: Right of Centre]
22. Jonny Walker: Teach First Primary
23: Dr Nick Winterbotham: Chairman, Group for Education in Museums (GEM); also runs Winterbotham and Associates, offering leadership advice, marketing, entrepreneurship, etc. (Education Consultant)

Teaching today: not enough evidence; too much evidencing.

The Department for Education are consulting at the moment on the causes of teacher workload, presumably with a view to implementing some sort of effort to reduce it. While I want to laud the department for its efforts, I also feel that they’ll be largely fruitless. Not least because very rarely is the department itself responsible for matters of workload.

Of course people will point out that changes to the curriculum and examination boards come with their own workload, and I don’t disagree. But I also can’t see any value in arguing that these things should never change. And true, perhaps the pace and frequency of change is at fault, and so well worth reporting to the DfE.

However, as far as I can see, the real drivers of workload are not policy decisions from the department, but rather the practices of the inspectorate, and particularly its determination to see evidence.

There has been plenty of talk over the last couple of years on evidence in education, from Ben Goldacre to Tom Bennett’s ResearchEd. New approaches to evidence should be welcomed in our profession. But what I’d really like to see is a new approach to evidencing. That is, I’d like to see a change to the current situation where the action of providing evidence for actions is valued more highly than the impact of such actions. The act of evidencing work has become more highly rated than the evidence itself.

Across the country, schools implement policies to protect themselves from the wrath of Ofsted by demonstrating actions. Differentiation is not just based on the needs of the class, but on the need for it to be seen by observers. It is no longer enough for a teacher to adapt their teaching to the needs of pupils; rather it must be evidenced using 3 or 5 differentiated tasks, or sections on a lesson plan.

Feedback has ceased to be about “information given to the learner and/or the teacher about the learner’s performance”[1], but instead has become about evidencing feedback through marking dialogue and endless volumes of red pen. Verbal feedback might be most effective, but is only permitted if evidenced by a stamp or annotation (or, increasingly, both!).

It’s not enough to manage behaviour effectively and deal with misbehaviour appropriately when it arises; the process must now be evidenced for inspectors to examine should they wish.

Progress is no longer a matter of ensuring that children achieve the most from their learning, but rather of evidencing that they have completed more of the long march through the sub-levels. The new consultation on performance descriptors serves only to show that all the talk of school-led assessment is soon replaced by the need for evidenced outcomes.

Of course, whether or not any of these things are intended by the department is beside the point. All the while Ofsted criticises schools for failing to evidence things, or praises those schools which excel at producing evidence, other school leaders will feel compelled to continue demanding that work be evidenced.

Regardless of what the educational evidence says.

[1] This is the explanation of ‘feedback’ at the very useful EEF Toolkit page, which also states that feedback should be given “sparingly so that it is meaningful”. Not sure how that fits with Ofsted’s current approach!

The art of simplification

When the DfE announced the removal of levels as a system of national assessment, they cited the issue that they were “complicated and difficult to understand, especially for parents”.

Historically, at the end of KS2 parents have received a report indicating the level at which a child is working in the core subjects. In recent years this has become slightly more complex because of the changes following the Bew Review, but by and large parents are given a collection of single-digit scores in which 4 represents the expected level: higher numbers represent higher attainment; lower numbers represent lower attainment.

So far so simple. A table of results might look something like this:

Reading: 5
Writing: 5
Mathematics: 5
Grammar, punctuation & spelling: 3
Science: 4

So in this case, the child was clearly stronger than average in Reading, Writing and Maths, weaker in the grammar aspects, and in line with expectations in Science.

But this was “difficult to understand, especially for parents” so now the DfE proposes a new system. Instead of working to attain Level 4 in all areas, students will now be expected to score 100 points on a scaled score. Or to meet a national standard. Or in Writing to achieve one of 5 benchmarks. So the new charts could become considerably more complicated. Perhaps as nonsensical as this:


How a parent is meant to make any sense of these varied systems if they were unable to comprehend the digits 3, 4 and 5 is anyone’s guess! In this case, the child’s excellence in Reading is reduced to a number with no obvious sense of scale and a simple ‘Yes’ to indicate that they have met the minimum national standard, despite clearly achieving well in excess of this on the tests. Yet the Writing, which is only described based on a performance descriptor, suggests that it is a strength, when in fact it might be nowhere near as strong as Reading.

The old levels made little attempt at nuance. The proposed system attempts to imply it and thereby destroys it!

The consultation remains open on the teacher assessment gradings, for what it’s worth.

This post was inspired by a comment made on Twitter by @RevErasmus.

When is a level not a level?

And so the performance descriptors loom large!

The DfE has launched its consultation into the new performance descriptors for statutory teacher assessment at the end of Key Stages 1 and 2. And as feared, they are essentially just levels re-packaged. For all the talk about freedom to assess properly, the power of linking assessment directly to the curriculum, and other such bluster, we’ve ended up with simply a re-worked level system for KS2 Writing that includes an extra ‘level’ for us to “measure” against. And a whole host of levels to work with at KS1.

Parents didn’t understand levels, apparently. But now we’ve added an extra one they somehow will?

Levels were too vague to be useful for assessment. But now they’ve been re-written with APP-type labels they’ve magically become better?

APP was an unwieldy paperwork nightmare foisted on us by an all-controlling government, they said. And so now we have pages of descriptors instead.

The future is coming into view already: cue publishers and other organisations writing equivalent descriptors for each year group, or intervening phases, and across the subjects, and lo and behold we replace the complicated and confusing systems of levels 1 to 6 with a system of vague threshold titles, lengthy descriptions… and before you know it, a whole host of sub-grades to help “show progress” every fourth day.

After all, surely a mixed system of performance descriptors, scaled scores and whatever other categories they come up with will be confusing – it’s all but inevitable that we’ll end up with the lowest common denominator in too many cases.

Levels are dead. Long live levels!

Curriculum overview for Early Years

A couple of weeks ago I reported how, in discussions with the DfE, it had emerged that schools are required to publish their curriculum online for Reception classes as well as statutory-age classes. Since then someone got in touch to ask if they could adapt my curriculum jigsaws template for the Early Years.

I’ve taken a look at the EYFS framework (in which I am, by no means, an expert) and have attempted to create an equivalent page for the Early Years phase. As it is relatively stable, I have also included an editable Word version.

If it’s of use to you, please use it! If you do choose to use the editable version, please leave in place the footer that credits the work on the template to me.

Curriculum Overview for EYFS (PDF download)

Curriculum Overview for EYFS (editable Word document)

What that Ofsted clarification should have said!

There was much to welcome in the recent note of clarification from Ofsted, and may it be publicised widely. However, to my mind there is still much that wasn’t said that ought to be. Of course, whether the chiefs at Ofsted agree with me remains to be seen.

Here’s what I’d have liked to have seen:

Lessons & Planning
  • Ofsted should not expect to see lessons differentiated a set number of ways. Inspectors are interested in whether or not the work is an appropriate challenge for all pupils; the number of groups within this will depend on the circumstances. Not all lessons require differentiation.
  • Ofsted should not expect to see children writing learning objectives. While it is often important that objectives are shared with children, nothing is added by forcing the copying of them at length.
  • Ofsted should not expect to see written evidence of all lessons in exercise books. Some lessons do not require written evidence; writing in learning objectives, or explanations of what was undertaken in a lesson is an unnecessary waste of time.

Marking & Target-setting

  • Ofsted should not expect to see evidence of marking of every piece of work. It is for schools to decide appropriate policies for marking and feedback, and the focus should be on impact, rather than evidence for outside bodies.
  • Ofsted should not expect to see written marking in the books of children for whom reading is at a very early stage. If it cannot directly impact on a child’s learning then it is time and effort poorly-spent.
  • Ofsted should not expect children to be able to recite their targets in every subject. While it is important that children know how to improve their work, there are many ways in which this can be achieved.
  • Ofsted should not expect children to know their ‘level’ in any subject.
  • Ofsted should not expect schools to update tracking data every six weeks (or other fixed interval). Tracking is not the same as assessment, and while on-going assessment is essential for effective teaching, tracking is only an administrative tool for leaders. Tracking should be as frequent as needed for effective leadership of the school, and no more frequent.

Consistency
  • Ofsted should not expect to see identical consistency across all classrooms in a school. Departments and year teams quite rightly adapt school approaches to suit the needs of their subjects or pupils.
  • Ofsted should not expect pupils in measured groups to be identified in any way in the classroom. Students eligible for the Pupil Premium, or in local authority care should not be differentiated publicly.

It’s not an unreasonable list, is it? I will, naturally, waive all copyright demands should Ofsted wish to copy my ideas and add them to their document!

Tracking: a need-to-know operation

As schools we’ve become experts in tracking. A whole industry has grown up around it, and you can buy software to create a graph of just about anything. But as I’ve said many times before, there is a big difference between tracking and assessment. Assessment is at the very core of what schools should be about. Tracking, on the other hand, is simply a tool for keeping an eye on things.

A discussion on Twitter tonight – part of the #ukgovchat session – made me particularly aware of our addiction to tracking. Governors were, quite rightly, wondering what they needed to know about how schools are moving to new assessment systems, and whether they ought to insist on keeping levels for an overlapping period.

My contribution was to suggest that governors start from the point of what they actually need to know. Schools now produce far more data than any individual or group of governors could hope to get a grasp of. But the point is, that’s not their role. And here’s the thing – we can track all manner of things, but perhaps we need to take tracking back to a simple system that provides only what we need to know.

So who needs to know what?

Governors
For a typical governing body, there are only a few bits of useful information that can reasonably be monitored. Obviously the end-of-key-stage results are key. RaiseOnline does its thing here and provides more than enough detail for anybody. As for other year groups – the needs are limited. Governors need to have a strategic overview, so for the most part it should be sufficient for them to know what proportion of children are on-track to achieve expected and above-expected outcomes at the end of the Key Stage. This might include break-downs by groups (Pupil Premium, sex, etc.) but the big picture figures are limited to only two or three categories.

School leaders

For the most part, the same data as governors will be sufficient for school leaders. Where more detail is required – perhaps because a particular department, teacher, or group of pupils appears to be under-performing – then further detail may be required, but this should be provided by the teachers with responsibility for those children. For example, if leaders need to know which students are particularly being targeted for accelerated progress, then this should come from the teachers who know them, not from scanning lists of sub-levels. It is these practices that lead to the nonsense of “over-achieving” students then being targeted for further accelerated progress, rather than careful focus on the most needed/worthwhile groups.

Teachers
Teachers have almost no need for tracking. Their focus should be on assessment – relating progress directly to the learning and curriculum, not to broad categories and sub-levels. Inevitably there will be occasions where such assessment is used to inform tracking, but usually at this stage it loses any nuance and detail that would be useful to a class teacher.

Students
Arguably the most important recipients of assessment/tracking information, making it all the more shocking that I forgot these ‘stakeholders’ initially (see comments). Students have a keen interest in their progress, and should be supported to understand their attainment and targets. However, as I have said many times before, sub-levels did not achieve that. Students have a right to clarity about what they are doing well, and specific areas for improvement; that comes back to high quality assessment in the classroom. They may also be interested in their tracking data – knowing whether or not they’re on-track to meet expected (or higher) levels, but these should be secondary to the specifics of assessment.

Ofsted
Ofsted don’t need to know anything of in-school data. They get plenty of detail in Raise, and merely need to satisfy themselves that school leaders have a good grip on the progress of students in other year groups, so as to ensure that all students make appropriate progress. As the organisation itself told us this week: inspectors “should not expect performance- and pupil-tracking data to be presented in a particular format”.

Tracking ≠ Assessment

None of this implies that no further detail is required at all. An essential part of the teacher’s job is to ensure that children are making progress and that teaching is targeted to close gaps and raise attainment for all. But none of that is linked to tracking, and we’d all do well to remember that!

What purpose marking?


Marking – for the love of it?

I had a conversation with Mark Gilbranch (@mgilbranch) today about book scrutinies, and particularly considering the approach to monitoring marking in school. It brought to the fore, in my mind, some of the many issues with marking policies in schools, and particularly the problems with the ways in which they are both implemented and monitored – including by Ofsted!

When I commented over the weekend that I’d happily always plan and never mark, several people commented that they thought marking was an integral part of planning. I’d disagree. I’m not arguing that marking is pointless, but rather that it is not the act of marking work that helps me to know where to go next; it is merely the act of reviewing it. The actual marking should be creating dialogue with students, to allow them to make next steps without my direct presence.

And here lies the rub. Marking isn’t for the teacher, ever. And so we confuse ‘marking’ and ‘feedback’ at a cost. Some of the most important feedback that comes from reviewing work is not in the written comments, or even in the verbal feedback given to students. The most significant feedback from reviewing work should be to the teacher, indicating to him/her where the teaching ought to go next.

Critically, marking policies often overlook this vital element – even when marking policies are renamed feedback policies. The focus is always on the approaches for giving written comments (or verbal) to students. And while this is undoubtedly an important part of the work of feedback, it isn’t the most important.

Many policies now emphasise the need to give children opportunities to follow up on marking comments – and rightly so. But that isn’t always the most important part of the process either. Sometimes a piece of work shows that more drastic intervention is required, either individually or as part of a class. Sometimes the work is completed to such a high standard that a new challenge needs to be offered that can only be delivered in person, or as part of a group in the follow-up lesson. Sometimes the feedback a teacher garners from a selection of books is entirely unrelated to the learning objective of that lesson, but highlights an unconnected issue. In all these cases, a comment – in whatever colour pen the policy dictates – won’t achieve what is really needed. In these cases the feedback to the teacher, providing indications of where to take the teaching next, will be far more important than any cursory work a child could do in response to the mighty red pen.

But if policies don’t recognise this – and many don’t – then how much energy will be expended by both teachers and students on evidencing marking and responding to marking in order to demonstrate that the policy is being implemented, at the cost of real learning opportunities in the next lesson?

Re-naming marking policies as feedback policies isn’t enough. We need to be explicit in the aims of our marking & feedback policies (and, yes, they should have aims!) that feedback is provided both to teachers and students through the reviewing of work completed, and that the professional judgement of the teacher should guide the response, which may be individual comments, may be group interventions, or may be whole-class teaching to tackle a wider misconception. Not all of these can be evidenced in red pen and follow-ups, and nor should they be.

It means that when scrutinising marking – as Mark Gilbranch was talking about – we need to be explicit about what is being looked for. In some cases it may be appropriate highlighting; in others it will be specific red pen comments; in others it will be action taken by students. But most importantly, we should be looking for evidence that an intervention by the teacher, based on the review of the work completed, has had a formative and positive impact on learning. And that might not be so easy to spot – especially to an Ofsted inspector taking a quick flick through the books. We need to be clear in our policies about our approaches, and ready to demonstrate their effectiveness to all comers.

Marking is an essential part of the job… but it shouldn’t be so essential as to get in the way of teaching and learning.