
A foolish consistency – the Primary School disease?

Let me start by saying that I think consistency is vital in schools. Pupils need to know that the behaviour policy will apply equally to everyone, and be applied equally by everyone. If a school has a uniform, then rules about it should be fairly and consistently applied to all. Children in Year 4 are entitled to just as good teaching as children in Year 6.

But there are limits. And it seems that too many primary headteachers cross them, to my mind. Not all, of course, but too many. On Twitter today a perfect example was shared by Rosie Watson (@Trundling17):

There is a headteacher – or senior leadership team – somewhere that thought it was a good use of its time to come up with a list of 30 “must haves”, including how the classroom door must be signed and a requirement that pegs be labelled in week 1.

I wasn’t even that surprised when I saw it, because I’ve known far too many schools get caught up in such nonsense. Display policies can sometimes be the most closely read policies in a primary school, and I’ve known them include things like:

  • drapes must be used to soften the edges of displays
  • all work should be double-mounted
  • topic boards must be changed at least every 2 weeks
  • all classrooms must display a hundred square
  • all staples must point in the same direction

The point is that none of these things is necessarily a bad thing. Indeed, the one about staples appeals to my slightly frenzied mind. But to dictate it to a staff of highly-trained professionals? To expect teachers to spend their time and energy on such things rather than planning and preparing for learning strikes me as crazy.

What surprised me most about Rosie’s post, though, was not the content – I fear that’s all too common – but the fact that some headteachers then tried to defend such approaches. The claims were that it was a useful reminder, or helpful for new teachers.

I have two issues with this. Firstly, the list is very clearly presented as a list of expectations to be met and judged against – not just helpful reminders. Secondly, these are not all good uses of someone’s time. If they were recommendations that I was free to ignore (and believe me, I would ignore a good number of them), then that’s fine, but that’s clearly not the case here.

If a school is insistent that its classroom doors have name labels in a certain style, then it should organise this administrative task, not simply demand it of teachers. Teachers’ time should be spent on things that directly impact teaching and learning, and precious few of these do.

Sadly, such “non-negotiables” seem to have become something of a norm in schools, with headteachers assuming that the way they once ran their own classrooms is the way everyone else should now run theirs. But it’s madness.

Headteachers are well aware of the strategic/operational divide between governors and heads, but they should consider a similar separation when it comes to their involvement in classrooms. Absolutely it is the place of the headteacher to lead on matters of curriculum and learning, and even to set the broad principles and expectations for the “learning environment” (oh, how I hate that term!), but that’s not the same as specifying the date by which pegs must be labelled.

The only other argument that was tentatively put forward was for schools which are in “a category”. Now here, I have some sympathy with heads who take on a school where things are a mess. Sometimes a clear list of expectations helps to bring things out of a pit – but that clearly isn’t the case here. If classrooms are untidy, it’s reasonable to expect that they be tidied; if disorganised cloakrooms are delaying learning, then it’s reasonable to expect something to be done about it. But no school was ever put in Special Measures because boards were backed with ‘inappropriate’ colours, or because a Year 6 classroom didn’t have a carpet area.

And if a school is in measures, then it probably shouldn’t be wasting its attention on how the classroom door is labelled! Both the leadership team and the teachers more widely should be focusing on the things that make the most difference to teaching and learning. Of course expectations should be raised, but that doesn’t need to be done through a foolish consistency.

Headteachers and Senior Leadership teams: you are busy enough – don’t sweat the small stuff, and certainly don’t make others sweat it for you!

(P.S. I’m a real rebel: I don’t label pegs at all!)


For an indication of some of the mad things that are dictated in primary schools, take a look at the Storify collated in response to my tweet.

Some thoughts on KS2 Progress

Caveats first: these conclusions, such as they are, are drawn from a small sample of a little over 50 schools. That sample of schools isn’t representative: indeed, it has slightly higher attainment than the national picture, both in terms of KS2 outcomes, and in KS1 starting points. However, with over 2000 pupils’ data, it shows some interesting initial patterns – particularly when comparing the three subject areas.

Firstly, on Maths – the least controversial of the three subjects. It seems that – in this sample – pupils who achieved Level 2c at KS1 had approximately a 40% chance of reaching the new expected standard (i.e. a scaled score of 100+). That leaps to around 66% for those achieving Level 2b at KS1 (i.e. just short of the national average).

[Chart: Maths – proportion of pupils reaching the expected standard, by KS1 level]

The orange bar shows the average of this sample, which is slightly higher than the national average of 70%.
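
These figures are simple conversion rates: for each KS1 level, the share of pupils who went on to score 100 or more at KS2. A minimal sketch of that calculation is below, using invented rows and hypothetical column names rather than the collected data:

```python
import pandas as pd

# Invented example rows; the column names ("ks1_maths_level", "ks2_scaled")
# are hypothetical stand-ins for however the collected data is organised.
pupils = pd.DataFrame({
    "ks1_maths_level": ["2c", "2c", "2c", "2b", "2b", "2b", "3"],
    "ks2_scaled":      [ 96,  101,   99,   98,  104,  108, 112],
})

# For each KS1 level, the proportion of pupils reaching a scaled score of 100+.
reached_standard = pupils["ks2_scaled"] >= 100
conversion = reached_standard.groupby(pupils["ks1_maths_level"]).mean()

print(conversion)   # e.g. 2b: 0.67, 2c: 0.33, 3: 1.0 for these made-up rows
```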

It’s important to note, though, that progress measures will not be based on subject levels, but on the combined APS score at Key Stage 1. The graph for these comparisons follows a similar pattern, as you’d expect:

[Chart: Maths – proportion of pupils reaching the expected standard, by KS1 APS]

Where data was available for fewer than 10 pupils at a given APS score, that score has been omitted.

There is an interesting step here between pupils in this sample with an APS of 13 (or less), who have a 40% or lower chance of reaching the expected standard, and those scoring 13.5 or more, who have a greater than 50% chance of achieving it. (The dip at 12.5 APS points relates to pupils who scored Level 2s in Maths and one English subject, but a Level 1 in the other, highlighting the importance of good literacy for achievement in KS2 Maths.)
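
For anyone wanting to check that arithmetic, the sketch below shows roughly how a combined KS1 APS can be derived from subject levels. It assumes the standard KS1 point scores (Level 1 = 9, 2c = 13, 2b = 15, 2a = 17, 3 = 21) and the approach of averaging the Reading and Writing points before combining with Maths, which is consistent with the dips described in this post – do check the DfE’s technical guidance before relying on it:

```python
# Assumed KS1 point scores; these are the standard values, but check the
# DfE technical guidance before relying on them.
POINTS = {"W": 3, "1": 9, "2c": 13, "2b": 15, "2a": 17, "3": 21}

def ks1_aps(reading: str, writing: str, maths: str) -> float:
    """Combined KS1 APS: Maths points averaged with the mean of the two English subjects."""
    english = (POINTS[reading] + POINTS[writing]) / 2
    return (POINTS[maths] + english) / 2

# The dip at 12.5 described above: Level 2s in Maths and one English subject,
# but a Level 1 in the other.
print(ks1_aps("2b", "1", "2c"))   # 12.5

# The 18.5 group mentioned later: 2a/2b in the English subjects, Level 3 in Maths.
print(ks1_aps("2a", "2b", "3"))   # 18.5
```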

For Reading, the graphs look broadly similar in shape.

[Chart: Reading – proportion of pupils reaching the expected standard, by KS1 level]

The blue bar shows the average of this sample (67%), which is slightly higher than the national average of 66%.

Interestingly, pupils who scored Level 2c here still have only a 40% chance of meeting the expected standard, but those achieving Level 2b have a lower chance of reaching it than in Maths (58%, compared with 66% for Maths).

When looking at the APS starting points, there is something of a plateau at the right-hand end of the graph. The numbers of pupils involved here are relatively small (as few as 31 pupils in some columns). Interestingly, the dip at 18.5 APS points represents the smallest sample group shown, made up of pupils who scored 2a/2b in the two English subjects, but a Level 3 in Maths at KS1. This will be of comfort to teachers who have been concerned about the negative effect of such patterns on progress measures: it seems likely that we will still be comparing like with like in this respect.

[Chart: Reading – proportion of pupils reaching the expected standard, by KS1 APS]

It is in Writing that the differences become more notable – perhaps an artefact of the unusual use of Teacher Assessment to measure progress. Whereas just 40% of pupils attaining Level 2c went on to achieve the new expected standard in Reading or Maths, some 50% managed the conversion in Writing – and this against a backdrop of teachers concerned that the expected standard in English was too high. Similarly, over three-quarters of those achieving Level 2b managed to reach the standard (cf. 58% in Reading, 66% in Maths).

[Chart: Writing – proportion of pupils reaching the expected standard, by KS1 level]

In contrast to the other subjects, attainment in this sample appears notably lower in Writing than the national average (70%, compared to 74% nationally).

With the APS comparisons, there are again slight dips at certain APS points, including 18.5 and 19.5 points. In the latter case, this reflects the group of pupils who achieved Level 3s in both Reading and Maths, but only a 2b in Writing at KS1, suggesting again that the progress measure does a good job of separating out different abilities, even using combined APS scores.

[Chart: Writing – proportion of pupils reaching the expected standard, by KS1 APS]

Of course, this is all of interest (if you’re interested in such things), but the real progress measures will be based on the average scaled score of pupils with each KS1 APS score. I’d really like to collect some more data to try to get a more reliable estimate of those figures, so if you would be willing to contribute your school’s KS1 and KS2 data, please see my previous blog here.
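
For what it’s worth, the rough shape of such an estimate would be: group pupils by KS1 APS, take the mean KS2 scaled score for each group, and treat a pupil’s progress as the gap between their own score and their group’s average. The sketch below illustrates that general value-added idea with invented data and hypothetical column names – it is not the DfE’s published methodology:

```python
import pandas as pd

# Invented example data; "ks1_aps" and "ks2_scaled" are hypothetical column names.
pupils = pd.DataFrame({
    "ks1_aps":    [12.5, 12.5, 15.0, 15.0, 15.0, 18.5],
    "ks2_scaled": [  96,  101,  100,  104,  108,  112],
})

# The "expected" score for each pupil: the average KS2 scaled score of pupils
# with the same KS1 APS.
expected = pupils.groupby("ks1_aps")["ks2_scaled"].transform("mean")

# A rough progress score: the pupil's own result minus that group average.
pupils["progress"] = pupils["ks2_scaled"] - expected

print(pupils)
```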


Spread of data

Following a request in the comments below, I’ve also attached a table showing the proportions of pupils achieving each scaled score in the two tests. This is now based on around 2,800–2,900 pupils, and again it’s important to note that this is not a representative sample.

[Table: proportion of pupils achieving each scaled score in the two tests]

A few words on the 65% floor standard

There’s been much discussion about this in the last few days, so I thought I’d summarise a few thoughts.

Firstly, many people seem to think that the government will be forced to review the use of a 65% floor standard in light of the fact that only 53% of pupils nationally met the combined requirements. In fact, I’d argue the opposite: the fact that so few schools exceed the attainment element of the floor standard is no bad thing. Indeed, I’d prefer it if no such attainment element existed.

There will be schools for which reaching 65% combined Reading, Writing & Maths attainment did not require an inordinate amount of work – and doesn’t necessarily represent great progress. Why should those schools escape further scrutiny just because they had well-prepared intakes? Of course, there will be others who have met the standard through outstanding teaching and learning… but they will have great progress measures too. The 65% threshold is inherently unfair on those schools working with the most challenging intakes and serves no good purpose.

That’s why I welcomed the new progress measures. Yes, it’s technical, and yes, it’s annoying that we won’t have the figures for another couple of months, but it is a fairer representation of how well a school has done in educating its pupils – regardless of their prior attainment.

That said, there will be schools fretting about their low combined Reading, Writing & Maths scores. I carried out a survey immediately after results were released, and so far 548 schools have responded, sharing their combined RWM scores. From that (entirely unscientific self-selecting) group, just 28% of schools had reached the 65% attainment threshold. And the spread of results is quite broad – including schools at both 0% and 100%.

The graph below shows the spread of results, with each colour showing a band containing one-fifth of the schools in the survey. Half of schools fell between 44% and 66%.

[Chart: spread of combined RWM attainment across surveyed schools, split into quintile bands]
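
Reproducing that kind of banding from survey responses is straightforward: sort the school-level percentages and find the cut points that split them into five equal-sized groups. A small sketch with made-up figures (the real survey data isn’t reproduced here):

```python
import numpy as np

# Made-up figures: each value is one school's combined RWM attainment (%).
school_percentages = np.array([0, 35, 44, 48, 52, 58, 61, 66, 72, 80, 88, 100])

# Cut points that split the schools into five equal-sized bands (quintiles).
quintile_cuts = np.percentile(school_percentages, [20, 40, 60, 80])

# The middle 50% of schools, i.e. the "half of schools fell between..." range.
middle_half = np.percentile(school_percentages, [25, 75])

print("Quintile boundaries:", quintile_cuts)
print("Middle 50% of schools:", middle_half)
```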

As I said on the day the results were published – for a huge number of schools, the progress measure will become all important this year. And for that, we just have to wait.

Edit:

Since posting, a few people have quite rightly raised the issue of junior/middle schools, who have far less control over the KS1 judgements (and indeed in middle schools, don’t even have control over the whole Key Stage). There are significant issues here about the comparability of KS1 data between infant/first schools and through primary schools (although not necessarily with the obvious conclusions). I do think that it’s a real problem that needs addressing: but I don’t think that the attainment floor standard does anything to address it, so it’s a separate – albeit important – issue.

Am I overstretching it…?

What are people’s thoughts?

Everyone wants to know about progress measures, but we won’t have the national data until September. We can’t work it out in advance… but is it worth trying to estimate?

I collected data on Tuesday night about the SATs results, and my sample was within 1 percentage point of the final national figures, which wasn’t bad. However, this would be a much more significant project.

To get anything close to an estimate of national progress measures, we would need a substantial number of schools to share their school’s data at pupil level. It would mean schools sharing their KS1 and scaled score results for every pupil – anonymised of course, but detailed school data all the same.

My thinking at this stage is that I’d initially only share any findings with the schools that were able to contribute. It would be a small sample, but it might give us a very rough idea. Very rough.

Would it be useful… and do people think they would be able to contribute?

Consistency in Teacher Assessment?

I posted a survey with 10 hypothetical – but not uncommon – situations in which writing might take place in a classroom, and asked teachers to say whether or not each would be permitted under the current guidance on “independence” when it comes to statutory assessment. It seems that, mostly, we can’t agree:

[Chart: survey responses for each of the ten scenarios (around 1,000 replies)]

Collecting KS2 data on Teacher Assessment

Having had over 100 schools respond to my plea to share data from KS1 Scaled Score tests, the next big issue on the horizon is the submission of Teacher Assessment data at the end of June.

In the hope of providing some sort of indication of a wider picture, I am now asking schools with Year 6 cohorts to share their data for Teacher Assessment this year, as well as comparison data for 2015. As with all the previous data collections, it won’t be conclusive, or even slightly reliable… but it will be something other than the vacuum that currently exists.

So, if you have a Year 6 cohort, please do share your Teacher Assessment judgements via the survey below:

 

Some initial thoughts on KS1 data


I started collecting data from test scores and teacher assessment judgements earlier this week. So far, around 80 schools (3500+ pupils) have shared their KS1 test scaled scores, and nearly 60 (nearly 2500 pupils) have shared their Teacher Assessment judgements (in the main three bands). So, what does it show?

Scaled Score Data

Despite – or perhaps because of – the concerns about the difficulty of the Reading tests, it is Reading which has the highest “pass rate”, with 65.5% of pupils achieving 100 or greater. (Similarly, the median rate for schools was just over 65%.)

Maths was not far behind, with 64.2% of pupils achieving 100 or greater, although the median rate for schools was slightly higher, again at around 65%. The results for GPS were lower (at around 57%), but this was based on a far smaller sample of schools, as many did not use the tests.

The spread of results can be seen, approximately, from the proportion of schools falling within each band in the table below.

[Table: proportion of schools in each 10-point band of pupils scoring 100+, by subject]

For example, just 2% of schools had more than 90% of children achieving a scaled score of 100+ in Reading, while 43% of schools had between 60% and 69% of children scoring 100+.

Notably, the range in Maths results is slightly broader than in Reading.
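
The banding itself is nothing more sophisticated than counting how many schools fall into each 10-point range. A quick sketch with invented figures rather than the collected data:

```python
import numpy as np

# Invented figures: each value is the percentage of a school's pupils scoring
# 100+ in Reading. The collected data itself is not reproduced here.
school_pass_rates = np.array([42, 55, 61, 63, 65, 67, 68, 71, 74, 82, 93])

# Count schools in each 10-point band (0-9, 10-19, ..., 90-100) and express
# each count as a percentage of the sample.
band_edges = np.arange(0, 101, 10)
counts, _ = np.histogram(school_pass_rates, bins=band_edges)
shares = 100 * counts / len(school_pass_rates)

for low, share in zip(band_edges[:-1], shares):
    print(f"{low}-{low + 9}%: {share:.0f}% of schools")
```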

Teacher Assessment Judgements

The order of success in the subjects remains the same in the collection of Teacher Assessment judgements, with Reading having the highest proportion of pupils reaching the expected standard or greater, closely followed by Maths – and Writing trailing some way behind. However, perhaps the most surprising difference (or perhaps not) is the fact that the proportions are all approximately 10 percentage points higher than in the scaled score data.

According to teachers’ own assessment judgements, some 74% of pupils are reaching the expected standard or above in Reading, 73% in Maths, and around 68% in Writing.

Similarly, the spread of teacher assessment judgements shows more schools achieving higher proportions of children at the expected level – and includes one or two small schools achieving 100% at expected or above.

[Table: proportion of schools in each band of pupils reaching the expected standard, by Teacher Assessment judgement]

There are notable shifts at the bottom end. For example, 16% of schools had fewer than half of their children achieve 100+ in the Maths test, whereas only 4% of schools had fewer than half of their children reaching the expected standard in the Teacher Assessment judgements.

It’s important to note that the data is not from the same schools, so any such comparisons are very unlikely to be accurate, but it does raise some interesting questions.

Greater Depth

Have I said we’re dealing with a small sample, etc? Just checking.

But of that small sample, the proportions of pupils being judged as “Working at Greater Depth within the Expected Standard” are:

Reading: 17%
Maths: 16%
Writing: 11%

More Data

Obviously there are many flaws with collecting data in this way, but it is of some interest while we await the national data. If you have a Year 2 cohort, please do consider sharing your data anonymously via the two forms below:

Collect Key Stage 1 data

By popular request, I am collecting data about both test scores and teacher assessment judgements for Key Stage 1. The intention is to provide colleagues with some very approximate indicative information about the spread of results in other schools.

As with previous exercises like this, it is important to warn that there is no real validity to this data. It isn’t a random sample of schools, it won’t be representative, it is easily corrupted, mistakes will often slip through… etc., etc.
But in the absence of anything else, please do share your data.

I am collecting data in two forms. Firstly, test score data using scaled scores. These can be entered into the spreadsheet as with previous sample test score data. Please enter only scaled scores for your children. The spreadsheet can be accessed (without a password, etc.) at this link:

Share Key Stage 1 Test Scaled Score Data

I am also collecting schools’ data on Teacher Assessment judgements. To simplify this, I am collecting only percentages of children working at each of the three main bands in Key Stage 1. I am not collecting P-scales, pre-Key Stage or other data. For this, I have put together a Google Form which can be completed here:

Share Key Stage 1 Teacher Assessment Data

Please do read the instructions carefully on each form (you’d be amazed at how many foolish errors have been submitted previously through not doing so!).

Our Ofsted experience

I’m reliably assured that mentioning Ofsted is bound to get a spike in visits to one’s blog page, so let’s see.

About a month ago, we were thrilled to receive that lunchtime phone call that meant the wait was finally over. As any school with a ‘Requires Improvement’ label (or worse) will know, although perhaps never quite ‘welcome’, there comes a point where the Ofsted call is desired, if only to end the waiting. We wanted to get rid of the label, and so this was our chance.

We’d been “due” for a few months, but knew that it could be as late as the summer, so in the end the second week after Easter didn’t seem so bad (particularly as it left us with a long weekend in the aftermath).

So how did it go? Well, for those of you interested in grades, I am now the deputy headteacher of an officially GOOD school. It’s funny how that matters. Six weeks ago, I was just deputy of an unofficially good one.

But those of you still awaiting the call will be more interested in the process than the outcome, so let me start by saying that having spent the past 18 months building up my collection of “But Sean Harford says…” comments, I didn’t have to call upon it once. The team who visited us were exemplary in their execution of the process according to the new guidance and myth-busting in the handbook.

In the conversation on the day of the phone call, we covered practicalities, and provided some additional details to the lead inspector: timetables, a copy of our latest SEF (4 pages of brief notes – not War and Peace) and the like. And then we set about preparing. We had only just that week been collating teachers’ judgements of children’s current attainment into a new MIS, so it was a good opportunity for us to find out how it worked in practice!

We don’t keep reams of data, we don’t use “points of progress”, and we’ve gone to some length to avoid recreating levels. All for good reasons, but always aware that a ‘rogue’ team could find it hard to make snap judgements, and so make bad ones. The data we provided to the team was simple: the proportion of children in each year group whom teachers considered to be “on track” to meet, or exceed, end-of-Key-Stage expectations. We compared some key groups (gender, Pupil Premium, SEN) and that’s it. It could all fit on a piece of A4. So when it came to the inspection itself, there was a risk.

Day One

It may be a cliché to say it, but the inspection was definitely done with us rather than to us. The first day included joint observations and feedback with the headteacher, as well as separate observations (we had a 3-person team). An inspector met with the SENCo, and the lead also met with the English and Maths subject leaders (the former of whom happens to be me!) and our EYFS leader.

The main question we were asked as subject leaders was entirely sensible and reasonable: what had we done to improve our subjects in the school? I think we both managed to answer the “why?” and “what impact?” in our responses, so further detail wasn’t sought there, but it was clear that impact was key.

Book Scrutiny

The afternoon of the first day was given over to book scrutiny. We provided books from across the ability range in the core subjects, as well as ‘theme’ books for each team. The scrutiny focused most closely on Years 2, 4 and 6, which fits both with the way we structure our classes and our curriculum and assessment approach. Alongside books, we provided print-outs for some children that showed our judgements on our internal tracking system. I’m not sure whether the focus was set out as clearly as this, but my perception of the scrutiny (with which both my headteacher and I were involved) was that the team were looking at:

  • Was the work of an appropriate standard for the age of the children? (including content, presentation, etc.)
  • Was there marking that was in line with the school’s policy? (one inspector described our marking – positively – as “no frills”, which I quite liked)
  • Was there evidence that children were making progress at an appropriate rate for their starting points?

They asked for the feedback policy in advance, and referred to it briefly, but the focus on marking was mainly on checking that practice matched what we said we did, and that where marking was used, it helped lead to progress. Some pages in books were unmarked. Some comments were brief. Not all had direct responses – but there was evidence that feedback was supporting progression.

Being involved in the process meant that we could provide context (‘Yes, this piece does look amazing but was quite heavily structured; here’s the independent follow-up’; ‘Yes, there is a heavy focus on number, but that’s how our curriculum is deliberately structured’, etc.). But it also meant a lot of awkward watching and wondering – particularly when one inspector was looking closely at the books from my class!

The meeting at the end of the first day was a reasoned wander through the framework to identify where judgements were heading and what additional information might be needed. We were aware of one lower-attaining cohort, which was identified, so offered some further evidence from their peers to support our judgements. There was more teaching to be seen to complete the evidence needed for that. And there was one important question about assessment.

Assessment without levels

I had expected it. Assessment is so much more difficult for inspectors to keep on top of in the new world, and so I fully expected to have to explain things in more detail than in the past. But I was also slightly fearful of how it might be received. I needn’t have been this time. The question was perfectly sensible: our key metric is about children being “on track”, so how do we ensure that those who are not on-track (and not even close) are also making good progress?

That’s a good question; indeed it might even have been remiss not to have asked it! We were happily able to provide examples of books for specific children, along with our assessments recorded in our tracker to show exactly what they were able to do now that they couldn’t do at the end of last academic year. It gave a good opportunity to show how we focus classroom assessment on what children can and can’t do and adapt our teaching accordingly; far more important than the big picture figures.

Day Two

On the second day I observed a teacher alongside the lead inspector, and was again pleased by the experience. Like all lessons, not everything went perfectly to plan, but when I reported my thoughts afterwards, we had a sensible discussion about the intentions of the lesson and what had been achieved, recognising that the deviation from the initial plan was good and proper in the circumstances. There was no sense of inspectors trying to catch anyone out.

Many of the other activities were as you’d expect: conversations with children and listening to readers (neither of which we were involved in, but I presume the children acquitted themselves well); a meeting with a group of governors (which I also wasn’t involved in, but they seemed to acquit themselves well too); a conversation about SMSC and British Values (with a brief tour to look at examples of evidence around the school); watching assembly; and so on.

Then, on the afternoon of day two we sat with the inspection team as they went through their deliberation about the final judgements. In some ways it’s both fascinating and torturous to be a witness in the process – but surely better than the alternative of not being!

As with any good outcome, we got the result we felt we were due (and deserved), and areas for feedback that aligned with what was already identified on our development plan for the forthcoming year. The feedback was constructive, formative, and didn’t attempt to solve problems that didn’t exist.

And then we went to the pub!

Year 6 Sample Test data – another update

As people continue to share their data via the spreadsheets, I thought it was about time to do one final update of summarised data based on the most recent shared results.

As we’re so close to the tests now, I have looked only at the data collected from tests taken since February half-term. That keeps it as a good sample size of 3500-4000 pupils in each case, while discounting the much older results. I don’t have enough data yet from Summer 1 alone to make much of it.

Key Stage 2 DfE Sample tests

Subject                           | Mean Score | Median Score | Interquartile Range
Reading                           | 30 marks   | 31 marks     | 24–37 marks
Grammar, Punctuation & Spelling   | 36 marks   | 37 marks     | 28–46 marks
Mathematics                       | 64 marks   | 65 marks     | 46–85 marks

The shifts are not that significant in Reading and GPS (up by a mark each for the averages), but the Maths average seems to have risen quite a bit (it was 59 marks just a month ago).
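
If you want to compile the same three statistics from your own sample test marks, they are quick to calculate – a small sketch with made-up marks rather than the shared data:

```python
import numpy as np

# Made-up raw marks from a set of sample Maths papers; substitute your own.
marks = np.array([38, 45, 52, 59, 64, 66, 71, 78, 85, 92])

mean_mark = marks.mean()
median_mark = np.median(marks)
# The interquartile range, reported as the 25th and 75th percentile marks.
q1, q3 = np.percentile(marks, [25, 75])

print(f"Mean: {mean_mark:.0f}   Median: {median_mark:.0f}   IQR: {q1:.0f}-{q3:.0f} marks")
```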

If you’ve recently used the tests in your school, please do add your data to the collection by entering into the spreadsheet here: Year 6 Sample Test data collection

You can access a spreadsheet to present your own school’s results in a comparison table and graphs alongside all of the national data collected so far (since January). Access the comparison spreadsheet here: Key Stage 2 Comparisons