Let’s Stop the Blame Game

Transition is a bit of a bugbear of mine. It is frequently problematic, recognised as such, and yet rarely tackled. Every few weeks on Twitter, someone somewhere will post a claim about the inaccuracy of Key Stage 2 test results. I – and often a few others – will step in and point out that there are possible reasons for discrepancies other than the implied cheating, and then someone else will claim that National Curriculum levels are the problem, and that they just aren’t the same in primary and secondary schools.

This is nonsense. Levels are broad, they have flaws, and they are only roughly estimated by test results, but they are unquestionably the same in both phases.

That said, it is clear that there is an issue with discrepancies between the levels awarded at the end of Key Stage 2 and those perceived by secondary teachers in the early weeks of Year 7. It’s quite possible, of course, that in some cases this is down to error, or to more unscrupulous methods, in the last weeks of primary education, but that’s quite a conclusion to leap to. So what other explanations might there be?

I spent the last seven years teaching Year 7 in a middle school, where the primary teachers shared the staffroom, corridors and playground duties with the KS3 staff. There is little room for cheating when it would be picked up in a neighbouring classroom. Yet still, each autumn, I would see work in books which did not always reflect the National Curriculum level on the spreadsheets – and the reasons were varied. For example,

  1. George is a case in point. He came to me with a Level 5 in Writing, and yet I saw no evidence of it in his written work. Pieces were often brief, poorly-constructed and lacked detail. The obvious conclusion would have been to presume that he had never been working at Level 5. Except I knew the person who had awarded it, and had every faith in their professional judgement.
    The reality was that George was coasting. He’d been driven to succeed in the KS2 assessments (he’s happy to pull the rabbit out of the hat on cue) and had taken his foot off the pedal. Of course, a conversation with his Year 6 teacher soon put me straight, and in turn we were able to put him back on track. I was able to show him his KS2 Writing test paper and explain that I expected similar quality from him from then on.
  2. In my first year at a new school, I had a girl in my top maths set who seemed to struggle. The sets had been organised by raw test score and she had scored well in KS2, yet she was struggling to keep up with others in her set. A quick check of the teacher assessment data showed that her secure Level 5 on the test did not match the teacher assessment of a high Level 4. She’d got lucky in May, and was paying for it now by being in the wrong set. She moved, made good progress, and was working comfortably at the top of Level 5 by the time she moved to Y8. Her test score had been “wrong”; the teacher assessment was right.
  3. Harry came to me marked as a Level 3 reader, yet he seemed much stronger. By the second week of term, I was already querying his low level. In fact, it was a simple case of an error made when moving rows about in Excel. The error was entirely mine, but the inaccuracy could easily have been blamed elsewhere, and then ignored.

But these are anomalous examples, one might argue. The reality is that kids are boosted to reach the levels of the tests and then fall back because they do no work after them until September. And there may be some truth in that. The problem, though, comes when we presume that forgotten knowledge is the same as knowledge never learned. Taking benchmarking assessments at the start of Year 7 as ‘gospel’ runs a great risk of setting too low a starting point for many children.

Aside from the massive social and cultural impacts of starting secondary school (and these are surely undeniable), the simple use of a test in the early weeks of a new term, let alone a new school, is likely to be an unreliable marker of future success. Sure, it will help to bolster the apparent progress figures, but that isn’t the same as achieving progress.

If I were asked to sit an A-level French exam this week, without warning, I would probably struggle. I may do no better than the average newcomer to A-level French. But that doesn’t mean that my two years of learning have been eradicated. Some of it will be lost and will need re-teaching, but a great deal more will come back with prompting. Just because the knowledge isn’t there at the starting gun doesn’t mean it all needs to be taught from scratch.

September assessments are simply not that accurate, it seems. I worked closely with the data manager at one of our receiving high schools and looked at their teacher assessments for September and February of the first academic year they had our students. In the September assessments, more than half of the children were assessed at or above the level of our final assessment – but nearly half were assessed at a sub-level or more below it. The variation was on average in excess of a whole level (6.2 APS points). It made it seem as though our data could have been plucked at random. One child – Sally – had been awarded 6C in Reading by us, and was levelled at 3B by the high school; similarly, Tom, whom I’d awarded 4C, had been assessed at 5A. The discrepancies were plentiful.
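
(A quick note on the arithmetic, for readers outside the system, assuming the standard National Curriculum points scale: a secure Level 4 (4B) is worth 27 APS points, each sub-level is worth 2 points, and each whole level 6. On that scale, Sally’s drop from 6C to 3B is 37 − 21 = 16 points, more than two and a half whole levels, while Tom’s jump from 4C to 5A is 25 to 35, 10 points in the other direction. That gives a sense of just how large an average variation of 6.2 points is.)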

But the February data told a very different story. After six months in the school, the average discrepancy between our June assessments and their February assessments was just 3 APS points. Only two children had been assessed as working at a lower level than their final level with us (one of whom, oddly, had been given identical levels in June and September!). The remainder had made an average of just over one sub-level of progress – much as you might expect part-way through a successful year. The two extreme cases I mentioned above had been clear outliers, and both were soon back to within a sub-level of the June assessments we had made.

The September data had – to me, who knew the children well – seemed erratic and confusing. To the secondary teachers, who had known the children for no more than an hour or two and were forced to base their assessments on single examples of work, they were virtually meaningless. The February results looked recognisable to me, and more accurate to them.

I could, then, try to argue that it is secondary teachers who are wilfully under-assessing in September to make their results look good. The reality, I rather suspect, is very different. Teachers were forced to assess on limited knowledge, and so, inevitably, accuracy suffered. Yet it is quite likely that some of those same teachers will have made comments about the reliability of primary school data on seeing their September figures.

The implications could, of course, be significant. Take Sally, the child who had been awarded 6C by us, but mid-Level 3 by the high school. Had they taken that baseline as accurate, regardless of our information, they might have expected her to struggle at GCSE. Their February assessment of 6B showed that this clearly wasn’t to be the case, but the risks were there all the same. Setting decisions based on rushed assessments in September could have been catastrophic for her. Equally, there might have been many more students with smaller errors, mis-assessed all the same, whose progress was then held back by that initial low expectation.

It’s interesting to note that whenever the complaints arise on Twitter and elsewhere, it is always the children who seem to have “over-performed” at KS2 who are mentioned; how many like Tom are simply forgotten because the data seems more favourable? The blame game might be less dangerous if we always presumed that the higher result was the more accurate, but I suspect that is rarely what happens.

Of course, what makes the system different within my own school is that we talk to each other across the key stage boundary. If I see an anomalous result, or a child who appears not to be working at the expected level, then I would think it only normal to speak to the previous class teacher. If only the same happened more frequently between schools.

The challenges of transition are manifold. But let’s not just presume that the difficulties are all caused by those on the other side of the divide; reality is not quite like that.


8 thoughts on “Let’s Stop the Blame Game”

  1. teachingbattleground 1 November 2013 at 6:37 pm

    Reblogged this on The Echo Chamber.

  2. rgreenthumb 1 November 2013 at 6:53 pm

    I’m a bit confused. What knowledge are you referring to that secondary teachers lack in September? The marking criteria remain entirely the same, though we do admittedly get to know the pupils better.

    Do you therefore mean that levelling criteria should be based on what is good for an individual pupil, rather than the baseline criteria? For example, do you mean that because I know Johnny usually struggles with English but, for his usual standard he’s done something really good, then he should get a L4?

    • Michael Tidd 1 November 2013 at 6:58 pm

      rgreenthumb – what I have set out relates exactly to the problem you raise. The problem with September baselines is that they are necessarily based on limited evidence because of the short time available.
      If Johnny struggles with English and has been awarded L3 in KS2, would one good piece of work in the first week of secondary school suggest that he should get a L4? And vice versa?
      The criteria should indeed be the same, but the evidence base is necessarily not. Too often each side presumes that only their evidence base is valid.

        • rgreenthumb 1 November 2013 at 7:16 pm

        I think this is exactly why the data doesn’t marry up and the blame game gets played. With GCSEs very little is cumulative (and now that controlled assessments/coursework are going this will be even more the case) and so we expect pupils to be able to perform to their best standard in one-off assessments. We can’t assess and level pupils by saying “he uses paragraphs in this piece and great vocabulary in that piece” because we’d be training them to fail at GCSE, where they won’t be assessed this way. I’m not saying this is right (it’s not) but it’s the way of the exam system.

        I think one of the problems with the transition and discrepancy in data is therefore a change in expectation. Not in terms of what you have to do to get a L4, but in terms of when you have to do it.

        Of course, this does all boil down to the issue of communication between Primary and Secondary. If it weren’t for Twitter (and my boyfriend being a Primary teacher, but mostly Twitter 😉) then I wouldn’t know this difference existed.

        • Michael Tidd 1 November 2013 at 7:28 pm

          I agree that the differences between one-off and on-going assessments are significant – although notably that’s only a factor in Writing, since the same one-off rule applies to Maths and Reading at KS2.
          The APP assessments should mean that children are consistently demonstrating all those features, but it’s not possible to do all of them all the time (e.g. speech marks in a speech-free piece of text).
          I agree, though, that there are some issues here. I found particularly with Reading that the ability to discuss ideas in a group is very different from being able to articulate the same ideas clearly in writing, which is more expected at KS3/4, but that doesn’t equate to gaming or cheating.
          Which is why, as you rightly say, the communication between sectors can make a big difference. Or cross-phase relationships, although that may be a step too far for some😉

  3. nancy 1 November 2013 at 9:31 pm

    An interesting interpretation – I see/hear a lot of these discussions re: moving to the next class, and between KS1 and 2. There is a danger, I think, of Y6 teachers throttling back in the summer term, which should be ameliorated somewhat by the new arrangements for KS2 writing.
    My own interpretation is that we teachers find it almost impossible to assess children objectively – especially in writing. Of course we find it difficult! We know these children. They are more than data to us.
    I think this is why we have the state of play we do at present with the teaching of writing. In order to free ourselves from subjectivity we have plumped for a ‘checklist’ approach, where we tick off the elements of good writing that we see. And yet, I believe, this is equally subjective – who is to say WHAT a higher/lower level connective is? The ‘levelness’ of writing, and the judging of its quality, can also be a subjective exercise.
    And you are right when you say that context is everything. Expressing an opinion in a group reading session is very different to writing it down, and children need time to get used to the transition…in addition to the transition between schools.
    I don’t like this idea that children make linear progress. It’s not subtle enough. They need time to settle, and we need time to get to know them.

  4. Tim Taylor 21 November 2013 at 4:03 pm

    Reblogged this on Primary Blogging.

  5. […] A significant issue with the National Curriculum tests and KS2 teacher assessments is that they create a divide between primary and secondary professionals at the exact point that they need to be working together most for the benefit of children. Many primary teachers believe the data is not only not used by their secondary counterparts but actively replaced by other information gleaned from other tests. Secondary teachers, meanwhile, feel that the data is unreliable due to inflation resulting from the high-stakes nature of the results. Both sides of this argument are explored really well here. […]
