Monthly Archives: November 2013

One size does not fit all

Teacher Demands Less Learning With More Tests And No Differentiation
Failures Held Back In Summer Schools

It would make for a nonsense headline, wouldn’t it? There’s pretty widespread agreement in the education world that pupils in English schools face greater volumes of testing than those in some other nations, including some high-performing ones, so who on earth would suggest that to get better education we need to teach less, but test more? And then punish those who don’t meet the new demands?

At first reading – if you were minded to see such headlines in your mind – you could perceive exactly that viewpoint in Joe Kirby’s excellent blog on mastery learning and assessment this morning. After all, he explicitly says that “All pupils are expected to master all the concepts”, and that “if you have not understood […] you would stay in for summer school”. He specifically argues for ‘teaching to the test’ when he says “create a rigorous assessment, then teach to meet its standards”, and – worse – he says there should be frequent testing, and that we will be reduced to using mundane multiple-choice tests. It’s true that at first reading you could easily convince yourself that Gradgrindian was an understatement and that producing automaton students was the way of the future.

Except, of course, if you really read what it says.

I’ve been toying with a mastery model of teaching this term in KS2. It’s at a very embryonic stage and is very limited in scope (i.e. one year group, in one school, led by one person), but the realisations I have reached are already significantly shaping how I plan to move forward. And much of what Joe writes about, I recognise quite clearly.

Less Learning?

Firstly, on the matter of ‘less learning’. This seems to be one of the least controversial proposals, and of course it isn’t really about less learning at all, but less teaching and less pointless repetition. Primary teachers will often comment (quite rightly in some cases) about how much time is wasted in Y7 in secondary schools repeating content that has already been taught. However, it’s important that we remember that what has been taught is not what has been learned. We are all familiar with the blank looks of children who claim never to have met a skill or concept before. Sometimes we wonder what the previous teacher or school has been doing; other times we know the children are wrong because it was we who taught it the first time! The reality is that children don’t learn everything we teach, and the more we try to teach, the smaller the percentage they are going to be able to learn. The much-loved analogy of lighting a fire against filling an empty pail is apt here: piling on the wood won’t make for a good fire if the kindling isn’t alight, any more than hurling vat-loads of water at a pail will necessarily fill it. We need to match the amount of content we teach to the learning capability of human minds.

What this means for mastery learning is the teacher(s) taking control of the content and creating the most effective pathway through it to ensure that learning can be guided, built upon, and – most importantly – retained. There are plenty of teachers who recognise that cramming for the tests serves only the purpose of scores on the tests. To create good learning we need to teach less, but better.

More Tests?

That leads us nicely onto the issue of tests. On Twitter today several teachers – with absolutely the right intentions – have queried yet another increase in tests. In an already well-tested system, surely creating more tests would be a bad idea?

Firstly, let me correct my use of “well-tested”. We may well have a highly-tested system in some respects, but that doesn’t mean we do it well. At primary level, the QCA optional tests are great if you want to allocate scores and levels. They’re pretty useless as a formative teaching tool, though. What Joe and others are suggesting is not just more levelled assessments to create scores, but more useful and focussed assessment.

In my own class this year that has meant a mixed economy of testing, all of which I think is entirely appropriate for primary schools. Which brings me to the title of the blog: one size does not fit all. While the overall theory of mastery can be relevant from Early Years to University, the application of it will vary massively. In the Early Years classroom teachers will doubtless encourage children to write their name frequently in many contexts. EY specialists know that they need to provide these opportunities for practice and mastery. When teaching other letter forms they know that once isn’t enough, and that they need to re-visit them all. We can call that repeated practice – or we can call it testing. It doesn’t look like a secondary-school exam, but it serves a purpose: to assess what a child can do and to tell us when they are ready to move on.

I have used a variety of testing forms this year. At half-term I did give my students a paper test, with questions and boxes for answers. But not just a standardised test paper to get a score. I selected and created questions which matched the content I had taught. I wanted to see which students could still use column addition outside of the context of a two-week block on addition. I also wanted to know which of them really understood why we have phases of the moon. I don’t think a paper test actually does any damage, but I do think that to be of any use the test needs to match the curriculum and the students, not just the national benchmarks.

Alongside that single assessment period, I’ve used a host of other techniques. Occasionally I have set a starter in a maths lesson which has essentially been 5 questions based on content that I’ve taught in the past weeks. That’s a technique that is not uncommon in primary schools. We don’t usually call it a test (I called it a “review” with my children), but that’s essentially what it is.

I’ve also set some additional multiple-choice tests via our learning platform. 10 questions per week, entirely optional for students at the moment, but I’m minded to make that one of the weekly homework tasks. I have not made the tests particularly demanding; I’ve called them “quizzes” and I’ve praised those who have taken them (they do so because they enjoy them!). They are, in effect, tests which have given the students an opportunity to revisit their learning, to refresh a particular concept in their minds, and to use the structure of multiple choice to review learning in a low-stakes environment.

What Joe suggests as frequent testing may sound abhorrent if your first thought is to imagine a termly QCA optional paper. But if you tried to use a QCA optional to assess mastery learning, then you’ve entirely missed the point. Frequent testing is not about frequent scoring, or ranking, or catching out; it’s about frequent formative assessment. And in some cases, not even that. Just the process of having to recall information for the test will be beneficial to students’ learning.

Scrap differentiation / Hold back students

Another aspect of mastery which can easily be misunderstood is the expectation that all students will reach a threshold. We have become accustomed over the years to a largely differentiated curriculum (with setting encouraged by central government), and at first glance the mastery model could appear to undo this entirely.

I have written previously about the scourge that is differentiation in our schools. I genuinely believe that some uses of differentiation are positively harmful to students. Low expectations of under-achieving students can serve to exacerbate their struggles. Of course, there are also excellent examples of a well-differentiated curriculum helping to close the gap between lower and higher achievers. That is exactly what mastery learning calls for. It is intended to highlight those who are at risk of not meeting the required thresholds, and then – most importantly – to put in place the support necessary to ensure that they do. Of course, how that support is provided will be a challenge for schools to master. But frequent low-stakes testing has got to be a better method of identifying those who need additional support sooner than waiting for an end-of-year – or worse, end-of-key-stage – test result to highlight names to be added to an SEN register.

The call for summer schools may sound cruel, but actually could form part of a serious and robust system for supporting students. If we got differentiation and support right from Early Years on, then fewer children would find themselves in need of giant leaps of support later on.

There are many who praise the Finnish model of education, and surely we can agree that one of its strengths is its excellent provision for students who begin to fall behind. Something like a quarter of students receive intervention at some point. And it isn’t true that there are no tests in Finland; only that there are hardly any standardised tests. Teachers still write tests, based on their curricula, and still assess students. How else could such an intensive support system work?

Testing doesn’t need to be nasty, and it doesn’t even need to be “tests” as we might first imagine them. All that Joe – and others who support the mastery model – are calling for is regular, well-planned, purposeful assessment. Of course, in a secondary school with varied teaching groups and limited timetabled hours for each subject, straightforward pencil-and-paper assessments can be a really useful model. And frankly, secondary school children won’t be damaged by them.

Naturally, for those of us in primary, we need to adapt these ideas to fit our context. We have a massive advantage in only having to think about 30 children a year. We have the advantage of upwards of 20 hours per week in which to get to know them. We may not need testing in the same manner as the secondary RE teacher with 600 students. But the mere use of the word ‘test’ doesn’t make the whole idea inherently evil.

One size does not fit all… but then, nobody said it should.

My real worry about the loss of levels.

There are those who are gleeful to be rid of levels. Mostly secondary school teachers who know their subjects intimately, and who increasingly find their focus is on GCSE grades. It does, after all, seem daft to have two different systems of assessment in one school. I could probably have supported the abandonment of levels in secondary schools, if only because I think most teachers don’t need them particularly.

But – as the DfE is slowly beginning to realise, hopefully – primary schools have teachers in them too. Removing levels from primary schools leaves us with nothing. We don’t have GCSE grades to work towards (not that we’d want to) and we don’t want to spend 7 years tracking towards an incomprehensible scaled score. Levels may not have been perfect, but they were something.

That said, what really worries me is something else: hurried decision-making.

As schools scratch around looking for the next big thing in assessment, I have no doubt that private publishers are already beavering away at producing something. Now, I reckon I could come up with a system I could work with in a couple of days – one at least as useful, and a darn sight more manageable, than the current levels system. But it wouldn’t sell.

The real worry we face is that each company trying to sell its wares will be competing to be the scheme that looks the best. They will be filled with promises about progress, tracking, accountability and excellence. And the problem is that what looks really good on paper (and some schools will probably buy into schemes before they’re even completed) is often the same thing that becomes entirely unmanageable and unhelpful in practice. By which time it’s too late.

Each company will gladly produce folder after folder. Doubtless many of them will come with discs of software for analysis, and printable graphs, and iPad apps and the like. The impression will be that teachers will just casually tap a few buttons and magically all will be done. Except that, in reality, those simple ‘tap-a-button’ schemes soon become tap-a-thousand-buttons screens.

Think APP and then some.

No: scrapping levels is a disaster, and having no scheme is horrendous, but the real worry is what might come after – not least because too many of the people buying into the schemes will not be the same people who are expected to use them!

Primary Bloggers rise up

Blog Responses to “The Elephants in the Primary Blogosphere”

(Updated 12 November)

Following my post yesterday, a few primary bloggers have begun to respond on their own blogs, and I’m hoping several more will. Meanwhile, I’ll do my best to collate some of the responses here rather than just through the comments, in the hope that more people see them.

I am not an elephant….I’m a primary school headteacher! (@theprimaryhead)
http://theprimaryhead.com/2013/11/10/i-am-not-an-elephant-im-a-primary-school-headteacher/
The Primary Head offers some responses to all of the questions I raised – in brief!

It’s a *mute* point (@philallman1)
http://madphil.wordpress.com/2013/11/10/its-a-mute-point/
Phil’s usual straight talking gets to the nub of why it’s important that teachers take note and take part.

In search of the Primary School blogger… (@educationbear)
http://educationbear.wordpress.com/2013/11/10/in-search-go-the-primary-school-blogger/
Nick Hague considers some of the reasons why blogs might be absent… and then begins to fill the gap!

The Primary Elephant Marches On… (@mr_chadwick)
http://mrchadwickblogs.blogspot.co.uk/2013/11/the-primary-elephant-marches-on.html
Following The Primary Head’s lead, Mr Chadwick tackles each issue in brief

Primary Bloggers, where are you? (@cherrylkd)
http://cherrylkd.wordpress.com/2013/11/11/primary-bloggers-where-are-you/
CherryLKD considers reasons for the dearth of primary bloggers… raises her own call to action

Primary Juggling (@primaryjuggler)
http://primaryjuggler.wordpress.com/2013/11/10/primary-juggling/
New blogger Primary Juggler explains some of the issues… and begins her own blogging journey

Thoughts on a hot school lunch (@ajjolley)
http://notveryjolley.wordpress.com/2013/11/11/thoughts-on-a-hot-school-lunch/
A first blog on one of the big issues raised: the manageability (or otherwise) of launching free lunches for all infant school pupils.

Does age really matter? (@AlisonMPeacock)
http://alisonpeacock.net/2013/11/does-age-really-matter/
Alison Peacock points out – importantly – that there is much common ground between the two sectors, and we ought to pay it more heed.

Assessment without Levels (@misshorsfall)
http://misshorsfall.wordpress.com/2013/11/12/assessment-without-levels/
Tackling one of the big issues I raised, misshorsfall asks questions about the future of primary assessment. (As most of us feel: lots of questions, not many answers!)

Education Bear, Nick Hague, has also set up a twitter feed for primary blogs which may be worth watching: https://twitter.com/BloggerPrimary

The Elephants in the Primary Blogosphere

This morning, Sam Freedman (@samfr1) – director of research at TeachFirst and former advisor to Michael Gove – posted an excellent list of 75 people to follow on Twitter. It is well worth a look, regardless of your sector, but as always with such lists, people are quick to point out those folk who are missing. In my case, it was to ask Sam his view on a whole group of people who were missing: Primary Teachers. From the whole list I could only see one or two people who had any experience of primary schools at all, and not a single practising primary teacher or head.

After a bit of discussion, we reached some sort of a conclusion:

It seems that while there are a reasonable number of primary teachers and headteachers on Twitter, not a huge number of them are tweeting or blogging about the big policy issues in education.

There is, of course, always going to be room for blogging on the more practical day-to-day matters of teaching and learning. For example, I have often enjoyed posts on the use of stampers for assessment, activities for teaching calculations, and classroom displays, but as Sam says, these are not the matters of big policy and substantial change in education.

What this whole discussion has raised, though, is the fact that very few primary bloggers are writing about these matters, and relatively few twitter-users are tweeting about them. My suspicion is that part of the cause is that it’s hard to keep up with the pace of change. Many of the secondary bloggers are leadership types who have a greatly reduced timetable and a role that involves keeping on top of such developments; most primary users seem to be full-time classroom teachers.

With this in mind, I’ve decided to set out some of the big issues that I think we ought to be discussing that just aren’t coming up on blogs and tweets that pass my eye. I’m hoping that it might inspire a few other teachers in the primary sector to share their views on these matters – and who knows, maybe even end up on Sam’s hotlist?

If anyone fancies writing a single post, but doesn’t have a blog of their own, I’m always happy to host a guest post!

Issues that could become blog topics

  • Will ‘scaled scores’ provide useful information at end-of-key-stage tests?
  • How will we assess English and Maths once levels are scrapped?
  • Is primary schooling becoming all core and no breadth?
  • Does the new National Curriculum necessarily mean more rote teaching & learning?
  • Will the new grammar requirements in the National Curriculum raise standards of reading/writing?
  • Do primary teachers have the subject knowledge needed for the new National Curriculum?
  • What does it mean to be “secondary ready”, as the DfE suggests we should be aiming for?
  • Is the current level 4b a viable expectation for 85% of students?
  • How is the newly-enhanced Pupil Premium going to have an impact in primary?
  • How can we use the new sports/PE funding effectively?
  • How can research findings about feedback/knowledge/learning be applied in primary classrooms?
  • What impact are small cohorts or small sub-groups having on Ofsted inspection outcomes?
  • Are stand-alone primary academies viable?
  • What is the professional view on baseline assessments for children on entry to YR?
  • What are the issues related to the proposed free school meals programme for infants?
  • What constitutes effective use of teaching assistants?

There’s certainly no shortage – and doubtless others will have their own ideas on what big issues need addressing. It’s amazing to see how quickly a starting blog or tweet can become a wide-ranging discussion that brings about real insights – and even change. If the Department for Education can see the benefit of keeping an eye on tweets and blogs, then as primary teachers*, let us make sure that we use these channels to make clear our views, both individually and as a sector.

*I’ve spent the last few years refusing to be called a primary teacher, since I teach in a middle school, but I do teach in KS2 now, so I’m joining the gang.

Let’s Stop the Blame Game

Transition is a bit of a bugbear of mine. It is frequently problematic, recognised as such, and yet rarely tackled. Every few weeks on Twitter someone somewhere will post some claim or reference to the inaccuracy of Key Stage 2 test results. I – and often a few others – will step in and point out that there are possible reasons for discrepancies other than the implied cheating or similar, and then someone else will claim that National Curriculum levels are the problem, and that they just aren’t the same in primary and secondary schools.

This is nonsense. Levels are broad, they have flaws, and they are only roughly estimated by test results, but they are unquestionably the same in primary and secondary schools.

That said, it is clear that there is an issue with discrepancies between levels that are awarded at the end of Key Stage 2, and those which are perceived by secondary teachers in the early weeks of Year 7. It’s quite possible, of course, that in some cases that is down to error – or to something more underhand – in the last weeks of primary education, but that’s quite a conclusion to leap to. So what other explanations are there?

I spent the last 7 years teaching Year 7 in a middle school, where the primary teachers shared the staffroom, corridors and playground duties with the KS3 staff. There is little room for cheating when it would be picked up in a neighbouring classroom. Yet still, each autumn, I would see work in books which did not always reflect the National Curriculum level on the spreadsheets – and the reasons were varied. For example:

  1. George is a case in point. He came to me with a Level 5 in Writing, and yet I saw no evidence of it in his written work. Pieces were often brief, poorly-constructed and lacked detail. The obvious conclusion would have been to presume that he had never been working at Level 5. Except I knew the person who had awarded it, and had every faith in their professional judgement.
    The reality was that George was coasting. He’d been driven to succeed in the KS2 assessments (he’s happy to pull the rabbit out of the hat on cue) and had taken his foot off the pedal. Of course, a conversation with his Year 6 teacher soon put me straight, and in turn we were able to put him back on track. I was able to show him his KS2 Writing test paper and explain that I expected similar quality from him from then on.
  2. In my first year at a new school, I had a girl in my top maths set who seemed to struggle. The sets had been organised by raw test score and she had scored well in KS2, yet was struggling to keep up with others in her set. A quick check of the teacher assessment data showed that her secure Level 5 on the test did not match the teacher assessment of high 4. She’d got lucky in May, and was paying for it now by being in the wrong set. She moved, made good progress, and was working comfortably at the top of level 5 by the time she moved to Y8. Her test score had been “wrong”; the teacher assessment was right.
  3. Harry came to me marked as an L3 reader, yet he seemed much stronger. By the second week of term, I was already querying his low level. In fact, it was a simple case of an error made when moving rows about in Excel. The error was entirely mine, but the inaccuracy could easily have been blamed elsewhere, and then ignored.

But these are anomalous examples, one might argue. The reality is that kids are boosted to reach the levels of the tests and then fall back because they do no work after them until September. And there may be some truth in that. The problem, though, comes when we presume that forgotten knowledge is the same as knowledge never met. Taking benchmarking assessments at the start of Year 7 as ‘gospel’ runs a great risk of setting too low a starting point for many children.

Aside from the massive social and cultural impacts of starting secondary school (and these are surely undeniable), the simple use of a test in the early weeks of a new term, let alone a new school, is likely to be an unreliable marker of future success. Sure, it will help to bolster the apparent progress figures, but that isn’t the same as achieving progress.

If I were asked to sit an A-level French exam this week, without warning, I would probably struggle. I may do no better than the average newcomer to A-level French. But that doesn’t mean that my two years of learning have been eradicated. Some of it will be lost and will need re-teaching, but a great deal more will come back with prompting. Just because the knowledge isn’t there at the starting gun doesn’t mean it all needs to be taught from scratch.

September assessments are simply not that accurate, it seems. I worked closely with the data manager at one of our receiving high schools and looked at their teacher assessments for September and February of the first academic year they had our students. In the September assessments more than half were assessed at the same level as our final assessment, or above – but nearly half were assessed at a sub-level or more below. The variation was on average in excess of a whole level (6.2 APS points). It made it seem as though our data could have been plucked at random. One child – Sally – had been awarded 6C in Reading by us, and was levelled at 3B by the high school; similarly, Tom, whom I’d awarded 4C, had been assessed at 5A. The discrepancies were bountiful.

But the February data told a very different story. After 6 months in the school, the average discrepancy between our June assessments and their February assessments was just 3 APS points. Only two children had been assessed as working at a lower level than their final level with us (one of whom, oddly, had been given identical levels in June and September!). The remainder had made an average of just over 1 sub-level of progress – much as you might expect part way through a successful year. The two extreme cases I mentioned above had been clear outliers, and were soon back to within a sub-level of the June assessments we had made.
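For anyone who doesn’t juggle APS arithmetic every day, a quick worked conversion puts those two figures in context. This is only a sketch, and it assumes the standard old National Curriculum point scale, where each sub-level is worth 2 APS points and each whole level 6 (e.g. 4C = 25, 4B = 27, 4A = 29):

\[
\text{September: } \frac{6.2 \text{ points}}{2 \text{ points per sub-level}} \approx 3.1 \text{ sub-levels (just over a whole level)},
\qquad
\text{February: } \frac{3 \text{ points}}{2} = 1.5 \text{ sub-levels.}
\]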

The September data had – to me, who knew the children well – seemed erratic and confusing. To the secondary teachers, who had known the children for no more than an hour or two and were forced to base their assessments on single examples of work, it was virtually meaningless. The February results looked recognisable to me, and more accurate to them.

I could, then, try to argue that it is secondary teachers who are wilfully under-assessing in September to make their results look good. The reality, I rather suspect, is very different. Teachers were forced to assess based on limited knowledge, and so, inevitably, accuracy suffered. Yet it is quite likely that some of those same teachers will have made comments about primary school data reliability on seeing their September data.

The implications could, of course, be significant. Take Sally, the child who had been awarded 6C by us, but mid level 3 by the high school. Had they taken that baseline as accurate, regardless of our information, they might have expected her to struggle at GCSE. Their February assessment of 6B showed that this clearly wasn’t to be the case, but the risks were there all the same. Setting decisions based on rushed assessments in September could have been catastrophic for her. But equally, there might have been many more students with smaller errors, but mis-assessed all the same, whose progress was then held back by that initial low expectation.

It’s interesting to note that whenever the complaints arise on Twitter and elsewhere, it is always the children who seem to have “over-performed” at KS2 who are mentioned; how many like Tom get simply forgotten because the data seems more favourable? The blame game might be less dangerous if we always presumed that the higher result were more accurate, but I suspect that that is rarely the case.

Of course, what makes the system different within my own school is that we talk to each other across the key stage boundary. If I see an anomalous result, or a child who appears not to be working at the expected level, then I would think it only normal to speak to the previous class teacher. If only the same happened more frequently between schools.

The challenges of transition are manifold. But let’s not just presume that the difficulties are all caused by those on the other side of the divide; reality is not quite like that.