Category Archives: Education

Will we see a leap in Writing attainment?

I’ve long been clear that I think that the current system of assessing writing at KS2 (and at KS1 for that matter) is so flawed as to be completely useless. The guidance on independence is so vague and open to interpretation and abuse, the framework so strictly applied (at least in theory), and moderation so ineffective at identifying any poor practice, that frankly you could make up your results by playing lottery numbers and nobody would be any the wiser.

One clear sign of its flaws last year was in the fact that having for years been the lowest-scoring area of attainment, and despite the new very stringent criteria which almost all teachers seem to dislike, somehow we ended up with more children achieving the expected standard in Writing than in any other subject area.

My fear now is that we will see that odd situation continue, as teachers get wise to the flaws in the framework and exploit them. I’m not arguing that teachers are cheating (although I’m sure some are), but rather that the system is so hopelessly constructed that the best a teacher can do for their pupils is to teach to the framework and ensure that every opportunity is provided for children to show the few skills required to reach the standard. There is no merit now in focusing on high quality writing; only in meeting the criteria. Results will rise, with no corresponding increase in the quality of writing needed.

For that reason, I suspect that we will likely see a substantial increase in the number of schools having more pupils reaching the expected standard. At Greater Depth level I suspect the picture will be more varied as different LAs give contradictory messages about how easy it should be to achieve, and different moderators appear to apply different expectations.

In an effort to get a sense of the direction of travel, I asked teachers – via social media – to share their writing data for last year, and their intended judgements for this year. Now, perhaps unsurprisingly, more teachers from schools with lower attainment last year have shared their data, so along with all the usual caveats of what a small sample this is, it’s worth noting that it’s certainly not representative. But it might be indicative.

Over 250 responses were given, of which just over 10 had to be ignored (because it seems that some teachers can’t grasp percentages, or can’t read questions!). Of the 240 responses used, the average figure for 2016 was 71% achieving EXS and 11% achieving GDS. Both of these figures are lower than last year’s national figures (74% / 15%) – which themselves seemed quite high, considering that just 5 years before, a similar percentage had managed to reach the old (apparently easier) Level 4 standard. Consequently, we might reasonably expect a greater increase in these schools’ results in 2017 – as the lower-attaining schools strive to get closer to last year’s averages.

Nevertheless, it does appear that the rise could be quite substantial. Across the group as a whole, the percentage of pupils achieving the expected standard rose by 4 percentage points (to just above last year’s national average), with the percentage achieving greater depth rising by a very similar amount (again, to just above last year’s national average).

We might expect this tendency towards the mean, and certainly that seems evident. Among those schools who fell short of the 74% last year, the median increase in percentage achieving expected was 8 percentage points; by contrast, for those who exceeded the 74% figure last year, the median change was a fall of 1 percentage point.
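For anyone curious about the mechanics, the comparison above amounts to a few lines of arithmetic: split schools by last year’s national figure, then take the median change within each group. A minimal sketch in Python – the figures here are invented for illustration, not the actual survey responses:

```python
from statistics import median

# Invented school results: (2016 EXS %, intended 2017 EXS %)
schools = [(60, 70), (68, 75), (72, 79), (74, 76), (80, 78), (85, 84), (90, 90)]

NATIONAL_2016 = 74  # last year's national EXS figure

# Change in percentage points, split by whether the school fell short of 74%
below = [new - old for old, new in schools if old < NATIONAL_2016]
above = [new - old for old, new in schools if old >= NATIONAL_2016]

print(median(below))  # median rise for schools below the national figure → 7
print(median(above))  # median change for schools at or above it → -0.5
```

With these made-up numbers, the lower-attaining group rises sharply while the higher-attaining group barely moves – the same shape as the survey data.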

Now again, let me emphasise the caveats. This isn’t a representative sample at all – just a self-selecting group. And maybe if you’re in a school which did poorly last year and has pulled out all the stops this year, you’d be more likely to have responded, so it’s perfectly possible that this overestimates the national increase.

But equally, it’s possible that we’ll see an increase in teacher assessment scores which outstrips the increases in tested subjects – even though it’s already starting from a higher (some might say inflated) base.

I’m making a stab in the dark and predicting that we might see the proportion of children – nationally – reaching the Expected Standard in Writing reach 79% this year. Which is surely bonkers?

Stop moaning about tests!

Today marked the end of 4 short days of testing. For Year 6 pupils everywhere, they’ll have spent less than 5 hours on tests – probably not for the first time this year – and later in the year we’ll find out how they did.

Now, I’m the first to complain when assessment isn’t working, and there are lots of problems with KS2 assessment. Statutory Teacher Assessment is a joke; the stakes for schools – and especially headteachers – are ridiculously high; the grammar test is unnecessary for accountability and unnecessarily prescriptive. I certainly couldn’t be seen as an apologist for the DfE. And yet…

For some reason it appears that many primary teachers (particularly in Facebook groups, it seems) are cross that some of the tests contained hard questions. I’ve genuinely seen someone complain that their low-ability children can’t reach the expected standard. Surely that’s the very reason they’re defining them as low ability?

Plenty of people seem annoyed that some of the questions on the maths test were very challenging. Except, of course, we know that some children will score 100% each year, so the level of challenge seems fair. There were also plenty of easier, more accessible questions that allowed those less confident mathematicians to show what they can do. It’s worth remembering that to reach the expected standard last year, just 55% of marks were needed.

But the thing that annoys me most is the number of people seemingly complaining that the contexts for problem-solving questions make the questions too difficult. Of course they do, that’s the point: real maths doesn’t come in lists of questions on a page that follow a straightforward pattern. What makes it all the more irritating is that many of those bemoaning the contexts of problems are exactly the same sort who moan about a tables test, complaining that knowing facts isn’t worthwhile unless you can apply them.

Well guess what: kids need both. Arithmetic knowledge and skills need to be secure to allow children to focus their energies on tackling those more complex mathematical problems. You can’t campaign against the former, and then complain about the latter.

The tests need to – as much as possible – allow children across the ability range to demonstrate their skill, while differentiating between those who are more and less confident. That’s where last year’s reading test fell down: too few accessible elements and too many which almost no children could access. This year’s tests were fair and did a broadly good job of catering for that spread. For those complaining about the level of literacy required, it’s worth remembering that questions can be read to children, and indeed many will have had a 1:1 reader throughout.

No test will be perfect, and there are plenty of reasons to be aggrieved about the chaos that is primary assessment at the moment, but blaming tests because not all children can answer all questions is a nonsense, and we’d do well to pick our battles more carefully!

Platitudes don’t reduce workload

There’s no denying that workload remains a significant issue in our profession. However, the solutions are not to be found in platitudes and pleasantries.

Two popular solutions have cropped up this weekend and both need dropping.

The first is slightly tangential, and focuses in theory on wellbeing. The problem with that is that the biggest threat to teachers’ wellbeing is workload. Reduce the workload and you’ll reduce the issue.

The TES ran a column this week that included ideas such as laughter yoga and ‘star of the week’. Now, if ‘star of the week’ is the sort of thing that floats your boat, then knock yourself out. Personally, I’d find it cringy or patronising. Similarly, with yoga, if that’s for you, then great. As a way of improving my wellbeing, it reminds me of the course I attended as an NQT where we were told that massage would be a good relaxation technique, before being paired up with complete strangers to practise massage techniques. I assure you, I did not feel relaxed!

If teachers want to use yoga to find inner peace and relaxation, then wouldn’t the best thing we could do as schools be to ensure that teachers have enough time left in their week to attend yoga classes in their own time?

The second solution which comes up every now and then is the barmy notion that Ofsted should judge schools on how they reduce workload. Can you imagine the nonsense of it?

As I’ve said before, in recent years Ofsted has done a good job of clarifying its expectations (both for schools and inspectors), so it is now rarely the cause of the problem.

However, Ofsted cannot be the solution either. Excessive workload is often a matter of weak leadership. Confident headteachers will make decisions about policies on things like marking, data and planning which focus on benefit for pupils in relation to time and effort costs, which align with the recommendations of the DfE’s workload reports. That’s great. But where weak leaders fail to follow such guidance, they’re also likely to get it wrong when it comes to Ofsted judging their efforts.

A poor headteacher who thinks that draconian marking or planning policies are useful, is just the sort of headteacher who might think that locking up the school at 5pm every night is a helpful workload-reduction technique. Just because you can’t be in the building doesn’t make that workload disappear, but it might appear a good strategy at first glance.

The problem is, with all the best intentions, as soon as you make reducing workload a measurable goal, you actually create a new task: headteachers being seen to act on workload. The school that never had a bonkers policy gets no credit, while the crazy head who insists on scrutinising every lesson plan gets to claim that he’s made things easier by allowing you to upload them rather than print them in triplicate.

As my TES column last autumn was headed: Want to reduce workload? Reduce work.

KS2 Writing: Moderated & Unmoderated Results

After the chaos of last year’s writing assessment arrangements, there have been many questions hanging over the results, one of which has been the difference between the results of schools which had their judgements moderated, and those which did not.

When the question was first raised, I was doubtful that it would show much difference. Indeed, back in July when questioned about it, I said as much:

At the time, I was of the view that LAs each trained teachers in their own authorities about how to apply the interim frameworks, and so most teachers within an LA would be working to the same expectations. As a result, while variations between LAs were to be expected (and clearly emerged), the variation within each authority should be less.

At a national level, it seems that the difference is relatively small. Having submitted Freedom of Information Requests to 151 Local Authorities in England, I now have responses from all but one of them. Among those results, the differences are around 3-4 percentage points:

[Chart: Expected Standard results in moderated and unmoderated schools]

Now, these results are not negligible, but it is worth bearing in mind that Local Authorities deliberately select schools for moderation based on their knowledge of them, so it may be reasonable to presume that a larger number of lower-attaining schools might form part of the moderated group.

The detail that has surprised me is the variation between authorities in the consistency of their results. Some Local Authority areas have substantial differences between the moderated and unmoderated schools. As Helen Ward has reported in her TES article this week, the large majority of authorities have results which were lower in moderated schools. Indeed, in 11 authorities, the difference is 10 or more percentage points for pupils working at the Expected Standard. By contrast, in a small number, it seems that moderated schools have ended up with higher results than their unmoderated neighbours.
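The authority-level comparison boils down to a simple per-LA subtraction: take the gap between unmoderated and moderated results, and see where it is large. A short sketch of that calculation – the authority names and figures below are invented, not the FOI data:

```python
# Invented results: LA name -> (EXS % in moderated schools, EXS % in unmoderated schools)
la_results = {
    "Anyshire": (62, 75),
    "Othertown": (70, 73),
    "Somewhere": (78, 74),
}

# Positive gap = moderated schools came out lower than their unmoderated neighbours
gaps = {la: unmod - mod for la, (mod, unmod) in la_results.items()}

# Authorities where the difference is 10 or more percentage points
big_gaps = [la for la, gap in gaps.items() if gap >= 10]
print(big_gaps)  # → ['Anyshire']
```

A negative gap (like the invented “Somewhere” here) would correspond to the small number of authorities where moderated schools ended up with higher results.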

What can we learn from this? Probably not a great deal that we didn’t already know. It’s hard to blame the Local Authorities: they can’t be responsible for the judgements made in schools they haven’t visited, and nor is it their fault that we were all left with such an unclear and unhelpful assessment system. All this data highlights is the chaos we all suffered – and may well suffer again in 2017.

To see how your Local Authority results compare, view the full table* of data here. It shows the proportions of pupils across the LA who were judged as working at the Expected and Greater Depth Standards in both moderated and unmoderated schools.


*Liverpool local authority claimed a right not to release their data on the grounds of commercial sensitivity, which I am appealing. I fully expect this to be released in due course and for it to be added here.

Some thoughts on the Primary Assessment Consultation

Pub Quiz question for the future: In what year did the primary assessment framework last not change? (Answers on a postcard, folks)

I may not always be the most complimentary about the DfE, but today I feel like there is a lot to praise in the new consultation on primary assessment. They have clearly listened to the profession, including the work undertaken by the NAHT assessment review, and have made some sensible suggestions for the future of primary assessment. As ever, I urge people to read the consultation, and respond over the next 12 weeks. Here, I’ve just shared a few thoughts on some key bits:

Assessment in the Early Years

For years, I felt like Early Years practice was held up as a shining example of assessment, as we were all wowed by their post-it notes and online apps, and all the photographs they took. I was never overly keen on all the evidence-collating, and I’m pleased that we’ve begun to eschew it in the Key Stages. It’s pleasing, therefore, to see that while the department is happy to keep the (actually quite popular) Early Years Profile, it wants advice on how the burden of assessment can be reduced in the Early Years.

I’m also pleased to see the revival of the idea of a Reception baseline. Much damage was done by the chaotic trial of different systems in 2015, but the principle remains a sensible one to my mind. I would much rather see schools judged on progress across the whole of the primary phase. It’s also quite right that baseline data shouldn’t be routinely published at school or individual level. The consultation seems open to good advice on how best to manage its introduction (an approach which might have led to greater success with the first attempt!).

Key Stage 1

I wasn’t certain that we’d ever persuade the DfE to let go of a statutory assessment, but it seems that they’re open to the idea. I do think that the KS1 tests – and the teacher assessment that goes along with them – are a barrier to good progress through the primary years, and I’d welcome their abandonment. The availability of non-statutory tests seems a sensible approach, and I’m happy to see that the department will consider sampling as a way to gather useful information at a national level. Perhaps we might see that rolled out more widely in the long term.

I’d have rather seen them take the completely radical option of scrapping the statutory tests straight away, but I can see the rationale for keeping them until the baseline is in place. Unfortunately that means we’re stuck with the unreliable Teacher Assessment approach for the time being. (More of that to follow)

Key Stage 2

Of course it makes sense to scrap statutory Teacher Assessment of Reading and Maths. Nobody pays it any heed; it serves no purpose but adds to workload. I’d have preferred to see Science go the same way, but no such luck. At the very least, I hope there is a radical overhaul of the detail in the Science statements which are currently unmanageable (and hence clearly lead to junk data in the extreme!)

There is also some recognition in there that the current system of Teacher Assessment of Writing is failing. The shorter term solution seems to be a re-writing of the interim frameworks to make them suit a best-fit model, which is, I suppose, an improvement. Longer term, the department is keen to investigate alternative (better) models; I imagine they’ll be looking closely at the trial of Comparative Judgement at www.sharingstandards.com this year. I’m less persuaded by the trial of peer-moderation, as I can’t quite see how you could ensure that a fair selection of examples are moderated. My experience of most inter-school moderation is that few discussions are had about real borderline cases, as few teachers want to take such risks when working with unfamiliar colleagues. Perhaps this trial will persuade me otherwise?

On the matter of the multiplication check, I don’t share the opposition to it that many others do. I’ve no objection to a sensible, low-stakes, no-accountability check being made available to support schools. I’d prefer to see it at the end of Year 4 – in line with the National Curriculum expectations, and I’d want to see more details of the trials, but overall, I can live with it.

Disappointments

Although it hardly gets mentioned, the opening statement that “it is right that the government sets a clear expected standard that pupils should attain by the end of primary school” suggests that the department is not willing to see the end of clunky descriptors like “Expected Standard”. That’s a shame, as the new scaled score system does that perfectly well without labelling in the same way. Hopefully future alternatives to the current Teacher Assessment frameworks might lessen the impact of such terminology.

Credit for whoever managed to get in the important fact that infant/junior and middle schools still exist. (Points deducted for failing to acknowledge first schools in the mix). However, the suggestions proposed are misguided. The consultation claims that,

the most logical measures for infant schools would be reception to key stage 1 and, for middle and junior schools, would be to continue with key stage 1 to key stage 2

While that may be true for infant, and potentially even junior schools, for middle schools this is a nonsense. Some middle schools only start from Year 6. How can it be sensible to judge their work on just 2 terms of a four-year key stage? The logical measure would require bespoke assessments on entry and exit. That would be expensive, so alternatives will be necessary. Personally I favour using just the Reception baseline and KS2 outcomes, along with sensible internal data for infant/first and junior/middle schools. The KS1 results have never been a helpful or reliable indicator.

Partly connected to that, I would also have liked to have seen a clearer commitment to the provision of a national assessment bank, as proposed by the Commission for Assessment without Levels, and supported by the NAHT review. It does get a brief mention in a footnote, so maybe there’s hope for it yet.

In Conclusion

Overall, I’m pleased with the broad shape of the consultation document. It does feel like a shift has happened within the department, and that there is a clear willingness to listen to the profession and correct earlier mistakes. There is as much positive news in the consultation as I might have hoped for.

If there were an interim assessment framework for judging DfE consultations, then this would have ticked nearly all of the boxes. Unfortunately, of course, nearly all is not enough, as any primary teacher knows, and so it must fall to WTS. Seems cruel, but he who lives by the sword…

Some clarity on KS2 Writing moderation … but not a lot

Not for the first time, the Department has decided to issue some clarification about the writing assessment framework at Key Stage 2 (and its moderation!). For some inexplicable reason, rather than sharing this clarity in writing, it has been produced as a slowly-worded video – as if it were us that were stupid!

Here’s my take on what it says:

Some Clarity – especially on punctuation

  • For Greater Depth, the long-winded bullet point about shifts in formality has to be seen in several pieces of work, with more than one shift within each of those pieces.
  • For Expected Standard, it is acceptable to have evidence of colons and semi-colons for introducing, and within, lists (i.e. not between clauses)
  • For Expected Standard, any of either brackets, dashes or commas are acceptable to show parenthesis. There is no need to show all three.
  • Bullet points are punctuation, but the DfE is pretending they’re not, so there’s no need to have evidence of them as part of the “full range” of punctuation needed for Greater Depth.
  • Three full stops to mark ellipsis are also punctuation, but again, the DfE has managed to redefine ellipsis in such a way that they’re not… so again, not needed for Greater Depth.

A bit of guidance on spelling

This was quite clear: if a teacher indicates that a spelling needs correcting by writing a comment in the margin on the relevant line, then the correction of that spelling cannot be counted as independent. If the comment to correct spellings comes at the end of a paragraph or whole piece, without specifying what to correct, then it can still count as independent.

No clarity whatsoever on ‘independence’

Believe me, I’ve re-watched this several times – and not all of them at double-speed – and I’m still bemused that they think this clarifies things. The whole debacle is still reliant on phrases like “over-scaffolding” and “over-detailed”. Of course, if things are over-detailed then there is too much detail. What isn’t any clearer is how much detail is too much detail. The video tells us that:

“success criteria would be considered over-detailed where the advice given directly shapes what pupils write by directing them to include specific words or phrases”

So we know specifying particular words is too much, but is it okay to use success criteria which include:

  • Use a varied range of sentence structures

Is it too specific to include this?

  • Use a varied range of sentence openers

What about…?

  • Use adverbs as sentence openers

There’s a wide gulf between the three examples above. Which of these is acceptable? Because if it’s the latter, then schools relying on the first will find themselves under-valuing work – and vice versa, of course. That’s before you even begin to consider the impossibility of telling what success criteria and other supporting examples are available in classrooms at the time of writing.

The video tries to help by adding:

“success criteria must not specifically direct pupils as to what to include or where to include something in their writing”

But all of those examples are telling children what to include – that’s the whole point of success criteria.

If I’ve understood correctly, I think all three of those examples are acceptable. But it shouldn’t matter what I think: if the whole system depends on what each of us thinks the guidance means, then the consistency necessary for fair and useful assessment is non-existent.

The whole issue remains a farce. Doubtless this year Writing results will rise, probably pushing them even higher above the results for the externally tested subjects. Doubtless results will vary widely across the country, with little or no relationship to success in the tested subjects. And doubtless moderation will be a haphazard affair with professionals doing their best to work within an incomprehensible framework.

And to think that people will lose their jobs over data that results from this nonsense!


The full video in all its 11-minute glory can be found at: https://www.youtube.com/watch?v=BQ-73l71hqQ

 

Teaching is complex – and that’s okay.

As another list of non-negotiables does the rounds, I find myself again in disagreement with those who would argue that a minimum baseline of expectations is a helpful or necessary thing. Unfortunately, like so many things in life, I don’t think we can distil what is a very complex operation down to a few simple ‘must-dos’. Not least because, as with all learning, teachers will be at very different stages of their expertise, and one size rarely fits all.

The problem with simple tick-list approaches is that teaching isn’t simple. It’s tempting to say that all lessons should begin with the Learning Objective being shared, but then we can all think of examples where that would ruin the wider structure of the lesson. It’s tempting to say that teacher talk should be minimised, but too often I’ve seen lessons where teachers, worried about time spent on the carpet, rush children off to a task they’re ill-equipped to tackle. It’s tempting to say that every lesson must include differentiated tasks, but then many of us have seen lessons where children are given work that is below their capability simply to show differentiation. Teaching is complex.

Some of the things that make for really excellent teaching are exactly the sort of thing you can’t tick off a list. I think that knowing your children is a key to great teaching and learning. Yes, some inspiring lectures can achieve great things without interaction of any sort, but for the most part, I know that I can teach my own class more effectively than I can an unknown group. But there would be no point in setting out a policy in my school that says you must know your children; that isn’t something you can tick off.

Equally, some of those things that seem straightforward conceal a whole level of complexity that doesn’t feature on the tick-list. We know that feedback can be highly effective in furthering children’s learning, but that could come in the form of written marking, or comments in lessons, or in the way the teacher reacts to off-the-cuff assessments from whiteboard activities. So we could add “You must give feedback” to a tick-list, but what would it mean?

The same is true of sharing Learning Objectives. Making children aware of what you intend them to learn is no bad thing. But what if you’ve picked the wrong thing at the wrong time? What if it doesn’t match the wider sequence? What if the task you’ve planned doesn’t really meet the needs of the learners, or the aims of the lesson? What if it’s something they already know? Sharing a Learning Objective is only going to be of any use if the objective is apposite and taught well.

One argument people make is that schools in difficult circumstances may require basic thresholds – Special Measures, perhaps, being an excuse for such approaches. But in my experience, as in any class, in any school in a category you will find a wide range of ability among the teachers. For those who are teaching brilliantly against the tide, reducing their craft to a mere tick-list may only serve to stifle their brilliance. Equally, for those who are genuinely finding teaching a complex challenge and failing to serve their children well, insisting on a list of gimmicks will not improve practice.

I have seen plenty of lessons – indeed, I’ve probably taught plenty – where a Learning Objective is shared, tasks are differentiated, children are engaged with active learning, peer-evaluation takes place, mini-plenaries are dotted about like confetti… and the net effect on learning is negligible. Equally, I know that some lessons might do none of those things and be just right for that group at that time.

If we really want to improve teaching and learning, no matter what the current standard, then we need meaty discussion about what we mean by that. For teachers who are struggling, they need to see good teaching in practice, preferably narrated by someone who can highlight its strengths; they need support to change their thinking.

For a teacher who really needs to improve their practice in the classroom, the damage a tick-list approach can cause is substantial. What if that teacher does everything that is demanded of him: his displays are beautiful, learning objectives are shared, children think-pair-share, tasks are differentiated… and yet still the lesson is poorly taught, or progress is limited by a lack of the required prior knowledge? How demoralising for that person to have spent hours refining exactly what you’ve asked, only to be told they’re still failing. Indeed, imagine the difficulty of trying to manage procedures for a teacher who is clearly ineffective, but is good at ticking the boxes you’ve set out!

Teaching is complex.

That’s not to say we shouldn’t try to articulate it. At my own school we have had time dedicated this year to discussing what we think ‘highly effective teaching’ looks like. We’ve discussed learning objectives, and differentiation, and feedback. But we’ve done so in a professional arena where we can unpick what we mean by those terms. We couldn’t reduce it to a simple tick-list, but we have identified some key areas that we recognise as important factors.

If a school genuinely has some very weak teachers, then those teachers need specific advice, coaching and support to improve. Good teaching can no more be reduced to a simple tick-list than can good Year 6 writing… and look where that’s got us!

On joining the Chartered College of Teaching

After overcoming a few stumbling blocks, I’ve finally joined the Chartered College of Teaching. I say finally not because of the few days’ delay (my bank apparently thought my signing up might have been a fraudulent use of my card. Do they know me at all!?), but because it strikes me that this is something that’s long overdue.

I’ve always been a member of a teaching union – aren’t we all? – but like so many teachers, that was in part for the protection offered. Unions are there to protect and improve pay and conditions; while they may dress their arguments up in pedagogical terms, the bottom line is the same. And that’s all well and good: that’s their job.

But that conflict also makes it very easy for the government to dismiss what teachers say through their unions – not least the more militant groups with their outlandish demands at conferences. The profession more than ever needs a clear conduit for its opinions and expertise.

But a professional body has to cut both ways. As well as conveying views from the profession to the wider world – from parents to the DfE and Ofsted – it must also offer something to members. I’m pleased to see that the College will provide members with access to educational research, but perhaps more importantly I look forward to a useful professional journal that will help do the job of disseminating that research in ways that can have an impact in classrooms. We’re a time-poor profession as it is, and few of us have time to wade through academic journals on a regular basis; an intelligent chartered college can be the medium through which teachers receive the very best of information on good practice – and also the very clearest of evidence to dispel the nonsense of the likes of Brain Gym and Learning Styles.

The key thing at this stage is to get people participating. If the college appears not to be the finished article, I’m hoping it’s because it isn’t. I hope, too, that that means teacher members will shape it.

So let me offer a few requests for Dame Alison Peacock and her team as she leads the College in its formative stages:

  • We need you to be brave, Dame Alison, on our behalf. Sometimes that will mean speaking truth to power; asking the difficult questions; putting politicians straight – saying the things we’re all thinking!
  • Focus on the classroom teachers more than the leaders. One of the toughest parts of the job is the solitude of the classroom. The College can be a way for teachers to get a sense of what is happening in other classrooms.
  • Remember the people that so many other organisations forget: the Early Years experts, the SEN schools, the sixth-form colleges, supply teachers, middle schools!
  • Put research and evidence at the heart of work to guide us and others, and be honest when the research doesn’t tell us enough to know.
  • Reach out across the profession, whatever teachers’ experience, across sectors, through the age ranges, the breadth of the country and those who aren’t yet convinced about the College: we’re stronger together.
  • (If truth be told, I’m not taken by the logo, but… maybe it’ll grow on me?)

If you think I’m right – or you think I’m wrong – perhaps you should put your own views across. Join the College at the start.


The Chartered College is currently signing up founder members, who must be teachers in schools, Early Years or post-16 settings: https://www.chartered.college/eligibility


National Curriculum Test videos

I’ve updated the videos I made last year to explain the KS1 and KS2 tests to parents. As there is an option about using the Grammar, Punctuation & Spelling tests in primary schools, there are now two versions of the video for KS1 (one with, one without the GPS tests).

Please feel free to use these videos on your school’s website or social media channels, or in parent meetings, etc. There are MP4 versions available to download.

Key Stage 2

Re-tweetable version:

Facebook shareable version:
https://www.facebook.com/primarycurriculum/videos/1311921482187352/

Downloadable MP4 file: https://goo.gl/b0Lo9v

Key Stage 1 – version that includes the GPS tests

Re-tweetable version:

Facebook shareable version:
https://www.facebook.com/primarycurriculum/videos/1311921482187352/

Downloadable MP4 file: https://goo.gl/jo18qk

Key Stage 1 – version for schools not using the GPS tests

Re-tweetable version:

Facebook shareable version:
https://www.facebook.com/primarycurriculum/videos/1311921482187352/

Downloadable MP4 file: https://goo.gl/xMDFSJ

On Knowledge Organisers

When Jon Brunskill recently agreed to share his work on Knowledge Organisers in primary school, I was excited to see what he came up with. I wasn’t disappointed, and I’m sure many others have been looking with interest. I think there’s a lot of merit in the model, but inevitably I think there is some refining to do.

I say this not as an expert – far from it: I've cobbled together just one Knowledge Organiser in my life and remain unhappy with it. However, having spoken briefly to Jon about his, I think we both agree that there is merit in unpicking the model further.

Firstly, with Jon's permission, let me share an image of the organiser (I highly recommend reading the accompanying blog before continuing further with mine!).

At first glance, it looks like a lot of content to learn. I think that's partly because most of us have spent a good many years teaching broad ideas, without expecting children to learn detail off by heart. There are also very few of us who could say, hand on heart, that we know all of this content well enough to recall it. But I think that represents the shift we need to make, rather than something to fear.

That led me to question the purpose behind a Knowledge Organiser. I haven't spent enough time thinking about them, and certainly not enough time using them, but when I have, I've usually considered one a vehicle for outlining the key information that I expect students to learn and retain for the longer term. Over longer units of work, this might include key ideas that are integral to later understanding, whether that's later in the school year or later in a child's education.

By way of illustration, let me share a knowledge organiser I constructed a couple of years ago for my Year 5/6 class:

[Image: kodraft.png – My first attempt at a Knowledge Organiser in 2015]

The differences are quickly obvious. For a start, mine is clearly based on a much wider period of teaching, and reads more like a basic revision guide than content provided in advance of a unit. That is perhaps also its biggest downfall. It's worth noting that it's something I tried once and didn't come back to.

But I think there may be a useful middle ground. In Jon's case, much of the content set out – particularly on the timeline – is useful for the purposes of writing an information text about the event itself (a task which Jon plans to do with his Y2 class). However, I don't think he expects those students to secure that detail in the very long term. Arguably, that brings the organiser closer to the cramming model of revision than to the more successful spaced practice approach.

Ruth Smith posted a comment on Jon's blog saying she could imagine the organiser being used as a prompt during writing. While I can see the merits, I do think that the risk then – as Jon would rightly say – is that we replace the value of knowledge with a reliance on someone or something else to do the work for you. That's not the aim here.

It leaves me wondering what the function of a Knowledge Organiser should be. I'm not persuaded of the value of knowing the date on which the crew left quarantine after the lunar landing. By contrast, learning the word 'quarantine' itself strikes me as highly valuable.

The question for me becomes one of later testing (and let me be honest: I'm only at the very beginning of this journey; don't for a second presume that I'm an expert – I'm a way behind Jon on this!). In a knowledge-rich curriculum, I think one of the key functions of a Knowledge Organiser is to set out the key knowledge that I want students to retain and that I will test for.

We know of the great merit of spaced testing as an aid to learning, and it strikes me that a Knowledge Organiser should set out the content that would likely later form part of such tests. In the context of Jon's organiser, I could see merit in testing much of the vocabulary, the date of the landing, and perhaps the names of the crew. However, I'd also want to include some wider context – perhaps a bit more detail behind the Space Race, mention of JFK's 1961 pledge, etc. Might these replace some of the less significant dates from 1969?

Of course, we're talking about 7-year-olds in Jon's context. They will lack much of the wider historical knowledge to place events in context, and so there is a risk of expecting too much. But equally, if we teach children that knowledge is to be learned, then ought we not to be training them to learn it for the long term?

The content I think* I'd like to see on a Knowledge Organiser is the detail that I would expect to use in a brief pop quiz a week later, but also in a mid-year test drawing on prior units, and again at the end of the academic year or in the first days of the following September. The risk is that using Knowledge Organisers to aim for short-term recall of detail that is later lost will develop a cramming ethos, rather than one of long-term storage of information.

What does this mean for Jon's example? I'm not sure. Maybe a separation of the content that he expects children to retain in the long term from information which would be useful in this context? There is certainly some merit in having this timeline clear in the child's mind as they are writing – not least because it helps to build a narrative, which is a great learning technique – but is it necessary for it to be stored in long-term memory? Indeed, is a two-week unit even long enough for such a transfer to be made?

Yet there is unquestionably information here which would be re-used in future that would allow such a long-term retention.

More thinking to do… but well worth doing, I think.


*I say I think, because I am not entirely sure that I won't think completely differently in six months' time.

If you haven’t already, I again recommend reading Jon’s original post here.