
Performance Descriptors on hold?

I’m not known for my generosity towards the department, but let me state straight off that I’m impressed that it has had the courage to do what’s right in respect of the Performance Descriptors.

After a lacklustre start, the consultation eventually received 880 responses, and the message was overwhelming. At least three-quarters of respondents said the descriptors were unclear or confusing, inappropriately spaced and difficult to understand. Even the free-text response box – the one most likely to be left empty – drew around 300 complaints that the descriptors were not fit for purpose.

But we feared the worst. As Warwick Mansell reported in the Guardian last month, we knew that there were doubts about the descriptors, but the worry was that they might be pushed through anyway. So it should be cautiously welcomed that the descriptors will not be rolled out in their current form – at least, not now.

Of course, the matter remains of what is to be done. And there I still have doubts.

We had the announcement yesterday of a commission to support schools with assessment after levels. There are a few questions here. Firstly, it isn’t clear whether the commission is intended to look at primary assessment alone, or at both primary and secondary. The press release title suggests the former; other comments suggest the latter.

Secondly, who is to be on this commission? Nick Gibb described it as a teacher-led commission, but the only appointment so far publicised is a former secondary headteacher, and one who’s been retired for 8 years at that! I don’t hold with the view that headteachers are not teachers, but it’s certainly fair to say that very few headteachers deal with the day-to-day business of assessment in the classroom. If the commission is made up of apparent cronies – or worse, remains secretive as so often in the past – it will be hard to persuade teachers that sensible decisions are being made.

Thirdly, how does the commission’s work fit in with the “assessment experts” who will advise the department on how to move forward with teacher assessment at the end of each primary Key Stage? And who are those experts to be? Will they be the same ones who wrote the flawed descriptors in the first place?

Alongside this, the government response appears to suggest in places that the problems with Performance Descriptors are due not to failings in the descriptors themselves, but to failings in teachers’ understanding of them. That is unequivocally not the case. The descriptors were confused, unhelpful and a genuine obstruction to good assessment and teaching – going some way to contradict the government’s stated intention to avoid excessive pace of change in schools. I would like to see confirmation that the performance descriptors in their current form will definitely not be implemented.

Another issue was raised today by @GiftedPhoenix, relating to the fairly recent proposal from the Workload Challenge project to ensure longer lead-in times for major changes:

[tweet https://twitter.com/GiftedPhoenix/status/570907293908393984 hide_thread=true]

So, the department is not off the hook yet.

But let me say again: they ought to be congratulated for at least having the courage to take a foot off the pedal. Few things guarantee error and difficulty more than haste. We need something sorted as soon as possible – but no sooner!


The Challenge for the DfE with Workload

Who’d be a politician, eh?

You get the blame for everything, and yet relatively little power to do much about it.

It seems that the response to the Workload Challenge has not been received with great joy… but then, was it ever going to be? The results of the survey speak for themselves. Over 40,000 teachers responded, and the two biggest drivers of workload according to those responses? Ofsted and school leaders, neither of which is directly within the control of the department.

The two most often-mentioned tasks that added to workload? Excessive data and excessive marking. No prizes for guessing who the main drivers of those excessive demands are.

One has to ask what people were really looking for from the DfE in response to these challenges.


The reality is that the DfE had tasked itself with a mission of improving something that it really couldn’t control. It’s true that they’ve worked with Ofsted to take some small steps to try to alleviate that problem, continuing in an existing vein; but when it comes to school leaders, the department has gone to some lengths over recent years to stop micro-managing them.

The sad truth for teachers is that the vast majority of the excessive workload we suffer is caused by school leaders, trying to dance to the tune of an inadequate and inconsistent inspectorate.

So what of the proposals from the response? When taken in the context of what the department can actually control, it’s a mixed bag.

Some things to welcome:

  • Minimum lead-in times for major changes – one of the worst things about the current government’s approach has been its endless rush to change things, without any thought about the impact on schools, or any preparation itself for implementation.
  • No changes to examination subjects during a course cycle – it’s frankly a disgrace that this would ever even be in doubt, but certainly a relief to see that it’s here.
  • Commitment to improved Quality Assurance for Ofsted reports – a major issue, although QA for the whole Ofsted process is probably just as necessary.
  • Focus on coaching of headteachers – too many of the workload demands in our schools are caused by ineffective school leaders trying to cover all bases with paperwork. We need to support good headteachers more effectively, and challenge weaker ones.
  • Work with the EEF to link research to more practical advice about implementation – feedback has become king on the back of research, but is too often interpreted as “more marking”. We need more direction for schools – especially those with weak leaders – on what these things look like.
  • DfE taking a closer look at data collection and analysis challenges – it’s an on-going challenge, again driven by Ofsted and weak leaders, so evidence of effective practice that isn’t unmanageable should be welcomed.

Some missed opportunities:

  • The minimum lead-in times have too many caveats, and significantly leave out the key element of assessment; assessment is such a driver in schools now that the lead-in time should cover it along with all other parts of policy. I’d also have liked to see a longer period, and much higher expectations that no changes are made that affect students within a key stage. No change is that important educationally; it’s only politics that forces the rush.
  • It’s a shame that there isn’t a role for Ofsted in providing good practice evidence on the manageability of workload; the department can say all it likes about workload, but until the inspectorate is singing from the same hymn sheet, schools will still feel compelled to produce more and more paperwork to satisfy whichever of the random selection of inspectors they might be faced with.

So, no, the department won’t get praise from every quarter – they never would have. But we’ve made some small steps of progress, and it seems that there is a genuine understanding that this is a serious issue that needs tackling. As professionals we have a duty to begin to get our own house in order, but certainly as a member of the Teacher Reference Group I’ll also be pushing for workload to remain on the department’s agenda, and to see more changes in the future.

At least we’re heading in the right direction.

 

Fooling the Reception baseline

My thoughts on a brief discussion tonight about whether or not Reception teachers will cheat on the baseline tests to depress results.

Only it’s not as straightforward as that, is it?

Doubtless somewhere somebody will cheat: it’s the nature of large populations, sadly. But cheating is relatively rare at all levels of examination. What is more significant in this case is the careful use of the rules.

Firstly, there’s the (somewhat crazy) act of selecting a test. Schools are entitled to choose the test that they think is best for them. We’ve seen endless articles and politicians complaining about the ‘race to the bottom’ from GCSE exam boards; why would reception tests be any different? And why would school leaders do anything other than try to select the test that would cause their pupils to achieve the worst results? After all, this is the baseline against which future outcomes will be measured.

And once the test is selected, when do you carry it out? After the first few weeks, once children are settled in the environment and familiar with the staff and able to show what they can do? Or might a headteacher be tempted to push for an earlier assessment, before those possible frustrating factors have been removed? Maybe the first day they’re through the door?

And in what circumstances? We sit KS2 tests in silent rooms, but might distractions be an advantage in this case? Maybe it’s better that the child can’t fully hear the tablet because of the other noises in the room?

Of course, the timings are bound to be flexible, so perhaps a little rush would help? After all, it’s no skin off our noses if they don’t quite get the chance to show what they can do.

And the benefit of the doubt? Not sure if they got that phoneme right through knowledge or luck? Better to err on the side of caution and say not for now. After all, if they did, that’s a little bit of progress made already on the data spreadsheet.

But why would Reception teachers do any of that? Sure, the stakes are high for schools now – not least after the declaration of “all out war” from the current government, but as Sam Freedman points out:

But what Sam has missed – and what the DfE seems so often to misunderstand – is that the vast majority of teachers are not so mercenary as to disregard the wellbeing of their colleagues. Most primary school teachers consider themselves to be part of a collaborative community where we’re all working towards educating all of the children in our care, not in competition with one another; the average teacher in a Reception class is much more concerned about the relationships with their colleagues and workplace than with the latest wheeze from the department; and the typical primary school teacher doesn’t see the data set from children in another year group as ‘somebody else’s problem’. So yes, 7 years is a long time, but for a teacher in a Reception class, what other factors are there to consider?

Lower baseline results might help them to show more progress over the Reception year – a vital high stakes factor now, given that we have had performance-related pay forced upon us.

Lower baseline results might also help to demonstrate the challenges we’re working with to outside agencies, inspectors and others who scrutinise our work.

Lower baseline results might help to satisfy the headteacher who fears for his job in the current high stakes environment that is worsened by outbursts like those recently from the Tory party.

Lower baseline results will at some point help the school to demonstrate the progress that is made by all of the staff working together in the building – and for many, 7 years is a perfectly reasonable time to imagine still being in the same building.

And who benefits from higher scores? Certainly not the child, nor the teacher herself, nor her immediate colleagues, nor the families with whom we work.

All of the factors that lead to coaching and boosting and targeting in Year 6 (about which secondary schools complain endlessly, yet which primary schools feel compelled to do) are going to have similar (if inverse) effects in Reception. So indeed, it’s true that very few Reception teachers would feel pushed to cheat on the tests. But if it were as simple as not cheating, then we wouldn’t have half the trouble we already have with results transferred between primary and secondary schools.

The only difference is that this time we’re mixing up 4-year-olds in our data games.


With apologies for any typos – it’s late, and I never meant to get caught up in this debate anyway!