Five myths about the old National Curriculum levels

So here we are, four weeks into the new National Curriculum, and what's everyone doing with assessment? In primary schools, it seems that the ostrich approach is the most popular. The temptation to stick with what we know is understandable, but I want to clear up some common myths about the old levelling system.

Myth 1. The government set out the assessment programme for schools

Plenty of teachers are concerned that the DfE is no longer going to tell schools how they should assess progress during the Key Stage. In fact, it never did. There was never any statutory requirement for schools to use levels, much less sub-levels, to track progress during the academic year. The only statutory requirement was to assess using whole levels at the end of Key Stage 2 (and, admittedly, later using 2c/2b/2a at KS1). Everything else schools did, using optional tests, APP and the like, was not legally required. Of course, Ofsted expected schools to be tracking progress, and levels worked as a way of doing it, but it would have been perfectly legal to create another system. So the legal situation hasn't changed; all that has changed is the clarity that schools are now free to choose their own approaches to suit their own curriculums.

Myth 2. Parents understand them

I think this is a common misunderstanding of what it means to understand the levels. It’s probably true to say that parents had come to understand the progression of sub-levels (i.e. that 4c comes above 3a, but below 4b) and perhaps even the expected ranges (i.e. that a Y4 child should be around L3), but there were very few parents who had any idea about what that meant in terms of attainment. Even fewer could take anything from the information to support their child’s learning.

Myth 3. They aid transition

I can see how this one came about. In the absence of any other information, I’d be grateful to receive sub-levelled information if a new child joined my class. But receiving a child who had been graded as a 4b writer told me relatively little. It doesn’t explain whether their use of in-sentence punctuation is secure; it doesn’t explain if they can paragraph appropriately; it doesn’t give me any clue about the strength of their spelling. In essence, it just tells me that they’re broadly average in my Y6 class. Perhaps we could all save time by just passing on below/above or around average as indicators.

Equally, for the transfer between schools we know only too well how little agreement there was about levels. It’s beyond a myth to suggest that receiving schools took anything more than cursory note of levels provided at transfer.

Myth 4. They helped measure progress

It’s true that levels, and their evil sub-level counterparts, gave us a nice comfortable system for implying progress. However, we’ve already noted the discrepancies between key stages. And the suggestion that the levels provided some sort of smooth indication of equally-sized steps is laughable. Certainly on tests we were able to divide each level’s mark range into three sub-levels, but consider the relationships between them at KS2: in maths the difference between the 3a and 4c thresholds could be around 10 or 11 marks; in reading the same thresholds were often as little as 2 marks apart! How could these possibly demonstrate equal steps of progress? Even within reading itself, the gap from 3a to 4c could be 2 marks, yet the gap from 4c to 4b would be 6!

Certainly the steps could give an indication of tracking towards outcomes at KS2, but that’s not the same as being a reliable measure of progress!

Myth 5. They can be adapted for the new curriculum

This is perhaps the most dangerous of the myths. Because of the widespread misinformation from the DfE that the new expectations would be broadly in line with the current level 4b requirements, many schools have presumed that the levels could be retained and ‘tweaked’ to provide an adequate assessment system.

One look at the fractions requirements of the new Y6 curriculum makes the flaws in this argument clear. The list below shows the new requirements, with an indication of the approximate level from the old curriculum, based on APP statements:

  • use common factors to simplify fractions; use common multiples to express fractions in the same denomination (L5/6)
  • compare and order fractions, including fractions >1 (L5)
  • add and subtract fractions with different denominators and mixed numbers, using the concept of equivalent fractions (L6)
  • multiply simple pairs of proper fractions, writing the answer in its simplest form (L7)
  • divide proper fractions by whole numbers (L7)
  • associate a fraction with division and calculate decimal fraction equivalents for a simple fraction (L6)
  • recall and use equivalences between simple fractions, decimals and percentages, including in different contexts (L5/6)

The same approach could be applied to comparing the new grammar requirements, or looking at expectations in arithmetic strategies. The reality is, the new curriculum is substantially different not only in its organisation and content, but also in its expectations. And while it may be true that a similar proportion of children are expected to “pass” the new tests as currently score enough points to achieve 4b or higher, it is clearly not the case that knowing the content that would currently attain 4b would be enough to “pass” the KS2 tests in 2016.


3 thoughts on “Five myths about the old National Curriculum levels”

  1. Ian Lynch 27 September 2014 at 5:11 pm Reply

    I have been asked several times for the levels and sub-levels associated with the National Curriculum for the KS3 Computing Baseline test. No such levels exist. Computing is new, so there never were any levels and never will be any from central government. If you have an SLT still hung up on levels, the easiest solution is to use the test to give them what they want. 🙂

  2. Jainelouise 28 September 2014 at 3:16 pm Reply

    A very compelling argument, but we need to hear what the alternatives are.

  3. Ian Lynch 28 September 2014 at 3:40 pm Reply

    One alternative is to use statistics based on what the population can actually achieve, rather than guesses about descriptions that are not consistently clear or precise. This is more scientifically rational because “expectations” are based on empirical data, not some arbitrary government declaration, and we will be able to see whether progress is taking place and measure the uncertainty in the measurements. This is the rationale for the NAACE/TLM baseline testing project in Computing. Over 30,000 pupils have taken the same test, with about 2,000 more each day, providing a big enough sample across Y7–Y9 to set a baseline for KS3. Subsequent tests every six months will determine pupil progress, department progress, and GCSE and equivalent performance. For formative assessment we are providing cloud-based evidence management and progress tracking. It’s all free. We are starting the development of a similar provision for KS2 next, then maths and science. Whether we can do all that for free is a little uncertain, but any cost will be marginal. Our aim is to allow teachers to exercise professional autonomy in deciding how to teach, by minimising the time needed for collecting progress data and doing the analysis centrally so each school does not have to do it individually.
