Tracking ≠ Assessment

There’s been a lot of talk about assessment since the announcement, back goodness-knows-when, about the scrapping of National Curriculum levels. However, to my mind, a vast amount of it – almost certainly the majority – has been not so much about assessment as about tracking.

Let me be clear at the outset that I think you need both. However, one of my major complaints about the current system of National Curriculum levels has been that its use as a tracking tool has long since superseded its purpose as an assessment tool.

It’s perhaps useful to re-visit some simple definitions of the two terms – simple enough to be taken from the Google definitions:

To assess: to evaluate or estimate the nature, ability, or quality of (someone/something).

To track: to follow the trail or movements of (someone/something).

Inevitably, the way that Ofsted works has meant that schools have been forced to use their assessments in the form of National Curriculum levels to demonstrate that they are tracking progress towards the end-of-key-stage expectations. However, in doing so we have all but divorced the act of assessment from the processes of teaching and learning.

As we approach the new curriculum, and new expectations of assessment, I want to argue again for the need to separate these two processes. The end-of-primary and end-of-secondary assessment processes seem likely to change, and quite probably to become less criterion-referenced. As such, the easy choice for assessment and tracking would be to use scaled versions of the end-of-KS2/KS4 tests throughout all year groups, and judge progress accordingly. In theory, a student in Year 7 could score as little as 1% on a test series designed for Y11. It would then be easy to judge progress each year and to track it accordingly. It would be easy to collect data on national averages, to make comparisons between schools, to feed data into Ofsted inspections and to share results with parents in a form they could reasonably understand.

It would do nothing to support teachers in their assessment of what has been learned and what must be taught.

The problem with the drive towards the tracking of progress is that with each step towards clear data for these broad purposes, we lose valuable information at the small scale.

For assessment to be useful and meaningful, it needs to tell students, teachers and sometimes parents what it is that a single child can or cannot do. Levels were never very good for this, since they were designed to be broad. Consequently, sub-levels and APP were created to try to fill the void – essentially becoming mini-markers on the tracking scale, but still doing too little to guide teaching and learning.

For assessment to work, it needs to be directly linked to the taught curriculum. Since even with the new National Curriculum we don’t yet have a prescribed teaching order, a national assessment framework could only provide a broad-strokes judgement suitable for tracking. In schools, we need assessment processes which are directly linked to the taught curriculum that allow teachers to judge how well their students have learned what has been taught. Current national tests (optional and statutory) do not serve this purpose. A child in Y5 might not have covered all of the KS2 curriculum content which could come up on a test, but a test level does not differentiate between that which has not been taught, and that which has been taught but not yet grasped.

To take a classroom example, a child can quite feasibly achieve a Level 4 on national curriculum tests, or even through APP assessment, without knowing their tables up to 10×10, despite this being a requirement of both the APP criteria and the National Curriculum attainment target. What is more, that same child can continue to make progress to Level 5 and beyond. Indeed, some students who manage well in many areas of mathematics can continue to appear to be making progress on tracking scales, despite never confidently knowing these key things in the curriculum.

In the absence of assessment, it is perfectly possible for a student to never have this key need identified.

Despite all that we know about the importance of some key aspects of subjects – not just mathematics – we continue to build our progress-measuring systems on criteria relating to tracking, rather than assessment.

I note this particularly because this evening, in a Twitter discussion, Sam Freedman indicated his desire to have a system which would allow him to compare his child’s progress/attainment with that of others. The problem here is that systems which allow that inevitably lack the nuance that meaningful assessment entails. A child might well be achieving Level 5, or be within the expected range for his age, or score 101 on a scaled score – but none of that information gives away the truth about whether he/she can quickly recall 6 × 7.

That’s not to say that such systems cannot have their place, but rather that if we are to be even slightly serious about employing the most effective aspects of feedback and assessment for learning, which have been proven to be so beneficial, then tracking tests cannot be expected to underpin them.

If anything positive is to come out of the government’s announcement of the scrapping of levels, I hope it is that we begin to take the original advice of the curriculum expert panel more seriously and address assessment as a tool which identifies what each student can, or cannot yet, do, rather than how close they are to achieving a particular grade in a number of years’ time!

“We believe that it is vital for all assessment, up to the point of public examinations, to be focused on which specific elements of the curriculum an individual has deeply understood and which they have not.”

p50, The Framework for the National Curriculum
A report by the Expert Panel for the National Curriculum review
December 2011


26 thoughts on “Tracking ≠ Assessment”

  1. primaryblogger1 19 January 2014 at 9:18 pm

    Reblogged this on Primary Blogging.

  2. Ian Lynch 19 January 2014 at 9:52 pm

    You might be interested in the assessment provocation paper I wrote for the Computing Experts Group for the forthcoming meeting at BETT2014. http://ow.ly/2ajycw It’s more targeted at secondary but the principles would also work for primary. In summary, rewrite the POS as statements of competence that can then be used for assessment for learning and tracking progress through the POS. Use annual tests across a representative sample of schools to estimate the relative position of the student in terms of percentile in the cohort. This can then translate to forecasting grades on an annual basis for e.g. GCSE. If 20% get A grades in the national exam and you’re in the top 20% on the annual test, you are on track for a grade A. For those that want it we have cloud-based evidence management, tracking and reporting for anyone who wants to go into that detail, but it’s not necessary to start out like that.
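
To make the percentile-to-grade forecasting idea above concrete, here is a minimal Python sketch. The grade distribution figures and the function name are illustrative assumptions, not part of any actual scheme or qualification.

```python
# Illustrative sketch only: the grade shares below are invented, and the
# function is hypothetical. It mirrors the idea that a student's percentile
# on a sampled annual test maps onto the grade achieved by that slice of
# the national cohort.

GRADE_SHARES = [("A", 0.20), ("B", 0.25), ("C", 0.30), ("D", 0.15), ("E", 0.10)]

def forecast_grade(percentile_from_top: float) -> str:
    """Map a cohort position (0.0 = very top, 1.0 = bottom) to a forecast grade."""
    cumulative = 0.0
    for grade, share in GRADE_SHARES:
        cumulative += share
        if percentile_from_top <= cumulative:
            return grade
    return GRADE_SHARES[-1][0]  # anything beyond falls to the lowest grade

print(forecast_grade(0.18))  # top 20% of the sample -> "A"
print(forecast_grade(0.55))  # middle of the cohort  -> "C"
```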

    • Michael Tidd 19 January 2014 at 10:13 pm

      Thanks for your comments, Ian. I can certainly see some aspects of what you’re referring to. Although, of course, we won’t have any national assessment for ICT at primary level at all. That’s partly why I think schools need to lead on their own assessments, since then they are more likely to link to curriculum in both directions.

      • Ian Lynch 19 January 2014 at 11:08 pm

        The only downside for secondary schools of doing their own thing with assessment is that the assessment testing will not inform progress towards the dreaded league table points in the national context unless they can fit their results into a statistically representative sample of schools. I only happen to be doing Computing first because it has a very different POS to ICT. I think science will be next. In fact the government could have used sampling rather than everyone doing SATS. Getting a random representative sample of 2000 children would give you accuracy to about 2%. So about 1 child from every 10 primary schools. That would tell them the national picture and whether overall schools were getting better or worse. Snag is individual schools would not know their contribution.
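
As a rough check on the figure of a sample of 2000 giving accuracy to about 2%, here is a small sketch using the standard margin-of-error formula for a proportion, assuming simple random sampling at a 95% confidence level; the function itself is purely illustrative.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A representative sample of 2000 pupils gives roughly +/- 2.2 percentage points.
print(round(margin_of_error(2000) * 100, 1))  # -> 2.2
```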

  3. misshorsfall 19 January 2014 at 10:03 pm

    Great post, Michael – thank you! I think this has helped to clarify in my mind the whole discussion around assessment in the new curriculum that’s going on right now. I have to say that the difference between assessment and tracking hadn’t quite occurred to me, but separating out these two things makes an awful lot of sense.

    • Michael Tidd 19 January 2014 at 10:16 pm

      Glad it was useful, misshorsfall. I think we have the idea that assessment=tracking drilled into us because the current system makes us use the same tools for both.

  4. Chris Scarth 20 January 2014 at 11:32 am

    Great post! We’ve just conducted a survey on assessment in the new NC and positively the most overwhelming desire from respondents was to see how to engage pupils with their own learning journeys. Developing your own curriculum, formative assessment & tracking is a lot for a primary school to fit in. September 2014 will be an interesting milestone and I wonder how many schools will be prepared.

    On a separate note (apologies for this method of comms – it is the only channel I have for you!) – are you coming to BETT this week? Rising Stars and us (Classroom Monitor) are putting on a panel discussion and Andrea has recommended you? It is 23rd Jan at 10:30AM around assessment in the new NC and I think you could be a great addition to the panel if you are keen? You should have my email from submission 🙂

  5. Chris Chivers (@ChrisChivers2) 20 January 2014 at 1:19 pm

    Hi Michael,
    This is interesting from many points of view.
    I do think there are differences between tracking and testing, or AfL and AoL. Equally there’s a difference between PoS coverage and what a child knows.

    That Levels and APP became bureaucratic in some schools cannot be denied, but, having taught before 1987 and seen the difference that levels made to teacher expectation through analytical use and personalised application of levelness descriptors rather than level grades, I think the issue is more to do with inappropriate use than with their existence.

    I think there is much embedded within levelness, and I stress that rather than levels, as descriptors of anticipated progress. Given to children as challenge, they become prompts for teacher-child conversation. Evidence in independent use becomes assessment data, in my view.

    There’s a significant difference between AfL and AoL, in that, while it can be seen that each is part of a continuum, AoL can equate in some minds to just testing, which in itself has narrow interpretations. I have written several posts on the Inclusionmark website about AfL this past year, as I have explored the teacher mind-set through student teachers and their mentors, in theory one developing, one developed. It’s not always the case and there’s quite a lot of poor practice in the name of Assessment (with a capital A).

    My own thinking is moving a stage further, to looking at what “good” might look like in the new world. We have clues in levelness, so annotated portfolios of exemplar material would seem to me to be a significant way to go this academic year. If 4b is to become the new 6-7 transfer expectation, the steps to achieve that point would need to be captured and described.

    I would worry if criteria disappear from discussion, as any coherent mark scheme should be based on criteria. If norm referencing is applied afterwards, that inevitably disadvantages learners at the pass threshold, unable to gain the extra mark.

    I’d also worry if 4ness depended on a view of perfection. To get a first class degree, there are clear criteria and a pass mark of 70%, not 100%. Best fit allows progress to continue, not a block. It’s the teacher role to note the areas for continued need and to address these appropriately.

    Children will not change to match a framework; they will retain their individuality and long may that be the case.
    Best wishes,
    Chris

    • Ian Lynch 20 January 2014 at 2:06 pm

      The thing is that no-one is forced to do bad practice 🙂 Flexibility is a double-edged sword. Using open-source, cloud-based software it is possible to streamline the National Strategies and use AfL to underpin AoL. Provide evidence that is self-assessed/peer-assessed and confirmed and fed back on by the teacher/assessor. If the coursework is assessed in this way it can contribute to both AfL and AoL. We have Special Schools doing this using the P-scale criteria, which can be broken down as far as 20 subgrades to each level if the school wants, but they can use any steps they think appropriate in their circumstances. In KS4 optionally an exam can be used for summative grading. This is the basis of three new GCSE equivalents I have had agreed with the DfE to count in the new performance figures. In computing and design and technology, a level 1 qualification can be used on exactly the same basis for KS3. Well, no need to do the qualifications, but it provides a better and more seamless progression route to Level 2. I’m expecting brighter kids to start level 3 in Y11. I reckon probably the top 10% of brightest students are wasting their time in Y11 since about 20% achieve the highest grades.

  6. teachingbattleground 20 January 2014 at 10:10 pm

    Reblogged this on The Echo Chamber.

  7. […] 9: More from @michaelt1979, this time on assessment post-national curriculum levels: https://michaelt1979.wordpress.com/2014/01/19/tracking-%e2%89%a0-assessment/ […]

  8. Alison 26 January 2014 at 7:22 pm

    If the new NC PoS can be partitioned into manageable statements to use in class for observation assessment, that will support AfL. I have tried with maths Years 1 & 2. Using the EYFS emerging, expected, exceeding worries me, as a child could be emerging in Yr 1 and again in Yr 2 ad infinitum…. How do we show progress that is less/greater than expected? Any assessment system will need to link well to the data needed for accountability (tracking) if it is to be useful in the wider arena. I would suggest a point score system, eg: if a child is currently achieving <20% of the Yr 1 PoS they would be scored as 0.2 and fit the statement of 'just emerging'. A child achieving between 20-40% is emerging and would have a point score of 0.4, etc, so the child achieving 80-100% would be at expected. (The steps in between can be called 'strongly emerging' and 'developing' or something similar.) I think this system links well to how often you might assess (teacher assessment, not constant testing!) eg: Oct, Dec, Feb, April, June, is able to show achievement at/above/below expected and is able to show progress at/above/below expected. And can be used for data analysis to identify groups, underachievement, relevant interventions, etc.
    So a child achieving at expected for year 3 would have a point score of 3.0 regardless of their age. If anyone thinks this has legs I'll carry on developing my thoughts further!!
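
One possible reading of the point score scheme above, written out as a short sketch purely for illustration; the band boundaries come from the comment, but the function and its handling of exact boundary values are assumptions.

```python
def point_score(year: int, pct_of_pos_achieved: float) -> float:
    """One reading of the scheme above: convert the percentage of a year's
    PoS a child has achieved into a point score, so that 80-100% of the
    Year 3 PoS gives 3.0. Band boundaries follow the comment ('just
    emerging' < 20% = 0.2, 'emerging' 20-40% = 0.4, and so on); how exact
    boundary values are treated is an arbitrary choice here."""
    bands = [(80, 1.0), (60, 0.8), (40, 0.6), (20, 0.4), (0, 0.2)]
    for lower_bound, band_score in bands:
        if pct_of_pos_achieved >= lower_bound:
            return (year - 1) + band_score
    return float(year - 1)  # defensive fallback for out-of-range input

print(point_score(1, 15))  # -> 0.2  ('just emerging' in Year 1)
print(point_score(3, 90))  # -> 3.0  (at 'expected' for Year 3)
```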

    • Ian Lynch 26 January 2014 at 8:50 pm

      For primary I should think building a bank of exemplary work of children from the POS across the attainment range would enable individual teachers to compare what their students are doing with what others are doing. The first purpose is to establish what is reasonable in terms of achievement in the context of the POS to consider it has been covered. It will take time to establish such data, but it will take longer if we start tomorrow than if we start today. Thus you can measure progress through the POS at the required level. Beyond that (and this is probably easier in secondary) you can have short graduated tests that, if taken by a statistically representative sample, will tell you where in the population the individual currently is. If that position stays the same throughout their school life they are making similar progress to the rest of their cohort. The only difficulty is the need to get schools to cooperate to share exemplar work and to take common tests and feed results back to a pool. There would be no incentive to “cheat” the tests because doing artificially well one year is going to show decline the next, since we are looking for progress in relation to the cohort, not just an absolute measure beyond getting through the POS at a satisfactory level.

      My view is that it would be better to get the profession to take the initiative on this rather than wait for the DfE to do it. The reason being that it would be easy to turn the testing into an expensive rigidly controlled SAT style exercise when all that is needed is the normal progress testing most schools do in-house in any case. The progress measurements need to be used for informing teachers, individual students and parents, not for government to collect data about individual schools. There are plenty of other ways to gauge school performance and the effect of trying to do it with this data will very likely destroy its usefulness.

    • Michael Tidd 27 January 2014 at 7:16 pm

      That’s certainly a potential model, Alison. As you rightly say, it is inevitable that schools and external agencies will end up wanting some form of number-crunching measure.
      My concern is that we risk pretending that all things are equal. For example, if you set key objectives for Y1 that require two different skills, is the child who has mastered one making the same progress as one who has mastered the other?
      The risk of numerical measures is always that they mask the important details of what children can or cannot do.
      All that aside, I certainly would be very interested in seeing development of your ideas – particularly when considering how easy it would be to apply them to English. The new curriculum is very explicit about expected outcomes in Maths and even Science to an extent, but the English curriculum is much vaguer. Schools are going to have to consider how they transfer what they know from APP, frameworks, etc. to make for measurable steps.

      • Alison Clarke 27 January 2014 at 7:42 pm

        Thanks for your thoughts Michael. I don’t know that we can get past the fact that if we are to measure progress there has to be a numerical assimilation. I understand your concern that not all statements are equal in value, that will always be the case when section in gout the curriculum. I will have a look at the English PoS and see how it develops.

  9. Alison Clarke 27 January 2014 at 7:43 pm

    ‘Sectioning out’, rather than a reference to gout!

  10. […] Assessment is such an important part of the job that we do, but I do fear that too often we end up prioritising assessment for assessment’s sake, or for other reasons, rather than focussing on the key purpose of assessment: improving learning. My main bugbear in this respect is the dreaded need for tracking. Tracking is really important, and vital for schools if they are to be able to ensure that students make good progress over longer periods of time. However, tracking is of very little use to me as a class teacher. I have no interest in whether Abigail is now a 3a or a 4c in Writing. What matters to me is exactly what she can and can’t do – and those numbers don’t tell me that! I’ve written more about getting this balance right here: Tracking ≠ Assessment. […]

  11. […] Tracking is not the same as assessment. However, tracking progress is essential in schools now, and so any assessment system must allow schools to identify how its students are progressing towards national expectations for the end of the key stage. This is particularly challenging at the moment in primary schools where the expectations of end-of-key-stage assessment have not yet been made clear. However, through some collation of recorded information in an assessment scheme, it ought to be possible to create some sort of data-based indication of progress across the cohort, as well as for specific groups. […]

  12. […] acknowledges that tracking is not the same as assessment, but contends that the ability to track progress is an essential component of a useful assessment […]

  13. […] would agree that too often assessment has driven the curriculum. I have written before about how Assessment is not Tracking. The current system of levels does conflate the two, and too often it’s the latter that has […]

  14. […] making assessment both meaningful and manageable, and avoiding the temptation to focus solely on tracking through tests. I’ve also spoken at events suggesting that we should draw on the model of Key […]

  15. Katherine 24 July 2014 at 11:28 am

    This struck a chord with me. I did very well in maths, and was working on the highest workbook level in the class in primary school and then was in the top set in high school. When I got to Year 9 the teacher suddenly told my parents that my mental maths was atrocious. I didn’t know my tables, couldn’t manipulate numbers swiftly in my head etc. I’d still managed to make good progress and was on track for an A at GCSE, but I certainly wasn’t meeting that Level 4 criterion about knowing my tables! (My concerned parents sent me off to Kumon – now mental maths is one of my strongest points).

  16. shafattack 30 July 2014 at 7:34 pm

    Tracking relates to formative and assessment to summative. If you record and track all learning objectives, your pupils’ areas of weakness are identified. This can then be used to produce a summative result for that subject matter alone. This can then be used to calculate an overall level after a series of subject matters are recorded. The first summative will not be as accurate as the last over a year; however, once this is pointed out, it is useful for isolating areas of weakness, providing intervention targeted at specific learning objectives, and showing progress to a degree.

  17. Ian Lynch 30 July 2014 at 8:27 pm

    The first summative will be as accurate as the last at that point in time. It’s simply that less learning has taken place. The only levels that matter now are the levels in the national qualifications frameworks. Since these are fine-tuned with grades, there is little need for more than that. You can track progress through the POS and summatively assess portions of it along the way, and then once complete get a final value for summative assessment that tells you whether the pupil is ready to start the next level. It doesn’t need to be at a set time, just when the teacher judges the pupil ready to move to the next qualifications level. Level 1 is broadly NC L4/5, Level 2 is GCSE A*-C and Level 3 is A level. Keep it simple and don’t divert teachers into unnecessarily complicated measurements that are basically not meaningful over short timescales or precise enough to justify the fine levels assigned to them.

  18. […] the commission is clear about the role of assessment. I have repeatedly revisited the mantra that Tracking is not the same as Assessment. The difference here is particularly significant given the role of headteachers on the commission. […]

  19. Formative assessment using SIMS 18 March 2015 at 9:28 am

    […] Tidd, from his Tracking ≠ Assessment […]
