Comparing against targets (PT.2)

June 16, 2017
By: Emma Maltby, Data Consultant

With Results Day fast approaching, we will soon be analysing our exam grades against targets at school, faculty, subject and individual student level.  As discussed in my earlier blog, ‘Getting Results Day Ready’, preparation is key! With this in mind, now is a good time to ensure that the number of targets tallies with the number of results you are expecting.  Let’s take a look at some data to see how important this is.

Below we can see Attainment 8, Progress 8 and EBacc data taken from the SISRA Analytics Headlines Dashboard (you can also use Headlines Charts for more detailed analysis). I have compared the Y11 Spring data against school targets.  All of the school’s timetabled qualifications have been included in both datasets.

Imagine we have just found out that 6 students will take a GCSE in Polish.  The Head of MFL expects them all to achieve a grade B (this qualification remains unreformed for Summer 2018*).  I have added these grades to our Y11 Spring collection so that it mirrors the number of entries we expect to see in the exams dataset.  As a result, the headline figures see a small but positive effect.  For some schools, though, this could be the difference between a negative and a positive P8 figure!
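To see why six extra grades nudge the headline, here is a minimal Python sketch of the averaging involved. The cohort size, total points and the point value for a grade B are all illustrative assumptions for this sketch, not official DfE figures:

```python
# Illustrative sketch: how adding six extra grades can nudge a cohort's
# Attainment 8 average. All figures below are invented for illustration.

def attainment8_average(total_points, num_students):
    """Cohort Attainment 8 = total points across students / cohort size."""
    return total_points / num_students

cohort_size = 180
total_points = 8_100          # hypothetical cohort total (average 45.0)

before = attainment8_average(total_points, cohort_size)

# Six students gain a grade B in Polish; suppose a B fills an empty slot
# and is worth 5.5 points (an assumed value for this sketch).
extra_points = 6 * 5.5
after = attainment8_average(total_points + extra_points, cohort_size)

print(f"Before: {before:.2f}  After: {after:.2f}")
```

A shift of a fraction of a point looks small, but as noted above it can be enough to tip a school's Progress 8 figure from negative to positive.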

Is there anything else we need to consider? Yes: for complete accuracy, we should also ensure that any datasets we compare against have the same number of grades uploaded.  For the 6 exam grades I have entered, I should also enter 6 target grades (all entered as B grades) so that I can make accurate comparisons.

See how this has affected the Attainment 8, Progress 8 and EBacc target figures:

This is extremely important both at qualification and class level, particularly if the data forms part of a teacher’s performance management.  Here we are looking at the cumulative pass for the MFL faculty without the Polish grades:

Once the Polish grades have been added to the Spring collection, the data will appear in two tables, one per grade method, because French and Spanish are reformed while Polish remains unreformed.  A great way to analyse the data for a faculty in one table, regardless of the different grade methods, is to use the OPTIONS functionality and select ‘All A8 Quals’ as the Grade Type.

Using the functionality mentioned above ensures you get maximum benefit from the summary rows.

When we factor the targets in too, see how the figures change again.

As Heads of Department and Class Teachers can be judged on performance (e.g. the percentage of students achieving 9-7, 9-5 and 9-4 grades), ensuring all non-timetabled results and targets are added can have a positive effect on their data.  It could be the difference between receiving a pay increase or not!  The school benefits from improved headlines too.

Always ensure the figures tally for all other datasets you compare against – whether it’s FFT estimates, performance management targets, CATs, MIDYIS or YELLIS grades.  Also, when you are modelling targets or producing forecasts, a complete set of grades is essential for accuracy.

A simple check can be made in Analytics to see whether your datasets tally: compare the grades data for an assessment collection against your targets and check the ‘Total Grades’ column.  Your colleagues may just thank you for it!
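Analytics performs this check for you, but the underlying logic can be sketched in a few lines of Python; all the subject names and counts below are invented for illustration:

```python
# A rough stand-in for the 'Total Grades' check: count the grades per
# subject in an assessment collection and in the targets dataset, then
# flag any subject where the totals do not tally.

from collections import Counter

spring_grades = ["French", "French", "Spanish", "Polish", "Polish", "Polish"]
target_grades = ["French", "French", "Spanish", "Polish"]

spring_counts = Counter(spring_grades)
target_counts = Counter(target_grades)

mismatches = []
for subject in sorted(set(spring_counts) | set(target_counts)):
    graded, targeted = spring_counts[subject], target_counts[subject]
    if graded != targeted:
        mismatches.append(subject)
        print(f"{subject}: {graded} grades vs {targeted} targets - check!")
```

Any subject flagged here would show differing figures in the ‘Total Grades’ column, which is exactly the sort of discrepancy worth chasing up before Results Day.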

Do also check that the datasets you are comparing use the same DfE Rules, Attainment 8 and Value Added estimates – we shouldn’t try comparing apples with pears!

Another common mistake is qualifications not being correctly nominated as EBacc subjects.  Here we can see the effect of Computer Science on some of the key headlines when it is incorrectly set up, versus when it is correctly set up as a special.

RE is another subject that is often incorrectly nominated, in this case as a humanity.  This has the opposite effect to Computer Science on the EBacc basket.

Hopefully, having read my earlier blog as well as this one, you are now feeling more confident about Results Day and the accuracy of your data.  Why not read part 3 of this Results Day series of blogs? It looks at troubleshooting discrepancies between your figures and the DfE’s.



This post was originally published on 16th June 2017 and was last updated on 31st May 2018.