Next, before finally digging into the data in detail and filtering out some numbers, it is key to understand how the data is constructed (before we deconstruct it).
If you look at it from the outside, predicting how many students will ‘non-complete’ near the beginning of the academic year is a difficult task, but these predictions must be made so that funding payments can be proposed and made each year.
Universities submit a HESES (Higher Education Students Early Statistics Survey) return in December, which reports the number of student registrations for the year along with an estimate of non-completions, derived from the previous year's non-completion figures. This estimate is then negotiated and agreed upon.
Still with me? Ok.
HESA (Higher Education Statistics Agency) is then sent the year-end figures, which include the actual non-completion rates; these are analysed for integrity and signed off in mid-October. They are then used to derive the figures for the following year, once any gaps in the numbers have been considered.
This procedure, although extensive and rigorously checked, can leave some gaps in the numbers. That is one of the first things we want to investigate in the figures we have received from BCU and Birmingham University.
Another priority in picking apart the data is finding out to what extent different universities suffer from mid-year drop-outs, and what the difference would be if 100% of students stayed committed to their course.
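As a rough sketch of that comparison, the snippet below computes how many more completers each university would have in the hypothetical 100%-completion case. The university names, student counts, and rates are placeholders; the real BCU and Birmingham University data may be structured quite differently:

```python
# Hedged sketch: the gap between actual completion and a hypothetical
# 100% completion rate, per university. All figures are invented.

universities = {
    "University A": {"students": 12_000, "non_completion_rate": 0.09},
    "University B": {"students": 18_000, "non_completion_rate": 0.03},
}

for name, u in universities.items():
    completers = u["students"] * (1 - u["non_completion_rate"])
    # Students "lost" compared with the 100%-completion scenario
    shortfall = u["students"] - completers
    print(f"{name}: {shortfall:.0f} fewer completers than the 100% case")
```

Once the actual figures are in, the same calculation (in pounds rather than headcounts) should show how the funding exposure differs between institutions.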
Finally, what we can derive from that is whether universities that take on more students who are likely to non-complete (non red-brick institutions, not trying to be biased here, but the data should help prove or disprove that kind of assumption) are penalised more heavily than universities that tend to have higher completion rates.
And that’s for another blog post.