Why doesn’t the DWP collect data on the accuracy of decision making?

The Work Capability Assessment (WCA) is the test through which the Department for Work and Pensions (DWP) determines entitlement to Employment and Support Allowance (ESA). It was introduced in 2008 and has been a source of considerable controversy ever since.

The DWP outsourced the expertise it thought it needed to perform WCAs to a private company, Atos Healthcare, which in turn has recruited large numbers of healthcare professionals (HCPs) – a combination of doctors, nurses and physiotherapists.

Although Atos HCPs perform WCAs and make a fit-for-work (FFW) recommendation to the DWP, the DWP's own team of Decision Makers (DMs) makes the final ESA decision. There is an appeal procedure that culminates in the Tribunals Service (TS).

40% of decisions reversed on appeal

The controversy exists at every level, but the most contentious aspect of the process has been the proportion of FFW decisions subsequently reversed by the TS – generally accepted to be around 40%.

The number of errors is bad enough in itself, but the issue has been compounded by the fact that appeals can take up to 10 months to be heard.

During this period, ESA is still paid, but at a potentially much reduced rate. If the decision is reversed, any money owed by the DWP is backdated; if it is not, the DWP does not demand repayment.

The huge number of appeals has inundated the TS, which is spending c£60m p.a. to process them. All in all, a very unsatisfactory situation, largely brought about by the high first-time error rate.

One would therefore expect the right-first-time rate to be the definitive key performance indicator (KPI), through which the ongoing attempts to improve the process would be indisputably measured.

It is not a difficult calculation, but does involve a lengthy time lapse due to the long TS queue – another good reason for shortening it, but not a good enough reason for not closely monitoring this KPI.
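The arithmetic really is simple. Counting a decision as 'right' if it is either never appealed, or appealed and upheld, a minimal sketch of the KPI (all figures below are purely hypothetical, not DWP data):

```python
# Right-first-time rate: a decision counts as 'right' if it was never
# appealed, or was appealed and upheld. All figures are hypothetical.
def right_first_time_rate(total_decisions, appeals, reversed_on_appeal):
    upheld = appeals - reversed_on_appeal
    not_appealed = total_decisions - appeals
    return (not_appealed + upheld) / total_decisions

# Hypothetical period: 100,000 decisions, 25,000 appealed, 40% of those reversed.
rate = right_first_time_rate(100_000, 25_000, 10_000)
print(f"{rate:.0%}")  # 90%
```

On these invented numbers, a 40% reversal rate on appeals translates into a 10% overall error rate – broadly how a headline appeal statistic maps onto a right-first-time KPI.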

However, amid a wealth of published information and all sorts of claims of success by the DWP and Atos, it is conspicuous by its absence. A Freedom of Information request has confirmed that the DWP has not measured it, does not measure it and has no plans to measure it. One has to wonder why. A cynic might suggest they don’t really want to know.

Can you add anything to this? Please get in touch through the comments or email welfare@helpmeinvestigate.com

3 thoughts on “Why doesn’t the DWP collect data on the accuracy of decision making?”

  1. Your problem is: what exactly do you want them to measure in order to produce an ‘accuracy’ figure for decision making?

    At the moment the only figure that comes close is the appeal success rate, which the TS does publish. However, that is problematic as an overall accuracy figure because the sample is essentially self-selecting, i.e. it includes only people who choose to appeal. The number of appeals has indeed rocketed, but it is still a relatively small percentage of people who pursue an appeal.

    Another possibility would be to randomly select a sample of cases and examine the quality of the decision making, similar to what I understand they do to produce the fraud figures. The biggest problem with this approach is that what you effectively end up with is another person’s opinion on the outcomes of those decisions, which may be no better or worse than the original decision maker’s. That is not really a robust basis for deciding on success rates for decision making.
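    For what it’s worth, a random-sample check would at least come with quantifiable uncertainty. A minimal sketch using a normal-approximation binomial confidence interval – the sample size and error count here are invented for illustration:

    ```python
    import math

    # Estimate the decision error rate from a random sample of re-examined
    # cases, with a 95% normal-approximation confidence interval.
    # All figures are hypothetical.
    def sample_error_estimate(sample_size, errors_found, z=1.96):
        p = errors_found / sample_size
        margin = z * math.sqrt(p * (1 - p) / sample_size)
        return p, max(0.0, p - margin), min(1.0, p + margin)

    p, low, high = sample_error_estimate(sample_size=1_000, errors_found=100)
    print(f"{p:.1%} (95% CI {low:.1%} to {high:.1%})")  # 10.0% (95% CI 8.1% to 11.9%)
    ```

    The interval only quantifies sampling noise, though – it does nothing about the deeper weakness that the re-examiner’s opinion may be no more authoritative than the original DM’s.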

    Also, what do you then do with cases where decisions are found to be incorrect? Do you stop a person’s benefits if an award is later judged to have been wrong? Believe me, there are all sorts of legal issues with that. And if you do take that approach, shouldn’t you also restore benefits to those whose refusals are later found to be unjustified?

    As you can see it’s a bit of a practical minefield and that’s probably why they don’t do it.

  2. Firstly, I should perhaps declare my credentials – 3 WCAs, all wrong first time and all corrected on appeal. All that changed was that each time, the DWP realised its error more quickly – not really the sort of improvement everyone is looking for. Each time, I have painstakingly retraced the ‘audit trail’ to understand why such a glaring error was made, not once but on three consecutive occasions. The process is so poor that it didn’t even say ahead of WCA#3, “Hey, we’ve screwed this guy up twice now, so let’s make sure we get it right this time”, let alone make more of an attempt to achieve the same aim ahead of WCA#2. Statistically, this must mean something against a backdrop of continued improvement as regularly announced by Messrs Harrington and Grayling. Who, I wonder, is conning whom?

    Everyone (including the illustrious Professor Harrington and Chris Grayling) talks about the importance of getting it right first time – but what is the point if it is a parameter that cannot be measured? If they believe that is the case, they should come clean, say so, and focus on another means of assessing success or failure. I’d perhaps have a different view (though not by much) if they had articulated the possible problems as you have here, but they have not.

    The only management cliché that has stuck with me over the years is “if you can’t measure it, you can’t manage it”, and generally speaking, where there is a will there is a way. If this measure is impractical, then what is? There has to be something that offers a measure of success or otherwise.

    A ‘right’ decision is simply one that is either made and accepted without appeal, or is made, appealed and upheld.

    I accept that statistics are thrown around fairly liberally to prove whatever point is under debate, rarely with an accurate definition of what they really represent. Sadly, the ensuing arguments detract from the real problem – another good reason for having a meaningful, objective, universally agreed KPI.

    The true error rate is a bit under 10% of all ESA decisions made, which is far too high given the potentially terminal consequences of an error. Nor have the errors been confined to the margins – the extreme errors are well documented, but never explained. Measurement, tracking and remedial action obviously become much easier if appeals are resolved quickly. Any measure based on sampling is bound to fail in the face of endless disputes over sample accuracy, and I for one never did quite grasp Student’s t-test. On the subject of statistics, there is a well-established concept called “discriminant analysis”, with standardised techniques for assessing accuracy based on both false ‘positives’ and false ‘negatives’, and the Coalition has enough statisticians at its disposal to use it.
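    To illustrate that false-positive/false-negative framing with standard confusion-matrix arithmetic – a sketch only, with invented counts, treating a fit-for-work finding as the ‘positive’ outcome:

    ```python
    # Error rates in the discriminant-analysis sense: separate rates for
    # false positives (wrongly found fit for work) and false negatives
    # (wrongly found unfit). All counts are invented for illustration.
    def error_rates(true_pos, false_pos, true_neg, false_neg):
        total = true_pos + false_pos + true_neg + false_neg
        return {
            "false_positive_rate": false_pos / (false_pos + true_neg),
            "false_negative_rate": false_neg / (false_neg + true_pos),
            "overall_error_rate": (false_pos + false_neg) / total,
        }

    rates = error_rates(true_pos=55_000, false_pos=8_000,
                        true_neg=35_000, false_neg=2_000)
    print(rates["overall_error_rate"])  # 0.1
    ```

    The point of splitting the two rates is that the human costs are asymmetric: a false FFW finding and a wrongly withheld award are very different kinds of error, and a single headline figure hides the balance between them.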

    There is no objective reason why the Government cannot set a target, only a political one. It would expose the level of collateral damage (to people and costs) it regards as acceptable, which clearly it is too embarrassed to declare. Again, they should come clean and get this debate out of the way first.

    Personally, I believe the overall ‘model’ is wrong and a 10% error rate is probably the best the current arrangement will ever achieve, but that’s a topic for another day.

    Finally, the matter of what ESA payments should apply through all of this is complicated, and another discussion for another day. All I do know is that the problem is much easier to resolve with more right-first-time decisions and a short appeal cycle. If the numbers are small enough, you can afford to be a bit generous.

  3. Some more on this issue of performance measurement through a more recent FoI request:

    Dear Department for Work and Pensions,

    On Page 68, paragraph 273 in the publication “Social Justice: transforming lives”
    (http://dwp.gov.uk/docs/social-justice-transforming-lives.pdf), Iain Duncan Smith states that one of the cornerstones of success will be to “agree clear parameters for success”. Which such parameters were agreed and are in place to measure the success or otherwise of the WCA? Please also provide the data history for each parameter.

    The DWP’s reply: “There are no specific targets set for the WCA, other than for it to be as fair and accurate as possible.”

    It appears Ministers can say whatever they like to impress, even lie outright.

Comments are closed.