The Shakedown State: Fraud Detection and Predicting the Past
July 26, 2022
Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0
In December 2016, an Australian whistleblower warned The Guardian that a new automated system being rolled out to assess and recover debts from hundreds of thousands of the country’s most vulnerable families was fatally flawed.
The whistleblower worked for Centrelink, the government agency tasked with distributing a wide range of government payouts, including social insurance benefits such as old-age pensions, parenting allowances, and job seeker (unemployment) support. In 2015, the agency introduced what it called an “online compliance intervention” to enhance fraud prevention and debt recovery in the Department of Human Services.
The system, colloquially known as RoboDebt, proved disastrous for both social insurance beneficiaries and the government. It searched historical data back to 2010, identified where overpayments may have occurred, and moved to aggressively collect the “debts.” Nearly a million people received automated notices that told them they were overpaid public benefits in the past and demanded that they prove they did not owe the government money or repay it. Australia’s human services minister threatened to jail those who failed to pay.
The system was based on an averaging algorithm that took yearly earnings data from the Australian Tax Office and divided it by 26 to estimate biweekly income. Where the averaged figure disagreed with a claimant’s self-reported earnings, the system interpreted the discrepancy as an overpayment. For people with inconsistent wages, this shoddy methodology was a terrible way to estimate benefits eligibility. In effect, the online compliance intervention imputed debts that never existed.
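The flaw is easy to demonstrate. The following Python sketch (a hypothetical illustration, not Centrelink’s actual code, with invented figures) shows how averaging a yearly total across 26 fortnights manufactures “debts” for a seasonal worker who truthfully reported every dollar earned:

```python
# Illustrative sketch of RoboDebt-style income averaging.
# All figures and function names are hypothetical.

FORTNIGHTS_PER_YEAR = 26

def imputed_debts(reported_fortnightly, yearly_income):
    """Flag every fortnight where the averaged income exceeds what the
    claimant self-reported, treating the gap as an 'overpayment'."""
    avg = yearly_income / FORTNIGHTS_PER_YEAR
    return [(fortnight, avg - reported)
            for fortnight, reported in enumerate(reported_fortnightly, start=1)
            if avg > reported]

# A seasonal worker earns $26,000, all of it in 10 busy fortnights,
# and truthfully reports $0 for the 16 fortnights without work.
reported = [2600] * 10 + [0] * 16
debts = imputed_debts(reported, yearly_income=26000)

print(len(debts))  # 16 workless fortnights are flagged as "debts"
```

Even though the worker’s reported earnings sum exactly to the yearly total on file, the averaging step “predicts” steady income of $1,000 per fortnight and flags all 16 workless fortnights as under-reporting.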
From the United States, I followed the debacle and the organizing efforts against it by the Australian Unemployed Workers Union, the #NotMyDebt campaign, Victoria Legal Aid, and others. And I wondered: Is it happening here?
In October 2019, after nine months of interviews, research, and public records requests, I could show that it was. I reported in The Guardian that predatory policy changes and high-tech tools had turbo-charged government zombie debt collection in public service agencies across the country. The Illinois Department of Human Services, for example, was sending out an average of 23,000 notices a year claiming families had received too many food stamps or cash benefits in cases that stretched as far back as 1988. Iowa Workforce Development had sent out 20,000 notices of alleged overpayments of unemployment benefits in October and November 2018 alone.
Families were receiving overpayment notices for benefits they received a generation ago – debts that they often did not remember incurring and for which the appeals window had long closed. They were being told that if they didn’t pay up, the state would refer the debt to a private collection agency, or would send it to the federal Treasury Offset Program, which could withhold their tax refund, their earned income tax credit, their social security disability check, or their veteran’s retirement benefits.
The acceleration of government zombie debt collection was partly the result of legislative changes since 1996 that lifted statutes of limitations on government debt and expanded what kind of benefit overpayments can be collected. But it was also the result of new digital tools that allow states to go farther back into their records to search out inconsistencies and errors that might be interpreted as debt.
Automated decision-making is routinely touted by industry and public agencies as a way to prevent and deter fraud. Designers and proponents of digital fraud detection systems suggest that the “pay and chase” approach – paying out benefits and then uncovering abuse through investigation and frequent recertification – is no longer sustainable. They suggest instead a “predict and prevent” approach that uses data science to stop improper payments before they happen.
In other words, the cheerleaders for automated fraud-detection systems promise they’ll prevent abuse of government programs. In reality, these sophisticated analytic tools are often deployed to try to “predict” the past. And they regularly get even that wrong.
The Australian RoboDebt scheme was criticized by the media and economic justice groups, became the subject of several government hearings, and eventually faced a federal court challenge and a class action lawsuit, which found that many of the debts were incorrect and unlawfully calculated. In May 2020, the government announced it would scrap the program, promised to repay 381,000 wrongly issued debts totaling the equivalent of 525.5 million U.S. dollars, and cancelled remaining debts worth an additional $1.25 billion.
Closer to home, in December 2020, California’s Employment Development Department (EDD) hired a private company, Pondera Solutions, to review 10 million claims paid since the pandemic began. Pondera’s software, dubbed FraudCaster, flagged 1.1 million claims as “suspicious” and the agency stopped payment on them all, without notice, possibly breaching the Social Security Act.
In July 2021, Governor Newsom formed a strike team to investigate the growing backlog of 1.7 million unemployment claims. Their investigation found that at least 600,000 of the claims flagged by Pondera as fraud were in fact legitimate. Moreover, the use of this largely invisible system created a culture of fear and threatened long-held rights to due process. Summing up, the strike team concluded, “While it is certainly EDD’s job to fight fraud, it is also EDD’s job not to allow the fight against fraud to interfere with the delivery of benefits to legitimate claimants.”
In the wake of organized criminal exploitation of vulnerabilities in Covid-era programs such as pandemic unemployment assistance and the Small Business Administration’s paycheck protection program loans, panic about improper payments in public benefits has risen to a fever pitch. Critics and supporters of pandemic-era social spending from both sides of the aisle seem to agree on one thing: technology can save us.
But as RoboDebt in Australia, government zombie debt in the United States, and Pondera’s unemployment payment suspension in California suggest, more automation and more data do not necessarily lead to fewer improper payments. And automation errors can be costly: both to the government and to the people social insurance programs are supposed to support.
The extra barriers automated fraud detection creates for workers and families in crisis can be devastating. Succumbing to moral panic about Covid-era fraud can strengthen racist and classist narratives about the inherent criminality of the poor. And digital systems that can reach deeply into past data without rigorous safeguards for accuracy, transparency, and accountability can create a kind of legalized extortion – a shakedown state. In a time when the legitimacy of government is under greater scrutiny than ever, algorithmic pocket-picking is an investment our democracy can’t afford.