Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold

A case study in how one state automated false fraud accusations against more than 34,000 unemployed people

Illustration of computer pointer fingers accusing a group, with most considered guilty.
Illustration: iStockphoto/IEEE Spectrum

Perhaps next month, the 34,000-plus individuals wrongfully accused of unemployment fraud in Michigan between October 2013 and September 2015 will finally hear that they will receive some well-deserved restitution for the harsh treatment meted out by the Michigan Integrated Data Automated System (MiDAS). Michigan legislators have promised to seek at least $20 million in compensation for those falsely accused.

This is miserly, given how many people experienced punishing personal trauma, hired lawyers to defend themselves, saw their credit and reputations ruined, filed for bankruptcy, had their houses foreclosed on, or became homeless. A sum closer to $100 million, as some advocate, is probably warranted.

The fiasco is all too familiar: a government agency wants to replace a legacy IT system to gain cost and operational efficiencies, but alas, the effort goes horribly wrong because of gross risk mismanagement.

This time, it was the Michigan Unemployment Insurance Agency (UIA) that wanted to replace a 25-year-old mainframe system. The objectives of the new system were threefold and reasonable. First, ensure that unemployment checks went only to people who deserved them. Second, increase the UIA’s efficiency and responsiveness to unemployment claims. And third, through those efficiency gains, reduce the UIA’s operational costs by eliminating more than 400 positions, about one-third of the agency’s staff. After spending $47 million and two years on the effort, the UIA launched MiDAS and soon proclaimed it a huge success [pdf]: under budget, on time, and uncovering previously missed fraudulent unemployment filings.

Finding Fake Fraud

Soon after MiDAS went into operation, the number of people suspected of unemployment fraud grew fivefold compared with the average found by the old system [pdf]. The newfound fraud, and the fines imposed, generated an enormous amount of money for the UIA, swelling its coffers from around $3 million to more than $69 million in a little over a year.

The cash windfall was due in part to the harsh penalties imposed on the accused, such as a 400 percent penalty levied on the claimed fraud amount [pdf], the highest in the nation.

Further, once a claim was substantiated, the state could immediately garnish a person’s wages and federal and state income tax refunds, and make a criminal referral if payments weren’t forthcoming.
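To make the 400 percent penalty concrete, the arithmetic works out as follows. This is only an illustrative sketch: the dollar figure and function name below are hypothetical, and only the penalty rate comes from the reporting above.

```python
def total_demanded(alleged_overpayment: float, penalty_rate: float = 4.0) -> float:
    """Total owed under a MiDAS-style assessment (illustrative only).

    A 400 percent penalty means the penalty is four times the alleged
    fraud amount, so the total demanded is five times that amount.
    """
    penalty = alleged_overpayment * penalty_rate
    return alleged_overpayment + penalty

# Hypothetical example: a $4,000 alleged overpayment balloons to $20,000.
print(total_demanded(4000.0))  # 20000.0
```

At five times the alleged amount, even a modest disputed claim could become a debt large enough to trigger wage garnishment or bankruptcy, which is consistent with the personal damage described above.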

While the UIA was patting itself on the back for a job well done, unemployment lawyers and advocates noticed a huge spike in appeals by those accused of fraud. In instance after instance, the accusations of fraud were subsequently thrown out on appeal. Digging deeper, the lawyers and advocates discovered [pdf] that a large number of fraud accusations were being generated algorithmically by MiDAS, with no human intervention and no possibility of reviewing the accusation, as the legacy system had required.

In addition, the MiDAS-generated notices of fraud that claimants had to respond to were designed in such a way as to almost ensure someone inadvertently would admit to fraud. MiDAS also accused some people of fraud even though they had never received any unemployment. Furthermore, MiDAS was apparently basing some of its findings on missing or corrupt data. In effect, MiDAS was built upon the assumption that anyone claiming unemployment insurance was trying to defraud the UIA, and it was up to claimants to prove otherwise.

All the failings of MiDAS are too numerous to recount here; I suggest you read the many excellent published stories, such as these (here and here) from the Detroit Metro Times and this one from the Center for Michigan, for more details and links to other articles that will leave you shaking your head in disbelief at the callousness shown by the UIA.

Even though 92 percent of fraud claims were being overturned on appeal in administrative court, the UIA stubbornly defended MiDAS (and all the “surplus money” it was generating to cover state spending) against internal warnings that something was wrong with how MiDAS was determining fraud. However, public and political outcry finally forced the UIA to admit that there was indeed a significant problem with MiDAS, especially its “robo-adjudication” process and its lack of human review. The UIA stopped using MiDAS for purely automated fraud assessment in September 2015, after pressure from the federal government and the filing of a federal lawsuit against the agency that same month.

The federal lawsuit against the state concluded in January 2017 with the UIA finally apologizing for the false claims of unemployment fraud. A thorough review found that from October 2013 to September 2015, MiDAS adjudicated, by algorithm alone, 40,195 cases of fraud, with 85 percent of those resulting in incorrect fraud determinations. Another 22,589 cases that involved some level of human review in the fraud determination showed a 44 percent false fraud rate, an “improvement,” but still an incredibly poor result. Interestingly, but not surprisingly, the UIA has stubbornly refused to explain exactly why MiDAS failed so spectacularly, or why it ignored all the early warning signs that something was radically amiss.
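The review’s figures imply roughly how many people were falsely accused under each mode of adjudication. The short calculation below simply multiplies the case counts by the reported error rates; the rounding to whole cases is my own.

```python
# Case counts and error rates reported by the post-lawsuit review.
auto_cases, auto_error_rate = 40_195, 0.85          # algorithm-only adjudication
reviewed_cases, reviewed_error_rate = 22_589, 0.44  # some human review involved

# Implied numbers of incorrect fraud determinations (rounded to whole cases).
auto_false = round(auto_cases * auto_error_rate)
reviewed_false = round(reviewed_cases * reviewed_error_rate)

print(auto_false)      # 34166 -- matches the "34,000 plus" figure cited above
print(reviewed_false)  # 9939
```

Note that the roughly 34,000 wrongful accusations cited at the top of this article correspond to the algorithm-only cases; adding the human-reviewed cases pushes the implied total of false determinations well past 44,000.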

While the UIA says it sympathizes with those it falsely accused of fraud, and has supposedly returned all the fines it had collected, the UIA has also strenuously fought against the class-action lawsuit [pdf] brought against it for the personal and financial damages those phony accusations created. The UIA strongly lauded a state appellate court ruling in July 2017 dismissing the lawsuit because those wrongly accused missed the deadline for making their compensation claims.

Given that the UIA stonewalled all attempts to discover the depth, breadth, and reasons behind the fraudulent fraud accusations, the ruling may be legally correct, but it is morally ludicrous. The ruling, which is being appealed to Michigan’s Supreme Court, so shamed the state’s legislators and governor that they agreed to changes to the state’s unemployment law and, at least in principle, to the creation of a MiDAS victim compensation fund. We’ll see next month whether one actually is created.

Michigan is Not Alone

The MiDAS fiasco is not the only case in which robo-adjudication has been used to hunt for potential benefits fraud. The practice is alive and well in Australia, where the government’s Centrelink program rolled out a similar approach in 2016, with similar results. Tens of thousands of benefit recipients have received letters from Centrelink demanding that they prove they hadn’t claimed benefits they didn’t deserve, with more than 20 percent receiving the notices in error or with debt amounts significantly in excess of what they actually owed. The Australian government has insisted from the start that Centrelink’s automated system is working as intended, though according to at least one report it works poorly by design, as a way to cut operational costs, if not to collect money it isn’t legally owed. When a parliamentary committee recommended that the robo-adjudication process be halted, the government refused to hear of it.

In a thoughtful paper titled “Cyberdelegation and the Administrative State,” California Supreme Court Justice Mariano-Florentino Cuéllar points out that a real problem with bureaucratic decisions made purely by algorithm is the hesitancy of human overseers to question the results the algorithm generates. Justice Cuéllar cites the U.S. Veterans Administration’s automated disability rating system, implemented to reduce paperwork and personnel costs and to increase productivity, which significantly overestimated the disability benefits veterans should have received compared with what a human rater would have approved. In fact, of 1.4 million algorithmically made rating assessments, only 2 percent were later overridden. The same hesitancy to see anything wrong with automated decisions occurred with both MiDAS and Centrelink.

As algorithms take on ever more decisions in the criminal justice system, in corporate and government hiring, in credit approvals, and the like, it is imperative that those affected be able to understand and challenge how those decisions are made. Hopefully, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems will help ensure that the risks of automated decision-making systems are not glossed over in the quest for their benefits, which can potentially be immense. I don’t think any of us would want to end up in the same type of nightmare robo-adjudication process that those caught up in MiDAS sadly did.

