Originally published November 14, 2023 by Route Fifty
President Joe Biden’s recent executive order on artificial intelligence specifically cited government benefits programs as an area where the technology could be helpful, although experts warned that proper implementation would be crucial to mitigating the risks.
The order directs the Secretary of Agriculture to issue guidance to public benefits administrators in states, localities, tribes and territories “on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs.”
The emerging technology is seen by many as a way to help speed up an application and decision process that can take weeks or months. State officials in Georgia, for instance, are seeking federal approval to use AI to clear up persistent backlogs under the Supplemental Nutrition Assistance Program, or SNAP.
Trooper Sanders, CEO of the nonprofit Benefits Data Trust, which advocates for streamlined access to government assistance, said that while AI could help unwind some of the “administrative muck” in benefits systems, leaders must not see it as a silver bullet.
“What you don't want to do is just run headlong into sprinkling some AI fairy dust, and [thinking] the magic is going to happen,” Sanders said. “Artificial intelligence can either fundamentally improve the customer experience, improve the efficiency of administering these programs, improve the dignity around these programs, or fundamentally at scale, set those key things back.”
Biden’s executive order stipulates that any programs using automated or algorithmic systems must ensure that program access is maximized for recipients; that appeals of benefits determinations and customer support are handled by a human being; that audits can show how a system arrived at a given decision and leave open the option of remediation; and that algorithmic systems used in benefits programs achieve equitable outcomes.
In separate guidance to the heads of federal agencies on the use of AI in government, the Office of Management and Budget warned that the technology is assumed to negatively impact people’s rights if it makes decisions “regarding access to, eligibility for, or revocation of government benefits or services.”
Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, said on a recent webinar that a series of lawsuits over the last decade informed that guidance. In those cases, determinations and fraud detection had been automated “without good human recourse or alternatives, and led to people being stripped of their benefits.”
Givens said as a result, there is virtually no chance of AI being the sole way to determine an applicant’s eligibility for government benefits.
“We are not going to replace human autonomy or human agency in any way,” echoed Abhi Nemani, senior vice president of product strategy at software company Euna Solutions. “We're going to augment it.”
That augmentation, which other government leaders said should be the priority when embracing AI, could help detect the fraudulent claims that spiraled out of control during the COVID-19 pandemic and have continued since. AI could alert applicants to missing documents, help government employees scan documents during claims adjudication and, with responsible data sharing, show whether someone is eligible for additional programs.
Sanders said bringing technology to benefits programs is “nothing new,” even though progress may have felt slow at times. That has included moving SNAP away from physical food stamps to a plastic card that is automatically reloaded, and moving towards allowing electronic signatures on documents rather than requiring “wet” signatures on paperwork that must be mailed.
That effort has accelerated in recent years, Sanders said, starting with Biden’s 2021 executive order on transforming customer experience and service delivery in a bid to restore trust in government. Combined with the recent AI executive order, it could be seen as a “wave cresting” as governments assess how to make programs work, treat people with more dignity and reduce the administrative burden.
With states and localities responsible for administering benefits programs like SNAP, the idea of embracing AI may be a frightening one, especially if it comes in the form of an unfunded federal mandate.
But Sanders said that instead the federal government should set the rules of the road for states, both in terms of the AI tools they can use and how they are allowed to administer benefits. States in particular do not want to fall afoul of federal rules, so while it is incumbent on them to be a “smart customer,” Sanders said the federal government must provide “clarity.”
Nemani said the federal government could even take a “more directive role” with setting AI standards, as it does with the Federal Risk and Authorization Management Program, which provides a standard approach to the cybersecurity and safety of cloud products. Having the federal government’s clout would “dramatically pick up adoption,” he said.
Meanwhile, it will be incumbent on governments at all levels to manage the risks associated with AI, especially potential biases that may discriminate against certain groups. A Royal Commission report in Australia—the highest form of public inquiry in the country—excoriated its government for an automated debt assessment and recovery scheme known as Robodebt that led to false or incorrect debt notices and was the subject of almost universal criticism.
The report described Robodebt as a “crude and cruel mechanism” that “made many people feel like criminals.”
Nemani emphasized that it will be a tough balance for governments as they embrace technology while keeping humans at the center of the decision-making process. And while benefits programs may be a major source of frustration among residents, that friction is there by design, to guard against fraud.
“Government is designed to be human,” Nemani said. “The very notion is that we want people making these decisions. It has friction and needs to have friction to work.”