Partner Blog: Benefit take-up and administrative data: an example from the US


Researchers are using administrative data to develop and test practical solutions to social policy problems. In this guest post Mary-Alice Doyle provides an example from her work at J-PAL North America. This blog post builds on a previous post highlighting 8 international and homegrown examples of where data analysis is focused on addressing vulnerability.

Pennsylvania: Administrative data used to identify eligibility for SNAP and encourage take-up

In the US over 36 million people receive the Supplemental Nutrition Assistance Program (SNAP), more commonly known as food stamps. SNAP is a means-tested benefit available to low-income households, worth about $240 per household per month, on average. In 2019 a two-person household would be eligible for SNAP if they had a combined income below roughly $36,000 (or around £26,000).

Recent reports ('Hundreds of thousands of food stamp recipients have a new reason to panic') suggest that up to 700,000 recipients could lose this benefit because of new rules limiting SNAP assistance to a maximum of three months. In total, the program provides around $60bn in support.

But many households with incomes below the cut-off for eligibility forego this important source of purchasing power. In this article, Mary-Alice describes a US-based project she was involved in that asked: is there a way to identify these households, let them know they are eligible, and help them to apply?

While working at J-PAL North America, a lab based at MIT, I helped manage a research project aimed at answering these questions. Like many other projects supported by J-PAL North America, we used administrative data and engaged community-facing partners to pursue policy-relevant research questions.

The project was a collaboration between a team of academic economists (Amy Finkelstein and Matthew Notowidigdo) and Benefits Data Trust (BDT), a non-profit based in Philadelphia which helps people navigate the benefits system.

Both BDT and the academic research team were interested in learning whether a low-touch, low-cost intervention could help more people claim their benefits. To test this, we first needed to find individuals who were eligible for SNAP but not claiming it.

We identified these individuals using data on enrollment in other income support programs. The idea is that if someone is eligible for another means-tested program, they are likely eligible for SNAP too. We worked with the state Department of Human Services to identify people who fit these criteria, so that BDT could contact them. They identified just over 31,000 individuals, focusing on people who were aged over 65 and living in Pennsylvania.
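The matching idea above can be sketched in a few lines. This is a simplified illustration, not the project's actual pipeline: the program name and the identifiers are made up, and the real work involved secure data sharing with the state Department of Human Services rather than in-memory sets.

```python
# Hypothetical sketch: people enrolled in another means-tested program
# (a made-up roll of IDs) who do not appear on the SNAP roll are
# flagged as likely eligible for SNAP but not claiming it.
other_program_ids = {101, 102, 103, 104, 105}  # enrolled in another means-tested program
snap_ids = {102, 105}                          # already receiving SNAP

# Set difference: in the other program, but not in SNAP.
likely_eligible_non_claimants = other_program_ids - snap_ids
print(sorted(likely_eligible_non_claimants))  # → [101, 103, 104]
```

In practice each "ID" would be a person matched across two administrative datasets, which is where most of the negotiation and data-security effort described later comes in.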

BDT sent two-thirds of these individuals a letter to tell them they were likely eligible for SNAP, with information about how to apply. The remaining one-third did not receive a letter. Individuals were randomly assigned to one of these groups – this allowed us to be confident that any differences in outcomes between the two groups would be because of the letters, and not because of other pre-existing differences between individuals.

To better understand the barriers to SNAP enrollment, the letter-receiving group was split in two: half received ‘information only’ – just the letter, and instructions to contact the Department of Human Services to apply. The other half received ‘information and assistance’ – the letter, and instructions to contact BDT’s dedicated call centre for help with the application.
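The design described above amounts to randomly splitting the sample into three roughly equal arms: one-third control (no letter), one-third 'information only', and one-third 'information and assistance'. A minimal sketch of how such an assignment might be implemented, with a fixed seed so the split is reproducible (the function and group names are illustrative, not the study's code):

```python
import random

def assign_groups(individual_ids, seed=42):
    """Randomly split individuals into three equal arms, mirroring the
    two-thirds letter / one-third no-letter design: the letter group is
    itself split evenly between the two letter variants."""
    rng = random.Random(seed)
    ids = list(individual_ids)
    rng.shuffle(ids)  # random order makes the split a random assignment
    third = len(ids) // 3
    return {
        "control": ids[:third],
        "information_only": ids[third:2 * third],
        "information_and_assistance": ids[2 * third:],
    }

groups = assign_groups(range(31_000))
print({name: len(members) for name, members in groups.items()})
```

Because assignment is random, any later difference in enrollment rates between the arms can be attributed to the letters rather than to pre-existing differences between individuals.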

Nine months after sending the letters, we worked with BDT and the state Department of Human Services to find out whether the letters ‘worked’ in getting more people enrolled in SNAP.

The Department of Human Services provided data to BDT on which of the original 31,000 individuals had signed up for SNAP and which had not. BDT then provided the research team with a version of the dataset. To maintain individuals' privacy, BDT shared only the information we needed for the analysis, and not personal identifiers like name, address, or other details. Using this data, the research team was then able to estimate the impact of the letters on SNAP enrollment.

The letters worked. Of the group who did not receive the letters, 6% signed up for SNAP. Of those who received the ‘information only’ letters, 11% signed up. And of those who received the ‘information and assistance’ letters, 18% signed up. (More detail on these results is available in the academic paper.) Admittedly, even with the letters, most people didn’t enroll. But over 1,000 people started receiving additional income in the form of SNAP benefits as a result of this low-cost intervention.
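A back-of-the-envelope check of the 'over 1,000 people' figure, assuming the ~31,000 individuals were split into three roughly equal arms (the exact group sizes are an assumption here; the academic paper has the precise numbers):

```python
# Extra enrollments attributable to the letters: each letter arm's
# sign-up rate minus the 6% who signed up with no letter, applied to
# an assumed ~10,333 people per arm.
n_per_arm = 31_000 // 3   # assumption: three equal arms

baseline = 0.06           # control group: 6% signed up anyway
info_only = 0.11          # 'information only' letter: 11%
info_assist = 0.18        # 'information and assistance' letter: 18%

extra = (info_only - baseline) * n_per_arm \
      + (info_assist - baseline) * n_per_arm
print(round(extra))  # roughly 1,750 additional enrollments
```

Even under these rough assumptions, the incremental sign-ups comfortably clear 1,000, consistent with the claim above.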

Without access to administrative data, this research might not have been possible. We would have needed to survey households first to find out whether they were eligible for benefits, spending a lot of time surveying households who turned out to be outside our focus. Then, after sending the letters to eligible households, we would have had to conduct another survey to find out whether the letters worked. The cost of this research would have been prohibitively high, and as a result we probably wouldn't have the answers.

As it was, the research wasn't easy. I joined the project partway through and built on the hard work my colleagues had already done in getting the project off the ground. We needed to negotiate and navigate the process for sharing data between three organisations, investigate the source and meaning of each of the fields in the data, and establish robust data security procedures to ensure we protected individuals' privacy at every step. But the real-world impact of this type of research is well worth the effort: we now know how to help thousands more low-income households get the support they are eligible for.

Republished from Policy in Practice. Read the original blog post.