Artificial Intelligence for Social Impact: 5 Guideposts from Our CEO


In November, our CEO Trooper Sanders gave the keynote address at a summit exploring data science for social impact, hosted by the Social Policy Institute at Washington University in St. Louis. The event brought together social sector colleagues with diverse roles and levels of experience to connect and learn new strategies for using data in ways that advance equitable outcomes and impact.  

Trooper currently serves on the National Artificial Intelligence Advisory Committee, a body established by the U.S. Congress to advise the President on national AI efforts. Below is an excerpt from his talk, focused on how to think about artificial intelligence, its role in the social impact sector, ways nonprofit organizations can harness it, and considerations for using AI equitably. Watch the full talk here.

###

At Benefits Data Trust, we are doing more with data than data sharing: we’re tapping into some of the tools of machine learning and artificial intelligence. For example, our texting chatbot uses natural language processing to help high school students and their families navigate the Free Application for Federal Student Aid (FAFSA) process.
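
To make that concrete, here is a minimal, purely illustrative sketch of the kind of intent matching a texting chatbot can start from. The intents, phrases, and responses below are invented for this example; they are not how Wyatt, BDT’s chatbot, actually works.

```python
from typing import Optional

# Illustrative only: toy keyword-based intent matching of the kind an
# NLP texting chatbot might start from. The intents and responses are
# invented for this sketch, not taken from BDT's Wyatt chatbot.
FAFSA_INTENTS = {
    "deadline": ["deadline", "due date", "when is it due"],
    "documents": ["document", "tax return", "what do i need"],
    "status": ["status", "submitted", "go through"],
}

RESPONSES = {
    "deadline": "FAFSA deadlines vary by state and school; studentaid.gov lists them.",
    "documents": "You'll typically need Social Security numbers and tax information.",
    "status": "You can check your FAFSA status by logging in at studentaid.gov.",
}

def classify(message: str) -> Optional[str]:
    """Return the best-matching intent for an incoming text, if any."""
    text = message.lower()
    for intent, phrases in FAFSA_INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None

def reply(message: str) -> str:
    """Answer a recognized question, or ask the student to say more."""
    intent = classify(message)
    if intent is None:
        return "Can you tell me a bit more about your FAFSA question?"
    return RESPONSES[intent]
```

A production chatbot replaces the keyword lists with a trained language model, but the shape of the task is the same: map a free-text message to an intent, then respond.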

We’re in a phase of learning. Everyone is trying to figure out what data science and AI mean to their organizations. For all of us focused on social impact, we should keep the following ideas in mind as we confront these new technologies: 

  1. For organizations involved in social change, don’t skip the basics. BDT has been fortunate to have supporters, such as the Mastercard Center for Inclusive Growth, who have helped us scale our ability to tap the potential of data and get on the path to machine learning and AI. But a lot of the initial work was just getting our data house in order. The lists we receive from state agencies come in all different shapes and formats – they’re not always the cleanest data sets. It’s a process to get them into a format that lets us actually start reaching out to people who are likely eligible for public benefits. So we invested in automating that process, and it’s one of the things I’m most giddy about (a simplified sketch of that kind of normalization appears after this list). Ingesting that data automatically, rather than manually, frees up four people on my team to focus on what else we want to do with data to improve our impact. It also means we can help more people.

  2. There is a robust conversation that needs to happen on both the good and the ill of AI, machine learning, and other advanced technologies and techniques – but don’t lose the enthusiasm. There is a way to see the positive potential of AI while staying aware of the pitfalls and tough questions. We need to ask questions like “It could do this, as long as…” or “If x and y were true, then we could have the impact that AI promises.”

    That’s why having an interdisciplinary group of colleagues is so important: we all bring different things to the table. Data architects, engineers, and scientists approach the world in a particular way, but we need the social workers and lawyers looking at the problem and stress-testing it in different ways. A room that is half data scientists and half social workers and sociologists makes all the difference.

  3. For all the good the social impact sector is doing, we must remember that we’re not entitled to anyone’s trust. Sometimes in the social sector, the enthusiasm, passion, and urgency to do good can lead to ethical shortcuts – not from a desire to cause harm, but from not holding ourselves to the level of scrutiny and rigor that other sectors may be required to meet. We need to make sure that the many questions related to bias and AI, for example, are part of the active conversation inside our organizations. If we’re working to incorporate AI into our work, we need to ask: for all the good we’re doing, what harm could we also do? Being the social sector does not make us immune from doing harm.

  4. A few words on “the business” of data: data talent is expensive – these professionals have a lot of options out there... When it comes to advanced techniques, like large language models, the truth is that only a small number of entities – mostly in the private sector – have the horsepower to make the most of them. So the social sector really has to think carefully about data and AI and what it takes to make things happen – everything from personnel to data sets to analytical techniques. Often that means you’ll have to make choices with your budget about what you can and can’t do...

    It also means thinking about how to work with partners to build out capabilities that are sufficiently robust and resilient. We can still be enthusiastic about AI without buying into the hype. (We’re not here to be Silicon Valley!) We’re in it to help people as effectively as possible by making smart choices.

  5. Finally, it’s important to look at the broader world of AI. I’m really excited about moving from traditional natural language processing to the extraordinary capabilities that large language models offer, such as using massive data sets of written material to translate, predict, and generate human speech and voice, without the kind of training we had to do with our Wyatt® chatbot. There are an enormous number of ways that can be applied. But if you’re relying on the written word that’s available – the books online, the everyday chatter online – we know the questions we have to ask. Who was included and excluded in publishing? Which voices dominate and which don’t? If we really want to train these models to be responsive, we need to think about which traditions are more oral than written. If we really want to address equity and social justice, we can’t just rely on the written word that happens to have been captured. Many folks come from communities where more is said in the silence than in the spoken word. An example that comes to mind involves healthcare and African American men. Studies show that in the exam room, less is spoken, depending on the provider, their background, and the trust between provider and patient.
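
To ground the first point above, here is a minimal, hypothetical sketch of the kind of ingestion automation described in item 1: mapping the varied column headers on state agency lists onto one standard schema. The field names, aliases, and filtering rule are invented for illustration; this is not BDT’s actual pipeline.

```python
import csv
from typing import Dict, Iterator, Optional

# Hypothetical aliases: the varied column headers that different state
# agencies might use, mapped onto one standard schema. Invented for
# illustration; not BDT's actual fields.
COLUMN_ALIASES = {
    "first_name": {"first_name", "fname", "first", "given_name"},
    "last_name": {"last_name", "lname", "last", "surname"},
    "phone": {"phone", "phone_number", "primary_phone"},
    "zip": {"zip", "zip_code", "postal_code"},
}

def normalize_header(header: str) -> Optional[str]:
    """Map one agency-specific column name onto the standard schema."""
    cleaned = header.strip().lower().replace(" ", "_")
    for standard, aliases in COLUMN_ALIASES.items():
        if cleaned in aliases:
            return standard
    return None  # drop columns the outreach workflow doesn't need

def normalize_file(path: str) -> Iterator[Dict[str, str]]:
    """Read one agency list and yield rows keyed by the standard schema."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            record = {}
            for header, value in row.items():
                standard = normalize_header(header or "")
                if standard and value:
                    record[standard] = value.strip()
            if "phone" in record:  # keep only rows outreach can contact
                yield record
```

Each real agency feed has its own quirks, but the pattern – alias tables feeding one standard schema – is what turns a recurring manual cleanup job into an automated step.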

If we’re going to use AI for equity, then we have to understand the subtleties. We need to bring the edge into the mainstream. And, by the way, we need market incentives – for the VCs, private equity firms, regulators, and others who shape the market – that help us bend AI to our will.

###

The excerpt above has been lightly edited and condensed. Watch Trooper’s full keynote at the Data is for Everyone: A Data Science for Social Impact Summit here.