Technical.ly: These Philly orgs are getting serious about making sure they integrate AI responsibly

From nonprofits to data analytics companies, Philly-area organizations are convening groups to work out best practices for artificial intelligence.

By: Sarah Huffman

Originally published in Technical.ly on February 20, 2024

AI is more accessible than ever, and while the constantly evolving technology can be helpful, it also raises major concerns about racial bias and privacy. Now, Philly-area organizations are convening groups to make sure their AI implementations do more good than harm.

That’s especially important when it comes to data around public benefits, like financial aid and food assistance.

“We believe that one of the most important ways to bring that about is for the public benefit system to be more intelligent, to be more modern, and artificial intelligence can play an important role in that,” Trooper Sanders, CEO of Benefits Data Trust (BDT), told Technical.ly.

The Center City-based national nonprofit already uses AI to manage data and has created an AI chatbot to help students apply for financial aid.

Last month it launched the Trustworthy AI and Human Services Learning Hub, a group designed to address questions about responsible use of artificial intelligence and determine what its role could be in the public benefits system.

The group will formally kick off later this spring and continue to work into next year. Right now BDT has an open call for individuals and organizations that want to participate. Ideally, the group will have a range of participants, including people in government who administer public benefits, people developing AI tech, and people from community advocacy organizations.

The Learning Hub will eventually identify use cases where AI could be applied in a public benefits setting and test how well it works. Potential use cases include a chatbot that helps public benefits workers interact with the public. AI could also handle repetitive tasks, freeing human workers to focus on determining whether people are eligible for benefits.
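To make the "repetitive tasks" idea concrete, here is a minimal, entirely hypothetical sketch of the kind of rule-based pre-screen a benefits chatbot might run before routing an applicant to a caseworker. The function name, income limits, and household-size table are all invented for illustration; nothing here reflects BDT's actual systems or any real program's eligibility rules.

```python
# Hypothetical illustration only: a simple pre-screen a benefits chatbot
# might run before handing an applicant off to a human caseworker.
# All thresholds below are invented for the example.

from dataclasses import dataclass


@dataclass
class Applicant:
    household_size: int
    monthly_income: float  # gross monthly income in USD


# Invented example limits keyed by household size; real programs
# publish their own income tables, which change over time.
EXAMPLE_INCOME_LIMITS = {1: 1580.0, 2: 2137.0, 3: 2694.0, 4: 3250.0}


def likely_eligible(applicant: Applicant) -> bool:
    """Return True if the applicant appears to fall under the example
    income limit for their household size. A human caseworker still
    makes the final eligibility determination."""
    limit = EXAMPLE_INCOME_LIMITS.get(applicant.household_size)
    if limit is None:
        # Household sizes outside the table are routed to a person.
        return False
    return applicant.monthly_income <= limit


if __name__ == "__main__":
    # A two-person household earning $1,900/month passes the pre-screen.
    print(likely_eligible(Applicant(household_size=2, monthly_income=1900.0)))
```

The point of a sketch like this is that the automatable part is the mechanical lookup; the judgment calls stay with people, which is exactly the kind of boundary the Learning Hub is meant to test from technical, policy, and organizational angles.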

“Then really put it through this testing screen from a technical point of view,” Sanders said, “from a policy point of view and also from an organizational point of view.”

Creating ‘a conscientious and ethical framework’

Last year, Montgomery County-based global financial services company SEI launched a similar effort: an internal AI learning group focused on educating its employees about the uses of AI in the fintech and investment space.

At King of Prussia-based Qlik, the advisory group is being given a more formal title: the AI Council.

Announced earlier this year, the council will be tasked with helping guide the data integration, data quality and analytics company and its clients in responsibly using AI, according to Qlik Chief Marketing Officer Chris Powell.

Its formation stemmed from findings in a study Qlik released last year. The Generative AI Benchmark Report surveyed company leaders and found that a large percentage planned to incorporate more AI into their work but lacked what Qlik calls the right "data strategies" to do so.

So Qlik convened its AI Council, selecting four experts in AI ethics, development and application to participate.

The group, which will focus specifically on data integration, quality, governance and integrity, will meet with Qlik's leadership, business unit heads and technical experts to review how its recommendations fit in practice. The council will also meet with Qlik's research and development teams to weigh in on new AI tools the company develops.

“The AI Council is instrumental in ensuring that Qlik not only advances in AI technology but does so with a conscientious and ethical framework,” Powell said. “This initiative is a testament to our commitment to responsible innovation, emphasizing the need for a holistic view of AI that encompasses not just the technology itself but the entire data lifecycle.”