Above: Attendees collaborating on projects at the AI for Good workshop at ICT4D Conference in Uganda.
Last week, I had the opportunity to participate in several discussions about AI for Good at the ICT4D Conference in Uganda and to lead a workshop focused on the practical application of AI in the nonprofit sector. Here are four observations from the discussions:
- AI can help us do good better. While it’s early days for AI in the nonprofit sector, the potential for AI to transform every aspect of our work is real - from field programs to driving digital transformation within organizations by improving processes, creating efficiencies, and increasing reach and effectiveness. According to McKinsey’s analysis of about 160 AI social impact use cases, adding AI to the solution mix in at least 10 domains could have large-scale social impact. It can help us make decisions and act faster in emergencies, reach more people (like refugees) with the services and information they need, and predict infectious disease outbreaks or famines. Doing good better with AI will manifest both in improvements to existing programs and processes (e.g., automating tasks) and in the creation of solutions that would not be possible without AI, by tapping into large pools of unstructured data.
- Value will come from practical implementations that solve real problems in a sustainable way. The application of AI in the nonprofit sector is still nascent: many examples remain at the proof-of-concept stage, with no significant, sustainable impact yet and no path to scale beyond their initial scope. To realize the value of AI in the social impact space, our work needs to shift from one-off, siloed, tech-driven piloting to problem-driven, sustainable, inclusive solutions.
- The nonprofit sector needs knowledge, resources, and processes to benefit from AI. I’ve spent the last 12 months exploring AI for Good and talking to both nonprofit and tech experts, and three things have come up consistently as requirements for ensuring that AI solutions benefit all in an ethical and sustainable way: (1) knowledge, starting with an adequate understanding of how AI can help us in our work and what questions to ask; (2) resources, including access to ecosystems of expertise, funding, and technology tools; and (3) reusable processes and frameworks for evaluating AI in the social impact space, designing ethically, auditing for bias, and scaling beyond pilots. This is why NetHope has established an Emerging Technologies Working Group — a space where the NetHope community can learn, share, and collaborate.
- Nonprofits have a responsibility to understand AI and know what questions to ask. We need to transition from talking mostly about the potential of AI to spending more time figuring out ‘what do we need to do today to ensure that the benefits of technological innovation are broadly distributed and have a positive impact?’ and ‘how do we prepare nonprofits and communities to use AI to solve some of the world’s challenges in an ethical and sustainable way?’
I believe that we DON'T have to be experts in AI or data science, but we DO need to know what questions to ask when evaluating the need for AI in our work and its effects on outcomes, including ethical considerations.
|ICT4D panel on Ethical, Empowering, and Responsible AI with Plan International, USAID, and NetHope.|
In the AI workshop at ICT4D, we introduced and tested—in a hands-on exercise—a framework designed to help those in our sector who are interested in exploring AI and incorporating it into their work know what questions to ask at each stage. The framework draws on insights from past and current implementations of AI in our sector, as well as engagement with technology experts and researchers. It has been informed by a range of stakeholders, including NetHope NGO members, MIT, UC Irvine, and USAID. The framework offers a menu of critical questions to consider when exploring AI - from defining the opportunity (‘Should you even use AI?’) and evaluating data and bias, to resourcing, implementing, and maintaining AI-based solutions.
In the workshop, participants used the framework to evaluate the opportunity of using AI to address issues like malnutrition and malaria, and to meet the needs of refugees and children with disabilities, guided by questions like:
- Define the opportunity: What problem are you trying to solve? How is the problem being addressed today? Why is AI better than the current solution?
- Evaluate data and bias: Do you have, or can you get, large amounts of data for this problem (e.g., image, text, audio, video)? What are the potential biases that AI may introduce or amplify in your context?
- Resource the solution: What infrastructure do you need for the solution? What’s your strategy for getting ‘missing’ resources or training existing resources?
- Implement the solution: Do you have resources to continuously clean up the data and to ensure that the data is representative of the problem set and target audience?
- Maintain and extend the solution: What resources do you have in-house or relationships with technology experts (including vendors and academia) to: (1) Fix issues that arise with the solution; (2) Update the solution to fit changing conditions/data; (3) Extend the solution to new contexts?
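Purely as an illustration, the five stages above could be captured as a simple checklist structure that flags which stages still have open questions. This is a hypothetical sketch, not NetHope's actual framework tool; the `FRAMEWORK` dictionary and `open_stages` helper are names invented here:

```python
# Hypothetical sketch: the framework's five stages as a question checklist.
# Stage names and questions paraphrase the list above.

FRAMEWORK = {
    "Define the opportunity": [
        "What problem are you trying to solve?",
        "How is the problem being addressed today?",
        "Why is AI better than the current solution?",
    ],
    "Evaluate data and bias": [
        "Do you have, or can you get, enough data (image, text, audio, video)?",
        "What biases might AI introduce or amplify in your context?",
    ],
    "Resource the solution": [
        "What infrastructure do you need for the solution?",
        "How will you obtain missing resources or train existing ones?",
    ],
    "Implement the solution": [
        "Can you continuously clean the data and keep it representative?",
    ],
    "Maintain and extend the solution": [
        "Who fixes issues, updates for changing data, and extends to new contexts?",
    ],
}


def open_stages(answers):
    """Return the stages that still have at least one unanswered question.

    `answers` maps question text to an answer string (empty or missing
    answers count as unanswered).
    """
    return [
        stage
        for stage, questions in FRAMEWORK.items()
        if any(not answers.get(q, "").strip() for q in questions)
    ]
```

For example, a team that has only articulated its problem statement would see all five stages still open, since "Define the opportunity" alone has three questions to answer.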
|AI for Good workshop participants at the ICT4D Conference in Uganda.|
We’ll continue to evolve the framework and use it at future workshops, including the NetHope workshop at the AI for Good Global Summit at the UN in Geneva on May 31st. Email me if you are interested in attending the workshop.
I would like to thank several colleagues who were actively involved in and contributed to the AI for Good sessions at ICT4D, including: Aubra Anthony (USAID), Amit Gandhi (MIT), Steve Hellen (CRS), Nora Lindstrom (Plan International), Neal Sahota (UC Irvine), Kristin Tolle (Microsoft), and Hycinth Umaran (Plan International).