AI4Good is a relatively new field that sits at the intersection of big tech companies and the world’s biggest problems. Experimentation and research are underway in many areas, including the international development sector (to address the 2030 development goals), emergency and humanitarian aid, medicine, and more. Technology companies, including Google, Microsoft, Salesforce, Intel, IBM, and many others, are driving AI4Good through partnerships, providing expertise, toolkits, and resources to help nonprofits integrate AI. The field also features research reports, events, fellowships, bootcamps, and an annual AI4Good Global Summit.
Last year, Google made an open call to organizations around the world to submit their ideas for how they could use AI to help address these social challenges. The Google AI Impact Challenge attracted 2,602 applications from six continents and 119 countries, with projects addressing a wide range of issue areas, from education to the environment.
Fewer than 1% of applicants, 20 organizations in all, were selected to receive coaching from Google’s AI experts, Google.org grant funding from a $25 million pool, and credits and consulting from Google Cloud. They will also have the opportunity to join a customized six-month Google Developers Launchpad Accelerator program, which includes guidance from Google’s nonprofit partner, DataKind, to enhance their capacity.
Google has just released a report, “Accelerating Social Good with Artificial Intelligence,” that offers insights gathered from all 2,602 applications and includes an extensive taxonomy of AI4Good projects. With interest in the AI4Good field growing, Google is sharing this information to help strengthen the ecosystem of people and organizations hoping to solve some of our most pressing social challenges with AI.
The key findings of the report are listed below. The report also details the opportunities and challenges and offers next steps for social good organizations, technology companies, and policy makers.
- Machine learning is not always the right answer.
- Data accessibility challenges vary by sector.
- Demand for technical talent has expanded from specialized AI expertise to data and engineering expertise.
- Transforming AI insights into real-world social impact requires advanced planning.
- Most projects require partnerships to access both technical ability and sector expertise.
- Many organizations are working on similar projects and could benefit from shared resources.
- Organizations want to prioritize responsibility but don’t know how.
One valuable section of the report is a catalog of common project designs across different sub-areas of the social change sector: Crisis Response, Economic Empowerment, Education, Environment, Equality and Inclusion, Health, and Public Sector. For each area, the report describes the project ideas, the types of AI technology used, and the data sources. This information can help set the stage for potential partnerships and field-wide sharing of knowledge.
The report includes recommendations for three audiences: organizations that want to use AI to solve social good problems, tech companies, and policy makers. For organizations, the guidelines include:
- If technical expertise is needed to scope an AI project, reach out to organizations or individuals with that expertise to pressure test whether there is a faster, simpler, cheaper alternative.
- Identify owned datasets that can be safely open-sourced or shared through data governance structures such as whitelists and data trusts.
- For organizations aiming to both create and implement the technology, develop your AI systems and implementation plan with frequent user testing and feedback from target beneficiaries and organizations working with these populations.
- Have a clear understanding of your own strengths and limitations related to applying AI and developing potential partnerships.
- Invest in responsible open-sourcing to share intellectual property (e.g., models and web and mobile applications), and share these investments with existing sector associations.
- Have a clear idea of the responsibility guidelines you will follow.
- Where possible, make transparent modeling decisions and use transparent data collection methods to allow others to pressure test for responsible use of your technology.
- Engage a diverse set of stakeholders, including affected populations, to discuss potential risks and mitigations.
- Evaluate model performance across different dimensions that may highlight areas of unfair bias (e.g., different demographics).
- Develop a risk-mitigation plan for potential areas of harmful use or unintentional misuse.