I was honored to be invited to cover this year’s Global Philanthropy Forum, a community of donors and social investors committed to international causes. The annual conference helps inform, enable, and enhance the strategic nature of their work. This year’s conference, “The Future We Make,” had four themes: strategic philanthropy, modern-day slavery, agriculture and nutrition, and education, learning, and employment.
I went to cover one session, “Building An Information Infrastructure: Unlocking Data for Philanthropy,” with Jacob Harold of Guidestar, Robert Kirkpatrick of UN Global Pulse, and Mayur Patel of the Knight Foundation. Darin McKeever from the Bill and Melinda Gates Foundation (and Markets For Good) was the moderator.
What follows are my notes, but if you want to spend more time learning about this topic, I’ve pulled together these additional curated resources:
1) Curated Tweets
2) Curated Blog Posts and Resources that provide context to the discussion or resources mentioned
3) Social Network Analysis of the #GPF13 hashtag during the discussion, created by Marc Smith
Notes from Session
Darin McKeever started the discussion with some framing of the topic. Some of the ideas he shared are addressed in his recent post on Forbes, “Moving from Big Data to Big Wisdom,” published as part of the debate “Can Big Data Have Social Impact?” For further context on the mindset and work-practice changes philanthropy needs to embrace data, here are some points from Lucy Bernholz.
The topic of data is a geeky subject, but an exciting one, because in this era of big data and networks, our collective capacity for problem solving has never been greater. Data is becoming more transparent thanks to social media, data visualization, and open government data — all helping to usher in a remarkable period in human history. On-demand access to trusted information is available for every realm of decision-making, and yet, for all this technological progress, many in the social sector find ourselves at a loss. Why are we using data that is three years old? Philanthropy needs to learn how to make the best use of open data sets and of increasingly disintermediated data that is hard to find. He compared nonprofits to the business sector: nonprofits and donors find it time-consuming to find and analyze meaningful data to improve outcomes, and many are at a loss as to where they can tap into or validate the information that the for-profit sector is collecting. Today, more than ever, there are opportunities to collect, share, and analyze data, but there are also challenges.

Jacob Harold of Guidestar shared some thoughts on the question, “What are the basic building blocks to facilitate better access to data?” He talked about scaffolding as an intentional effort to change something, and acknowledged that the information or data scaffolding in our sector is complicated. “How are we going to act in a VUCA world?” Is it so crazy that we just trust our intuition, or can we inform our intuition with data? He talked about how much data we have at our fingertips, but also the challenge of making it meaningful.
He referenced that there are over 371 platforms for social change data. That’s a good thing, because lots of people are addressing social issues, but is it also a nightmare? How do we weave it together? He mentioned several projects underway to weave data together in our sector. Guidestar’s piece of the puzzle is data about individual nonprofits, but there are many other pieces. “Guidestar has had a monopoly and it needs to be an open one.” He also talked about the need to package the data better so the sector can use it, especially organizations that do not have the skills, capacity, or resources to hire a data scientist.
Robert Kirkpatrick, UN Global Pulse (@rgkirkpatrick)
Kirkpatrick gave an overview of how data is being used for international development. They are using big data, data mining, predictive analytics, and “big listening.” He talked about how the private sector is using this sophisticated analysis in real time, while his organization is using data to answer the question: How can we measure human well-being with real-time data? They use big data to gather time-sensitive information so their programs can act quickly and inform policy change — but the challenge is how to make this a reality for more of the sector.
They are trying to get companies to share data for mining — for example, mining Twitter and Facebook to detect disease outbreaks. They have been engaging corporations in “Data Philanthropy” to make raw data available that can be used to analyze the impact of programs. “Any reuse is a risk of misuse.” But he emphasized that big data has a role in the public good; we need a way to convert it into useful data. The UN is a neutral platform, trying to learn on the ground and setting up labs in partnership with governments. They are convening geeks, policy wonks, security analysts, and others to run experiments. For example, they have predictive analytics on how people use their mobile phones, which helps them understand whether or not people have jobs. They recently launched a lab in Indonesia and started to hear through social media chatter that vaccine serum contained meat and that people should not get vaccinated. This was a misperception, and they used that insight to go out into the field and educate folks. You can find more examples and case studies in this white paper.

Mayur Patel, Knight Foundation

Patel raised some great questions: What is the role of philanthropy — how can it guide scaffolding? How do we use private resources for advancing the public good? Data has to be framed in these terms. What if philanthropy were held accountable for stewarding data — for how it has been collected and organized?
He talked about data itself being a public good. The data itself is the product; it is a new asset class. What does that mean for foundations and the social sector?
Data can be an outcome or an asset. What role can philanthropy take to create incentives for people to use it? He talked about the consequences of not valuing data as an outcome with a story about EveryBlock: it is not just the software and technology platform, it is the data. How do we protect data streams that are in the public interest?
The data streams generated through nonprofit activities are not just by-products — they are important. We need to think about our relationship with these new data streams, even when they are generated as a by-product of other activity. He suggested the need for open source licenses for data, and that research should be made openly available within a year. IssueLabs is a great start.
He also asked about the capacity of nonprofits to use the data once it is opened up. Jacob Harold essentially asked, “If we build it, will they come?” Patel observed that “not every nonprofit will have the capacity to hire a data scientist.” While there are great resources for finding data scientists who volunteer, there needs to be more in the way of shared resources and capacity building. Beyond capacity building, there also needs to be standardization, tools, and visualization resources — there is no infrastructure to do this yet! Patel pointed out that we should think back twenty years, when using the Internet was new: what sector-wide infrastructure and capacity-building systems did we put into place then?
This was a stimulating session about data in the social sector and what is needed to make it actionable and useful. The topic is geeky, as Darin pointed out, but also exciting.
What do you think is needed for nonprofits to embrace data? What systems-level interventions are needed to improve the nonprofit sector’s capacity to embrace data — not only to collect it, but to make sense of it and apply it?