Philanthropy and innovation - how could open data and artificial intelligence help funders do better?

Philanthropic foundations are not always the best places for innovation. They can be risk averse, bureaucratic, hierarchical, and cliquey.

Some of the biggest ones are the most secretive. It is hard to find precise facts about what many famous foundations support and why. Paradoxically, some of the foundations created by the new digital business tycoons are among the most traditional, and opaque, in their methods of work.

But a significant minority of funders are working to open things up, to adopt new methods and to act with the accountability they would expect of their own grant recipients. As a result, it's now possible to see how philanthropy could become far more data-driven and better at learning.

A debate is gathering steam about new tools aimed at a mass market, such as AI-based philanthropic advice and fundraising tools (like the Arthritis Research UK partnership with Microsoft on using AI to target pop-ups to potential donors). There is also an emerging debate about better ways of linking datasets - as in this interesting speculation from last year - and there are some intriguing, if still embryonic, projects like the Open Philanthropy Project.

Here I suggest potential innovations that could transform how the bigger funders - and we at Nesta - might work in the future. I'm setting them out to encourage critical and constructive comment, and I'm hoping for engagement from other funders on how we could cooperate to push the field forward. The blog covers how funders could use data; sift and assess applications better; reduce bureaucracy for applicants; strategically scan different fields; and tap into crowd knowledge.

Open data as the default

The first step is to move further towards open data as the default. In the UK, the 360 Giving programme has done this for grants, with a strong coalition (including Nesta) now committed to providing information in an open, machine-readable form.

The big challenge in all fields is to make data not just open but also of good enough quality to be useful (the open data world rarely talks about just how much labour goes into cleaning up datasets to make them usable). In parallel, much more work will be needed to create classifications that make the data easier to use, tracking not just grants, locations and timescales but also linking to and layering other datasets around:

  • Organisational growth - better tracking of how the charities and social enterprises that are supported grow over time (ideally with links into banking data, a great underused treasure trove, and public data held by bodies like the Charity Commission)
  • Take-up of ideas - tracking how novel ideas spread and are adopted
  • How funded activities relate to scales and patterns of need, e.g. indices of deprivation or more specific measures (see the sketch after this list)
  • How well evidenced activities are, e.g. their level on standards of evidence
  • …and a full range of impact measures
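
To make the layering concrete, here is a minimal sketch in Python of joining grants data with a needs dataset by shared geography. The file names and columns are illustrative assumptions, loosely in the spirit of 360 Giving's flattened CSVs rather than any confirmed schema.

```python
import pandas as pd

# Illustrative files and columns - not a confirmed 360 Giving schema.
grants = pd.read_csv("grants.csv")            # assumed columns: grant_id, amount, recipient, district
deprivation = pd.read_csv("deprivation.csv")  # assumed columns: district, imd_rank

# Layer a needs dataset over the grants data via shared geography.
merged = grants.merge(deprivation, on="district", how="left")

# A first-cut question a board might ask: how does funding track need?
funding_by_need = merged.groupby("imd_rank")["amount"].sum().sort_index()
print(funding_by_need.head())
```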

For all of this we need more energetic action - and more engagement from boards to ensure that the data really helps them answer the questions that matter. The 360 Giving standard is designed with this extensibility in mind but hasn’t yet been tested.

Smarter sifting

The next option is to explore the use of machine learning to improve the speed and quality of the initial sifting of applications. Machine learning should be able to automate some of the most time-consuming elements of grant-making, teaching itself on training data of previously successful and unsuccessful applications and learning to screen, grade and rank new ones.
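
As a hedged illustration, the sketch below trains a simple text classifier on past application outcomes. The file, the column names and the choice of TF-IDF features with logistic regression are all assumptions for illustration; a real system would need far more careful feature design, validation and bias auditing.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Illustrative training data: past applications with a funded / not-funded label.
past = pd.read_csv("past_applications.csv")  # assumed columns: summary, funded (0 or 1)

X_train, X_test, y_train, y_test = train_test_split(
    past["summary"], past["funded"], test_size=0.2, random_state=0
)

# A deliberately simple baseline: bag-of-words features plus a linear model.
model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```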

It could steadily learn to spot the key attributes that tend to lead to funding. These would reflect the cultural and other priorities of different foundations (and of course their biases - a problem with any kind of machine learning). But machine learning could also be used to counter unconscious biases, for example by being programmed to ignore, or deliberately weight, demographic information such as applicants' gender, country of origin and educational background.

As a first step, machine learning could make first sifts more efficient - e.g. taking 10,000 applications down to a shortlist of 100. It could also help foundations give data-driven feedback to applicants, helping them understand where they had fallen short. Analysis of past data - and performance - might also help identify which kinds of applicants most need which kinds of support, such as non-financial help. It might show, for example through LinkedIn data, where high staff turnover precedes problems. And more systematic analysis could also show up some of the human behavioural biases in grant-making, like sunk cost effects and loss aversion.
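
Continuing the sketch above, a first sift and a crude demographic-blinding step might look like the following; again, every file and column name is an assumption.

```python
# Score a new intake with the model trained in the previous sketch.
new_apps = pd.read_csv("new_applications.csv")  # assumed columns: app_id, summary, gender, country

# Crude demographic blinding: drop those fields before any scoring.
# (The text model never sees them anyway; dropping makes the intent explicit.)
new_apps = new_apps.drop(columns=["gender", "country"])

# Probability of the 'funded' class becomes a ranking score.
new_apps["score"] = model.predict_proba(new_apps["summary"])[:, 1]

# A first sift: 10,000 applications down to a shortlist of 100.
shortlist = new_apps.nlargest(100, "score")
shortlist.to_csv("shortlist.csv", index=False)
```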

Another approach to this data would focus on understanding patterns of need and demand. Some insights will come from support services like the Citizens Advice Bureau (see the various examples in Nesta's study on data for good). But unsuccessful application data could be another source, at least for the larger-scale funders (like the Big Lottery Fund in the UK).

Smarter applications

AI could also help to overhaul the application process itself. Written forms tend to favour highly educated applicants, or a small industry of consultants. An alternative would be to use technology to increase accessibility by avoiding the usual written formats, for example through a structured interview using speech-to-text, in which applicants describe the key aspects of their work.

Chatbots can be programmed to ask structured interview questions about proposed projects, for example covering team structure, budget management, marketing and impact. Chatbots could also improve the applicant experience by providing immediate feedback, and orchestrate communication during the application process - including the post-submission period - without running the risk of applicants and foundation staff communicating directly and tilting outcomes.
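
A minimal sketch of the structured-interview idea: a scripted chatbot that walks an applicant through fixed questions and stores the answers in a uniform, machine-readable format. The questions and output format are illustrative assumptions; a speech-to-text front end could replace the typed input for accessibility.

```python
import json

# Fixed interview script covering the areas mentioned above.
QUESTIONS = [
    ("team", "Who is on your team and what are their roles?"),
    ("budget", "How will you manage the project budget?"),
    ("marketing", "How will you reach the people you want to serve?"),
    ("impact", "How will you know the project has worked?"),
]

def run_interview() -> dict:
    """Ask each scripted question in turn and collect free-text answers."""
    answers = {}
    for key, question in QUESTIONS:
        answers[key] = input(question + "\n> ")
    return answers

if __name__ == "__main__":
    application = run_interview()
    # Every applicant ends up with the same machine-readable structure.
    with open("application.json", "w") as f:
        json.dump(application, f, indent=2)
```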

This would be worth doing as an experiment anyway, building on past experiments that used video rather than text for applications. These proved useful at overcoming the bias towards those most fluent in prose, and also gave a flavour of passion, enthusiasm and authenticity (though, of course, it would be vital to retain the human element of videos alongside machine learning).

Smarter shared handling of data

Another useful innovation, much talked about over the years, would be to create common data repositories for grant applications so that applicants don't have to rewrite the same material endlessly. A common digital grant application format would allow organisations to maintain, manage and reuse their proposals across multiple funding opportunities. A secure common repository for Global Common Grant Applications (inspired by this) would simplify workflows for applicants and philanthropies alike, serving both as an end-to-end grant application tool and as a searchable grant proposal marketplace.
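
One way to picture the common format is as a small shared record that an organisation maintains once and reuses. The fields below are assumptions for illustration, not an agreed standard:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class GrantProposal:
    """Hypothetical reusable proposal record for a shared repository."""
    organisation: str
    title: str
    summary: str
    budget_gbp: int
    outcomes: list = field(default_factory=list)

    def to_json(self) -> str:
        # The same record can be submitted to multiple funders unchanged.
        return json.dumps(asdict(self), indent=2)

proposal = GrantProposal(
    organisation="Example Trust",
    title="Peer support for isolated older people",
    summary="A befriending scheme run with local volunteers.",
    budget_gbp=50_000,
    outcomes=["reduced isolation", "volunteer skills"],
)
print(proposal.to_json())
```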

GuideStar's Financial SCAN is another interesting approach, aiming to give insights into financial health by combining GuideStar's IRS Form 990 data with Nonprofit Finance Fund (NFF) expert analysis in an easy-to-use tool with shareable formats.

APIs should make it possible for organisations to hold core data on their own websites and have it automatically uploaded to donor organisations, or for a donor organisation to act as the prime repository for others in a ‘grantshare’ scheme if a fully shared pool isn’t immediately feasible.
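
In its simplest form, the pull side of such an API could just be an organisation publishing its core record at a known endpoint for funders to fetch. The URL and field names below are purely illustrative:

```python
import requests

# Hypothetical endpoint where an organisation publishes its own core record.
ORG_DATA_URL = "https://example-charity.org/api/core-data.json"

def pull_core_data(url: str) -> dict:
    """Fetch an organisation's self-hosted record into a funder's repository."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

record = pull_core_data(ORG_DATA_URL)
print(record.get("organisation"), record.get("last_updated"))
```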

Smarter scans of needs and issue areas

A more strategic task would be to use tech tools to scan a field and understand who is doing what, giving funders a better sense of where they can add value - for example, in funding girls’ empowerment projects in Mali, homelessness projects in Marseilles or work to reduce elderly isolation in Birmingham.

This has been much discussed over many years, but with relatively little progress. There are a few good exceptions, such as Washfunders.org, which maps funding for water, sanitation and related fields, and there has been some mapping of local shared funding.

In each case it should be feasible to map what is being funded, at what scale, and with what focus, timescale and desired outcomes. The kind of work Nesta has done to map innovation ecosystems, combining data sharing and web scraping, points to what is possible (this note by Juan Mateos-Garcia summarises the methods). We’re hoping to do similar work on particular sectors, such as peer-to-peer healthcare.
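
As a sketch of the web-scraping side, the snippet below pulls grant listings from a hypothetical funder page; the URL and the page markup are invented for illustration.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical funder page listing grants; URL and markup are invented.
page = requests.get("https://example-funder.org/grants", timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

grants = []
for row in soup.select(".grant-listing"):  # assumed CSS class
    grants.append({
        "title": row.select_one(".title").get_text(strip=True),
        "amount": row.select_one(".amount").get_text(strip=True),
    })
print(f"scraped {len(grants)} grants")
```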

The Chan-Zuckerberg Initiative (CZI) has bought a startup called Meta to synthesise large amounts of research evidence, which could also be useful, as could more commitments by big tech companies to share data with charities in disasters.

Again, these scans would be most valuable with some shared impact or effect metrics but, as a first step, simply mapping what funding is going in, and where there is joint funding, would help.

One aim of the scans would be to reveal funding deserts: topics that have been missed or have become unfashionable (for example, by comparing funding priorities against applications or other measures of need).
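
A first cut of that comparison might look like the sketch below, which treats application volume as a crude proxy for demand. The datasets and column names are assumptions, and a real scan would need the shared metrics discussed above.

```python
import pandas as pd

# Illustrative inputs: funding per topic, and applications as a crude demand proxy.
funding = pd.read_csv("grants_by_topic.csv")       # assumed columns: topic, total_funding
demand = pd.read_csv("applications_by_topic.csv")  # assumed columns: topic, applications

scan = funding.merge(demand, on="topic", how="outer").fillna(0)

# Low funding per application flags a possible funding desert.
scan["funding_per_application"] = scan["total_funding"] / scan["applications"].clip(lower=1)
print(scan.sort_values("funding_per_application").head(10))
```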

There is an obvious overlap with the sort of methods now being used in area-based collaborations (this paper from last year summarises experiences over the last few decades, and the recent use of the term ‘collective impact’ as a label for them).

Another aim would be to avoid the ‘winner takes all’ syndromes where funders tend to back the same projects (a particularly common problem for the international funders focused on social entrepreneurs).

But the main value would be to support strategic conversations among groups of funders about what to support, where to collaborate and how to find suitable niches.

For any of these purposes, a key will be to find visualisations that make the data easy to use in decision-making - the risk is amassing large quantities of data that are neither digestible nor usable (see this from Cath Sleeman for examples of good visualisations).
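
Even a plain chart can help here. The sketch below plots the illustrative scan table from the earlier snippet so possible deserts are visible at a glance; it assumes the `scan` dataframe from that sketch is available.

```python
import matplotlib.pyplot as plt

# Reuses the illustrative 'scan' table from the earlier sketch.
top = scan.sort_values("funding_per_application").head(10)

plt.barh(top["topic"], top["funding_per_application"])
plt.xlabel("Funding per application (illustrative)")
plt.title("Possible funding deserts")
plt.tight_layout()
plt.show()
```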

Getting this right will be hard. The Digital Health Atlas is a global web platform that curates digital health implementations, supporting ministries of health, technologists and implementers to map, monitor and foster digital health investments in line with government health goals. It’s ambitious, but it has had little use, partly because it’s too complex.

So any future project will need to work hard to build consensus on the utility and functions of the scan/data repository; on how to standardise and harmonise the data streams; on how to organise the backend data store and the APIs for both data pull and push; and on the user experience (for example, on-demand dashboard overviews or automated periodic insight reports based on each funder’s priorities).

Mobilising crowd intelligence

A final set of options would deliberately mobilise crowd intelligence to help identify priorities or decide on grants. Again, this would probably only be relevant to the largest funders, but they could experiment with large-scale consultation on priorities.

Nesta showed one way of doing this when it opened up the choice of topic for the new Longitude Prize to a public vote (helped by close collaboration with the BBC, Amazon and others, which secured a big public vote and a lot of serious debate on the options).

Open processes could also be used to comment on and rank potential grant recipients - adapting some of the tools now being used by cities for participatory budgeting and democracy, including AI-driven tools like Polis. There are plenty of risks with these methods - of capture, gaming and a bias against important but unpopular issues. But it would be unfortunate if philanthropy didn’t even experiment with tools of this kind.
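
As a small sketch of the ranking side, here is a simple approval-vote tally over candidate proposals. This is generic vote aggregation for illustration, not how Polis or any particular participatory tool works.

```python
from collections import Counter

# Illustrative ballots: each participant approves any number of proposals.
ballots = [
    {"girls' education", "elderly isolation"},
    {"elderly isolation"},
    {"elderly isolation", "homelessness"},
    {"girls' education", "homelessness"},
]

# Approval voting: count how many ballots back each proposal.
tally = Counter(p for ballot in ballots for p in ballot)
for proposal, approvals in tally.most_common():
    print(proposal, approvals)
```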

Some next steps

I hope this post will elicit several kinds of response:

From foundations that are already doing or planning the things described here: I would urge them to share their lessons and the technology they are using, making it open source wherever possible. A host of complex issues are bound to arise over the next few decades - around governance, bias and ethics - and the more openly these can be handled, the better.

From foundations that don’t yet have an answer but do have an appetite to jointly fund some experiments using open data and AI for the sorts of purposes described above. We could aim to get proof-of-concept studies underway, ideally with a really big funder, and we could aim to align, and bolster, policies on requiring transparency from grant recipients.

From innovators, practitioners, intermediaries and brokers, with suggestions as to how these experiments might be done, good examples to copy or open source tools to adapt.

From anyone interested in more joint commitment to common methods and standards, such as open data and evidence standards. As in many fields of technology, it’s probably better to have a single standard that is 70 per cent perfect than dozens of standards which aspire to be 100 per cent perfect.

Philanthropy has the great virtue of freedom, but the great vice of little accountability. Ingrained habits may push against new methods that could surface blindspots, biases and failures. But power should bring responsibility with it, and in the long run we all benefit when systems are better able to learn.

This blog has benefited from inputs from Nesta colleagues, including Alice Casey and Celia Hannon, and from Rose Shuman and Mita Paramita at Bright Front Group.

Author

Geoff Mulgan

Geoff Mulgan was Chief Executive of Nesta from 2011 to 2019.