Foundations Aim to Utilize AI for Positive Change and Safeguard Against Potential Dangers
Philanthropists, including well-established foundations and tech billionaires, are increasing their grants in response to the concerns raised by technology experts about the rapid development of artificial intelligence.
Much of the philanthropy focuses on so-called technology for good or “ethical AI,” which explores how to solve or mitigate the harmful effects of AI systems. Some researchers believe artificial intelligence can be used to predict climate disasters and discover new life-saving medicines. Others warn that large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.
Philanthropy’s influence on the trajectory of artificial intelligence is beginning to show. Billionaires who made their fortunes in technology are more likely to support projects and institutions that highlight the positive potential of AI, while foundations without tech money tend to focus more on its dangers.
For example, former Google CEO Eric Schmidt and his wife, Wendy Schmidt, have committed hundreds of millions of dollars to AI grant programs housed at Schmidt Futures to “accelerate the next global scientific revolution.” In addition to committing $125 million to advance AI research, the philanthropic venture last year announced a $148 million program to help postdoctoral researchers apply AI to science, technology, engineering, and math.
Also in the AI enthusiast camp is the Patrick McGovern Foundation, named for the late billionaire who founded the International Data Group; it is one of the few philanthropies that have made AI and data science an explicit grantmaking priority. In 2021, the foundation committed $40 million to help nonprofits use AI and data to advance “their work to protect the planet, advance economic prosperity and secure healthy communities,” according to a foundation release. McGovern also has an in-house team of AI experts that helps nonprofits use the technology to improve their programs.
“I’m incredibly optimistic about how these tools will improve our ability to advance human well-being,” says Vilas Dhar, president of the Patrick J. McGovern Foundation. “I think it’s up to philanthropy and civil society to make sure we understand this promise and opportunity — to make sure these technologies don’t become just another profit-making sector in our economy, but are invested in advancing human equality.” Salesforce is also interested in helping organizations use AI. The company announced last month that it would grant $2 million to education, labor, and climate organizations “to advance the fair and ethical use of trusted artificial intelligence.”
Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another major donor who believes AI can improve humanity, and he has funded research centers at Stanford University and the University of Toronto to that end. Artificial intelligence, he told the New York Times in May, can positively transform areas such as health care (“give everyone a physician’s assistant”) and education (“give everyone a tutor”).
However, the enthusiasm of tech billionaires for artificial intelligence solutions is not uniform. eBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which awards grants to nonprofits that use technology for scientific innovation, as well as those that seek to protect data privacy and advocate for regulation.
“One of the things we’re trying really hard to think about is how do you have good AI regulation that’s sensitive both to the kind of innovation that needs to happen in this space and also sensitive to public accountability systems,” says Anamitra Deb, managing director at the Omidyar Network.
Grant makers that are more skeptical about artificial intelligence are not a homogeneous group either, though they are typically foundations without ties to the tech industry.
The Ford, MacArthur and Rockefeller foundations are among several grant makers that fund nonprofits researching the harmful effects of artificial intelligence.
For example, computer scientists Timnit Gebru and Joy Buolamwini, whose seminal research on racial and gender bias in facial recognition tools prompted Amazon, IBM, and other companies to pull back from the technology in 2020, have received substantial grants from these and other large, established foundations.
Gebru founded the Distributed Artificial Intelligence Research Institute in 2021 to study the harmful effects of artificial intelligence on marginalized groups, independent of Big Tech’s influence. The institute raised $3.7 million in initial funding from the MacArthur Foundation, the Ford Foundation, the Kapor Center, the Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)
Buolamwini continues to research and advocate on artificial intelligence and facial recognition technology through her Algorithmic Justice League, which has received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations, as well as the Alfred P. Sloan and Mozilla foundations.
“I think all of these people and organizations have had a really profound impact on the AI field, but also really caught the attention of policymakers,” says Eric Sears, who oversees MacArthur’s AI-related grants. The Ford Foundation also launched the Disability x Tech Fund through Borealis Philanthropy, which supports efforts to combat bias against people with disabilities in algorithms and artificial intelligence.
There are also artificial intelligence skeptics among the grantmaking tech elite. Tesla CEO Elon Musk has warned that artificial intelligence could lead to the “destruction of civilization.” In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent the “existential risks” of artificial intelligence and that led a recently published open letter calling for a pause in the development of advanced AI. Open Philanthropy, founded by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided the majority of the funding for the Center for AI Safety, which also recently warned of the “risk of extinction” posed by artificial intelligence.
A significant share of AI-related foundation money is also directed to universities researching ethical issues. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard’s Berkman Klein Center, received $26 million between 2017 and 2022 from Luminate (part of the Omidyar Group), Reid Hoffman, the Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)
According to a May 2022 report, the goal was to “ensure that automation and machine learning technologies are researched, developed, and deployed in a way that reinforces the social values of justice and human autonomy.” Other university funding comes from the Kavli Foundation, which in 2021 committed $1.5 million a year for five years to two new centers for scientific ethics, with a focus on artificial intelligence, at the University of California, Berkeley, and the University of Cambridge. The Knight Foundation announced in May that it would spend $30 million to establish a new technology ethics institute at Georgetown University to inform policymakers.
While hundreds of millions of philanthropic dollars have been committed to ethical AI efforts, influencing tech companies and government remains a formidable challenge.
“Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, Goliath-sized AI companies, Goliath-sized regulators and policymakers who can actually intervene,” says Deb of the Omidyar Network.
Even with these hurdles, foundation leaders, researchers, and advocates largely agree that philanthropy can—and should—shape the future of AI.
“Industry is a dominant factor not only in the scope of AI system development compared with the academic space; it is also shaping the research field,” said Sarah Myers West, managing director of the AI Now Institute. “And because policymakers really want to hold these companies accountable, it’s important that funders step in and provide support to frontline organizations to ensure that the broader public interest is addressed.”