Long-Term Future Fund
We make grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. We also seek to promote longtermist ideas and increase the likelihood that future generations will flourish.
Impact
The Long-Term Future Fund has recommended several million dollars' worth of grants to a range of organizations.
About the fund
The Fund has historically supported researchers in areas such as cause prioritization, existential risk identification and mitigation, and technical research on the development of safe and secure artificial intelligence, an area in which it was among the first funders. Most of our fund managers have built their careers working full time in areas directly relevant to the Fund’s mission.
The Fund managers can be contacted at longtermfuture[at]effectivealtruismfunds.org.
Focus areas
The Fund has a broad remit to make grants that promote, implement, and advocate for longtermist ideas. Many of our grants aim to address potential risks from advanced artificial intelligence and to build infrastructure and advocate for longtermist projects. However, we welcome applications related to long-term institutional reform or other global catastrophic risks (e.g., pandemics or nuclear conflict). We intend to support:
- Projects that directly contribute to reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects
- Training for researchers or practitioners who work to mitigate existential risks, support for relevant recruitment efforts, and infrastructure for people working on longtermist projects
- Promoting long-term thinking
Featured grants with outstanding outcomes
| Grantee | Amount | Date | Grant purpose |
| --- | --- | --- | --- |
| Logan Smith | $40,000 | 2024 Q3 | 6-month stipend to create language model (LM) tools to aid alignment research through feedback and content generation |
| Robert Miles | $121,575 | 2023 Q3 | 1-year stipend to make videos and podcasts about AI safety/alignment, and to build a community to help new people get involved |
| Jeffrey Ladish | $98,000 | 2023 Q1 | 6-month salary and operational expenses to start a cybersecurity and alignment risk assessment organization |
| Gabriel Mukobi | $40,680 | 2023 Q3 | 9-month university tuition support for technical AI safety research focused on empowering AI governance interventions |
| Sage Bergerson | $2,500 | 2022 Q4 | 5-month part-time stipend for collaborating on a research paper analyzing the implications of compute access with Epoch, FutureTech (MIT CSAIL), and GovAI |
Payout reports
| Payout date | Total grants | No. of grantees |
| --- | --- | --- |
| 2024 Q2 | $5,363,105 | 141 |
| 2023 Q2 | $12,600,000 | 327 |
| 2021 Q4 | $2,123,577 | 34 |
| 2021 Q3 | $982,469 | |
| 2021 Q2 | $1,650,795 | |
Why donate to this fund?
The future could include a large number of flourishing humans (or other beings). However, it is possible that certain risks could make the future much worse, or wipe out human civilization altogether. Actions taken to reduce these risks today might have large positive returns over long periods of time, greatly benefiting future people by making their lives much better, or by ensuring that there are many more of them. Donations to this fund might help to fund some of these actions and increase the chance of a positive long-term future.
Many people believe that we should care about the welfare of others, even if they are separated from us by distance, country, or culture. The argument for the long-term future extends this concern to those who are separated from us through time. Most people who will ever exist will exist in the future.
However, the emergence of new and powerful technologies puts the potential of these future people at risk. Of particular concern are global catastrophic risks. These are risks that could affect humanity on a global scale and could significantly curtail its potential, either by reducing human civilization to a point where it could not recover, or by completely wiping out humanity.
For example, tech companies are pouring money into the development of advanced artificial intelligence systems; while the upside could be enormous, there are significant potential risks if humanity ends up creating AI systems that are many times smarter than we are, but that do not share our goals.
As another example, previous disease epidemics, such as the bubonic plague in Europe or the introduction of smallpox into the Americas, were responsible for many millions of deaths. A genetically engineered pathogen to which few humans had immune resistance could be devastating on a global scale, especially in today’s hyper-connected world.
In addition to supporting direct work, it’s also important to advocate for the long-term future among key stakeholders. Promoting concern for the long-term future of humanity — within academia, government, industry, and elsewhere — means that more people will be aware of these issues, and can act to safeguard and improve the lives of future generations.
Why you might choose not to donate to this fund
1. You don’t think that we should focus on the long-term future
2. You don’t think that future or possible beings matter, or that they matter significantly less
3. You have a preference for supporting more established organizations
4. You are pessimistic about room for more funding
5. You have identified projects or interventions that seem more promising to you than our recommendations
6. You are skeptical of the risks posed by advanced artificial intelligence
7. You have different views about how to improve the long-term future
Donors might conclude that improving the long-term future is not sufficiently tractable to be worth supporting. It is very difficult to know whether actions taken now are actually likely to improve the long-term future. To gain feedback on their work, organizations must rely on proxy measures of success: Has the public become more supportive of their ideas? Are their researchers making progress on relevant questions? Unfortunately, there is no robust way of knowing whether succeeding on these proxy measures will cause an improvement to the long-term future. Donors who prefer tractable causes with strong feedback loops should consider giving to the Global Health and Development Fund.
Fund managers
- Caleb Parikh, Project Lead
- Linchuan Zhang, Fund Manager
- Lawrence Chan, Fund Manager
- Thomas Larsen, Fund Manager
- Oliver Habryka, Fund Manager
- Daniel Eth (Independent), Fund Manager
- Eli Lifland, Guest Fund Manager
Fund advisors
- Nicole Ross, Fund Advisor
- Jonas Vollmer, Fund Advisor at Effective Altruism Infrastructure and Long-Term Future Fund
- Catherine Low, Fund Advisor
- Asya Bergal, Fund Advisor