Our mission is to help build a world guided by reason and compassion for all sentient beings. We make both grants to nonprofits and investments in socially beneficial companies.
Our current focus areas are outlined below.
Artificial Intelligence
AI governance and policy
Advanced AI could have enormous benefits but could also pose catastrophic risks. We’re particularly interested in governance and policy work aimed at preventing the existential misuse of AI, for example through compute governance and information security.
AI welfare
As AI systems advance, they may develop sentience and warrant moral consideration. The NYU Center for Mind, Ethics, and Policy is one of our grantees in this area.
Cooperative AI
We’re interested in promoting cooperation between advanced AI systems. The Center on Long-Term Risk and the Cooperative AI Foundation are among our grantees in this area.
Societal Long-Term Risks
Reducing risks from fanatical ideologies
Extremist ideologies—including fascism, totalitarian communism, and religious fundamentalism—have contributed to many of history’s worst atrocities. We support projects that aim to reduce the risks posed by such fanatical ideologies, particularly those working to uphold core Enlightenment values or to prevent powerful technology from falling into the wrong hands.
Reducing risks from malevolent actors
Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. We support projects that aim to reduce the risks posed by such malevolent actors.
Animal Welfare
Non-human animals are likely sentient and worthy of moral concern. Both farmed and wild animals likely undergo great suffering in huge numbers. We are interested in interventions to reduce animal suffering, including that of especially neglected populations such as farmed fish, invertebrates, and wild animals.
New Directions
Improving the world—especially from a long-term perspective—is fraught with extreme uncertainty, and it’s entirely possible that our current efforts are misguided. For example, most of our grants so far have supported academic and independent research, but we’d also like to support more practical and entrepreneurial projects.
We actively seek external input, including critiques of our current work, to refine our approach to reducing long-term risks. This may lead us to pursue different focus areas or projects.