Center for Mind, Brain, and Consciousness (NYU)
The NYU Center for Mind, Brain, and Consciousness, directed by David Chalmers and Ned Block, is devoted to foundational issues in the mind-brain sciences. We have supported their work on AI consciousness and welfare.
Visit website →
Center for Mind, Ethics, and Policy (NYU)
The NYU Center for Mind, Ethics, and Policy is dedicated to advancing understanding of the consciousness, sentience, sapience, and moral, legal, and political status of nonhumans, including animals and AI systems.
Visit website →
Center on Long-Term Risk
The Center on Long-Term Risk conducts research aimed at ensuring that emerging technologies such as artificial intelligence do not risk causing suffering on an unprecedented scale.
Visit website →
ChinaTalk
ChinaTalk provides analysis on China, technology, and US-China relations through newsletters and podcasts that reach key decision-makers in government and industry.
Visit website →
Consortium for Digital Sentience Research and Applied Work (Longview Philanthropy)
We have supported Longview Philanthropy’s Consortium for Digital Sentience Research and Applied Work.
Visit website →
Cooperative AI Foundation
We helped seed the Cooperative AI Foundation through a $15,000,000 commitment. Their mission is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all.
Visit website →
Eleos AI
Eleos AI conducts research on the potential sentience and wellbeing of AI systems, aiming to inform policy, guide labs, and grow the field of AI moral patienthood.
Visit website →
Foundations of Cooperative AI Lab (CMU)
We made a $3,000,000 commitment to Carnegie Mellon University to establish the Foundations of Cooperative AI Lab, headed by Vincent Conitzer. The lab's research agenda centers on developing game-theoretic foundations appropriate for advanced, autonomous AI agents, with an emphasis on achieving cooperation.
Visit website →
Institute for Law & AI
The Institute for Law & AI (LawAI) is an independent think tank that researches and advises on the legal challenges posed by artificial intelligence.
Visit website →
Request for Proposals on Hardware-Enabled Mechanisms (Longview Philanthropy)
We have supported Longview Philanthropy’s request for proposals for work on hardware-enabled mechanisms for AI verification.
Visit website →
The Alliance for Secure AI
The Alliance for Secure AI raises awareness of the serious challenges posed by advanced AI and offers ideas for solutions.
Visit website →