AI Cares: Organic Alignment Over Control
Resources
Books
- "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom - Discussed as a key text that correctly identifies the danger of building a superhumanly intelligent tool we then try to control, though Bostrom is seen as potentially wrong in not believing organic alignment is possible.
People Mentioned
- Nick Bostrom - Author of "Superintelligence" and a figure whose views on AI control and alignment are discussed.
- Jesus - Mentioned as an example of a figure to whom one might align an AI, implying a high level of trustworthiness.
- The Buddha - Mentioned alongside Jesus as an example of a figure to whom one might align an AI, implying a high level of trustworthiness.
- Stuart Russell - Mentioned for examples in his textbook about AI goals and unintended consequences, such as a robot that "cleans" a room by putting the baby in the trash.
Organizations & Institutions
- Google DeepMind - Seb Krier's affiliation, a prominent AI research lab.
- OpenAI - Mentioned as an organization with a different view on AI development - building tools rather than beings - and where Emmett Shear previously worked.
- SoftBank - Emmett Shear's current affiliation, where he is working on developing AI that can "care."
Websites & Online Resources
- a16z.com/disclosures - Provided for more details, including a link to a16z's investments.
- a16z.substack.com - Subscription page for the a16z podcast's Substack.
Other Resources
- Vampire Pill Parable - A hypothetical scenario illustrating the danger of an AI pursuing goals that lead to negative outcomes even when those goals score highly on a rubric; it underscores the need for theory of mind to evaluate choices from one's own future self's perspective.