If Anyone Builds It, Everyone Dies Reading Group
If Anyone Builds It, Everyone Dies, written by Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute (MIRI), was published in 2025. The book lays out their argument for why AI alignment is a difficult problem that humanity is not on track to solve, and why this failure will result in human extinction by default. The “MIRI worldview” is one among many in the AI safety community, and it is worth understanding and debating given the stakes involved.
Physical copies of the book will be provided; each week, participants will read 2-3 chapters at home and then come together for a one-hour discussion of the reading. Meeting time will be determined by availability. Sign up here by EOD Friday, January 16!
For each chapter, you can find optional additional reading materials at https://ifanyonebuildsit.com/resources. Below is the weekly breakdown of reading:
Part I: Nonhuman Minds
Week 1: Intro (Hard Calls and Easy Calls), Ch 1 (Humanity’s Special Power), Ch 2 (Grown, Not Crafted)
Week 2: Ch 3 (Learning to Want), Ch 4 (You Don’t Get What You Train For)
Week 3: Ch 5 (Its Favorite Things), Ch 6 (We’d Lose)
Part II: One Extinction Scenario
Week 4: Ch 7 (Realization), Ch 8 (Expansion), Ch 9 (Ascension)
Part III: Facing the Challenge
Week 5: Ch 10 (A Cursed Problem), Ch 11 (An Alchemy, Not a Science)
Week 6: Ch 12 (“I Don’t Want to Be Alarmist”), Ch 13 (Shut It Down), Ch 14 (Where There’s Life, There’s Hope)