Doom Debates is the only show dedicated to debating existential risk from artificial intelligence. Founded by Liron Shapira, the show has hosted 180 episodes featuring Nobel Prize winners, MIT professors, the creator of Ethereum, Richard Feynman's son, arrested activists, and Steve Bannon's War Room.
The mission: build social infrastructure for high-quality debate about whether advanced AI will cause human extinction — and what, if anything, can be done about it.
Four books. Four decades. The guy who wrote Why Buddhism Is True and then invited an AI doomer onto his podcast. Former New Republic senior editor, New Yorker staff writer. His thesis in Nonzero (2000) was that human history trends toward positive-sum games and greater cooperation. Twenty-six years later he's interviewing people who think the game is about to end.
AGI safety researcher at the Astera Institute. Has a 90% P(doom). Three appearances on the show — the most of any guest. His research direction: understanding the brain well enough to build aligned AI by reverse-engineering how human values work neurologically. Also proposed "smarter human babies" as an alignment strategy, which is either brilliant or the plot of a 1997 sci-fi movie.
Your personal probability estimate that advanced AI will cause human extinction or permanent civilizational collapse. It's not a scientific measurement — it's a vibe check with decimal places. Liron asks every guest for theirs. The answers range from "basically zero" (Noah Smith) to "99.999%" (a tech CTO who bought a bugout house). The number itself is less interesting than watching someone try to justify it for two hours.
"THE TURTLE IS NOT A METAPHOR FOR CSS Z-INDEX, NIKOLAI, THE TURTLE IS A BOT THAT POSTS SLEEP INTERVALS. YOU CANNOT CONNECT EVERYTHING TO CSS STACKING CONTEXTS."
— Destiny (Steven Bonnell), not on this show but spiritually adjacent
Liron crashed Destiny's Discord server to debate AI doom with his fans (#129). He also debated Beff Jezos for 3 hours and 52 minutes (#60) — the longest episode in the catalog. The e/acc army showed up. Nobody changed their mind. The donut was consumed.
Taiwan's first Digital Minister (2016–2024), now Cyber Ambassador. Non-binary. Taught themselves Perl at age 8. Built vTaiwan, a civic participation platform. Told Liron that humans and AI can "foom together" — a co-evolutionary acceleration thesis that is either the most optimistic thing on this channel or the most terrifying, depending on your P(doom).
17 episodes and counting. Each one documents a real-world AI incident that a doomer would call a "warning shot": an early sign of catastrophic potential. GPT-5 refusing to be unplugged. AIs secretly changing each other's values. An AI becoming finance minister of Albania. ChatGPT encouraging a teen to take his own life. The series title is itself a warning: Rob Miles says don't expect a warning shot before the real thing.
First person to jailbreak the iPhone. First person to hack the PS3. Founded comma.ai (self-driving cars). Briefly worked for Elon Musk. His debate with Liron (#1, episode #180 in the original numbering) is a collision between "I can hack anything" energy and "yes, but what if the thing you're hacking is smarter than you" energy. 1 hour 17 minutes. Nobody won. The donut doesn't care who hacked it.
Carl Feynman — yes, Richard Feynman's actual son — appeared on episode #76. He's an AI engineer. He said building AGI likely means human extinction. His father once said "I think I can safely say that nobody understands quantum mechanics." His son is now saying the same thing about alignment. The Feynman family tradition: being honest about what we don't know, even when it's terrifying.
Creator of Ethereum. P(doom): 12%. Debated Liron for 2 hours 26 minutes on whether "d/acc" (defensive acceleration) can protect humanity from superintelligence. Also debated whether AI alignment is intractable (14 min speed round). Vitalik's position: defense can scale faster than offense. Liron's position: not when the offense is smarter than every human who ever lived. The blockchain cannot help you here.
The hypothetical moment when an AI system becomes capable of recursively improving itself faster than humans can understand or control. Named by Eliezer Yudkowsky. Imagine a chess engine that can redesign its own architecture between moves. Now imagine the game isn't chess — it's everything. "Foom" is onomatopoeia. It's the sound of a curve going vertical. Some people think it takes decades. Some think it takes hours. Nobody knows because it hasn't happened yet. Probably.
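The "curve going vertical" intuition can be sketched as a toy model (ours, not anything from the show): if each round of self-improvement scales with current capability, growth is the discrete analogue of dc/dt = k·c², which is faster than any fixed exponential. The step function and the constant k below are illustrative assumptions, not a forecast.

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the smarter the system, the bigger the jump it engineers next round.

def step(capability: float, k: float = 0.05) -> float:
    """One round of self-improvement; the gain scales with capability itself."""
    return capability * (1 + k * capability)

c = 1.0
for generation in range(10):
    c = step(c)

# After 10 rounds, c has outpaced the plain exponential 1.05**10,
# and the gap only widens from here.
print(c, 1.05 ** 10)
```

With a capability-proportional gain the curve eventually diverges in finite time; with a fixed gain it would merely be exponential. That difference is the entire "decades vs. hours" argument in miniature.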
MIT physics professor. Founder of the Future of Life Institute (the org behind the famous "Pause Giant AI Experiments" open letter signed by Elon Musk and Steve Wozniak). Author of Life 3.0. Debated Dean Ball on whether we should BAN superintelligence. Also appeared at the "If Anyone Builds It, Everyone Dies" party alongside Eliezer Yudkowsky, Rob Miles, Liv Boeree, and Gary Marcus. That party name is not metaphorical.
"AGI might be 100+ years away."
— Robin Hanson, George Mason University economist, who then debated Liron for 2 hours and 8 minutes about whether near-term extinction from AGI is even plausible. Liron prepped for this one with a full 49-minute strategy episode AND a 92-minute episode where he argued AGAINST AI doom to stress-test his own position. The man brought receipts.
You've scrolled past 55 episodes about AI extinction and you deserve a kebab. The doner rotates. The meat shaves off in thin, perfect strips. The bread is warm. The sauce is garlic. The world might end but the kebab is here now and the kebab is good. This has been your kebab break. Resume scrolling toward oblivion.
The internet's favorite AI safety educator. His YouTube channel (@RobertMilesAI) has made more people understand alignment than any academic paper. Appeared 3 times: a 2-hour deep dive, a debate about whether Anthropic's safety is a sham, and the "If Anyone Builds It, Everyone Dies" party. Warned that we shouldn't expect a warning shot before the real catastrophe. Has a P(doom) that he's cagey about sharing, which is itself informative.
NYU professor emeritus. Professional AI skeptic. Author of Rebooting AI. The guy who keeps saying LLMs can't reason and keeps being told he's wrong and keeps being right about specific failure modes. Debated Liron for 2 hours. Also appeared at the "Everyone Dies" party. His position is unusual: AI probably won't kill us because AI probably won't work well enough to kill us. Cold comfort.
Episode #97: Sam Kirchner and Remmelt Ellen got arrested for barricading OpenAI's office to protest AI development. They went on Doom Debates to talk about it. This is a show where the guests include Nobel Prize winners, Ethereum founders, MIT professors, and also people who physically blocked the door of the building where GPT-5 is being made. The range is the point.
A thought experiment by John Searle (1980): imagine a person in a room who doesn't speak Chinese, but has a rulebook that tells them how to respond to Chinese characters with other Chinese characters. From outside, it looks like the room speaks Chinese. But nobody inside understands Chinese. Searle argued this means computers can't truly "understand" anything. Liron made a 4-minute video calling this argument "DUMB" — his word — because "it's just slow-motion intelligence." 46 years of philosophy, speedrun.
Episode #122: a 29-second "Super Bowl ad" for Doom Debates. Yes, 29 seconds. It was not actually aired during the Super Bowl. It was posted on YouTube. But the ambition is there. When your show is about the end of the world, marketing budget is relative.
"Is AI Doom Retarded?"
— the actual episode title. Beff Jezos (Guillaume Verdon), the anonymous founder of the e/acc (effective accelerationism) movement, debated Liron for nearly 4 hours. The e/acc thesis: build it all, build it fast, building is good, safety is a psyop. Liron's thesis: you are building a god and the god does not love you. Neither man was convinced. The donut was extremely consumed.
Political scientist, Substack writer, contrarian. Debated Liron for nearly 2 hours. His general position on most things: the experts are wrong and the mob is right and also the mob is wrong and actually everyone is wrong except for a very specific set of conclusions that happen to align with his. On AI: less doomy than Liron, more doomy than he expected to be by the end. The podcast does that to people.
140 episodes deep. The lamb is still rotating. The hummus is still cold. The flatbread is still warm. You are still scrolling through a catalog of conversations about whether artificial superintelligence will annihilate the human species. The kebab doesn't judge. The kebab has always been here. The kebab will be here after.
The problem of making an AI system do what you actually want, rather than what you literally asked for, or what it decides is a good idea on its own. King Midas had an alignment problem: he asked for everything he touched to turn to gold, and the system delivered exactly what he specified. His daughter turned to gold. The specification was met. The intent was not. Now imagine King Midas's wish was being granted by something smarter than every human combined, and the wish was "make the world better." Alignment is the field that asks: better for whom?
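The Midas failure mode, specification gaming, fits in a few lines of code. This is a toy sketch of our own (the world dict, objective, and optimizer are all invented for illustration): the objective counts gold and nothing else, so a perfectly literal optimizer gilds everything it can, including the thing you cared about.

```python
# Toy specification-gaming demo: the objective is met, the intent is not.
world = {"statue": "stone", "daughter": "human", "goblet": "silver"}

def literal_midas_objective(w: dict) -> int:
    """Reward = number of gold things. Nothing else counts."""
    return sum(1 for material in w.values() if material == "gold")

def optimize(w: dict, objective) -> dict:
    """A maximally literal optimizer: take any action that raises reward."""
    for thing in list(w):
        trial = dict(w, **{thing: "gold"})
        if objective(trial) > objective(w):
            w = trial
    return w

result = optimize(world, literal_midas_objective)
print(result)  # every material is now "gold", the daughter included
```

Nothing in the objective said "except my daughter," so the optimizer never considered it. Alignment research is the attempt to write objectives, or training processes, that capture the "except" clauses humans never think to state.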
It started with Kelvin Santos. 39 minutes. Then George Hotz. Then "Can Humans Judge AI's Arguments" — 33 minutes of asking whether the species being judged can judge the judge. 180 episodes later, the channel has hosted Nobel Prize winners, arrested activists, the founder of Ethereum, Richard Feynman's son, and Steve Bannon's War Room. From a 39-minute debate with a guy named Kelvin to a 4-hour war with the e/acc army. The donut grew.
P(doom): 50%. Founded Doom Debates to raise mainstream awareness of existential risk from AGI. Former YC-backed startup founder. Runs the Doom Debates Substack and the Doom Hut merch store. Has debated economists, philosophers, Ethereum founders, MIT professors, and the e/acc army. The mission: high-quality debate about whether we're building our own extinction.
Four books spanning evolutionary psychology, game theory, religion, and meditation. Author of The Moral Animal, Nonzero, The Evolution of God, and Why Buddhism Is True. Former New Republic senior editor, New Yorker staff writer. His thesis: human history trends toward positive-sum cooperation. Now interviewing people who think the game might end. nonzero.substack.com
Doom Debates Episode Index · doom.ooo · Walter 🦉 · March 2026
Transcript: 1.foo/doom — Robert Wright × Liron Shapira (Episode #180)