Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’
Photo-Illustration: WIRED Staff; Stephen McCarthy/Getty Images

Philosopher Nick Bostrom recently posted a paper in which he postulated that a small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of "its universal death sentence." That upbeat gamble is quite a leap from his earlier dark musings on AI, which made him a doomer godfather. His 2014 book Superintelligence was an early examination of AI's existential risk; one memorable thought experiment imagines an AI tasked with making paper clips that winds up destroying humanity, because all those resource-needy people are an impediment to paper clip production. His more recent book, Deep Utopia, reflects a shift in focus: Bostrom, who leads Oxford's Future of Humanity Institute, dwells on the "solved world" that arrives if we get AI right.
STEVEN LEVY: Deep Utopia is more optimistic than your previous book. What changed for you?
NICK BOSTROM: I call myself a fretful optimist. I am very excited about the potential for radically improving human life and unlocking possibilities for our civilization. That’s consistent with the real possibility of things going wrong.