EP05: Artificial Intelligence

An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

Researchers have been working seriously on creating human-level intelligence in machines since at least the 1940s, and starting around 2006 that wild dream became truly feasible. Around that year, machine learning took a huge leap forward with the resurgence of artificial neural nets, algorithms that are not only capable of learning, but can also learn on their own. The rise of neural nets signals a big and sudden move down a dangerous path: machines that can learn on their own may also learn to improve themselves. And when a machine can improve itself, it can rewrite its code, make improvements to its structure – and get better at getting better. At some point, a self-improving machine will surpass the level of human intelligence – becoming superintelligent. At that point, it will become capable of taking over everything from our cellular networks to the global internet infrastructure. And it’s about here that the existential risk artificial intelligence poses to humanity comes in. We have no reason to believe that a machine we create will be friendly toward us, or even consider us at all. A superintelligent machine in control of the world we’ve built and with no capacity to empathize with humans could lead directly to our extinction in all manner of creative ways, from repurposing our atoms into new materials for its expanding network to plunging us into a resource conflict we would surely lose. There are some people working to head off catastrophe-by-AI, but with each new algorithm we release that is capable of improving itself, a new possible future existential threat is set loose.

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
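The “get better at getting better” dynamic described above is, at bottom, compound growth. The sketch below is only a minimal illustration of that idea; the starting level, improvement rate, and threshold are made-up numbers, not figures from the episode.

    # Minimal sketch of recursive self-improvement as compound growth.
    # Every number here is an illustrative assumption, not a claim from the episode.

    capability = 1.0          # call "human level" 1.0, in arbitrary units
    improvement_rate = 0.10   # assume each round of self-improvement adds 10%
    rounds = 0

    # Each round, the system uses its current capability to improve itself,
    # so the gains compound rather than adding up linearly.
    while capability < 1000:  # an arbitrary stand-in for "vastly superhuman"
        capability *= 1 + improvement_rate
        rounds += 1

    print(f"Passed the threshold after {rounds} rounds, at {capability:.0f}x human level")
    # With these made-up constants that takes about 73 rounds; the point is the
    # exponential shape of the curve, not the specific numbers.

A slightly higher improvement rate crosses the same threshold in far fewer rounds, which is one way to see why the jump from human-level to superintelligent could be abrupt.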

EP04: Natural Risks

Humans have faced existential risks since our species was born. Because we are Earthbound, what happens to Earth happens to us. Josh points out that there’s a lot that can happen to Earth – like gamma ray bursts, supernovae, and a runaway greenhouse effect. (Original score by Point Lobo.)

Because humanity is an Earthbound species – we have no way to get ourselves off of Earth and live elsewhere in the universe quite yet – if something terrible happens to Earth, it happens to us as well. It would be in our best interests to get busy working on getting ourselves off of Earth as soon as possible and becoming a spacefaring species, because Earth has a long history of suffering terrible, life-erasing events. There have been at least five mass extinctions in Earth’s history, from the one brought on by an asteroid that killed off the dinosaurs and about three quarters of the species on Earth, to another one hundreds of millions of years earlier that may have been triggered by a gamma ray burst, when a meager four percent of all life on Earth managed to hang on. What are the mechanics behind catastrophes on these scales, and what impacts would they have on humanity? In this episode, Josh looks at what are called natural existential risks and explores what some of them would be like for humans and for Earth itself.

Interviewees: Robin Hanson, George Mason University economist (creator of the Great Filter hypothesis); Ian O’Neill, astrophysicist and science writer; Toby Ord, Oxford University philosopher.

EP03: X Risks

Humanity could have a future billions of years long – or we might not make it past the next century. If we have a trip through the Great Filter ahead of us, then we appear to be entering it now. It looks like existential risks will be our filter. (Original score by Point Lobo.)

Humanity could have an extremely long future ahead of it, potentially stretching billions of years and spreading out across the universe. All of those planets and stars and all that energy out there could be used for amazing projects. We might digitize human consciousness and upload ourselves onto servers where we can simulate paradise. We might end scarcity, so that every person alive has everything they could ever want. It’s basically out of our ken to imagine what future humans will come up with, but we can imagine it will be pretty great to be alive in the eons ahead. The thing is, those humans to come are depending on us for that future. Those of us alive today are entering what may be the most dangerous period in the entire span – past or future – of the human race. We are beginning to face existential risks – threats to the very existence of our species, threats big enough to actually drive humanity to extinction. And if we go extinct, not only do we die, but that whole bright future and all the quadrillions of humans to come will be lost forever too.

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Robin Hanson, George Mason University economist (creator of the Great Filter hypothesis); Toby Ord, Oxford University philosopher; Sebastian Farquhar, Oxford University philosopher.
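Where a figure like “quadrillions of humans to come” comes from can be seen with back-of-the-envelope arithmetic. The sketch below uses assumed round numbers (a steady population and Earth’s remaining habitable lifetime), not figures from the episode.

    # Rough arithmetic behind "quadrillions of future humans".
    # Both inputs are assumed round numbers for illustration only.

    people_alive_per_century = 10e9   # assume ~10 billion people, turning over about once a century
    centuries_left = 10_000_000       # assume ~1 billion more years of a habitable Earth

    future_lives = people_alive_per_century * centuries_left
    print(f"Roughly {future_lives:.0e} future lives")  # ~1e+17, about a hundred quadrillion

Allowing for humanity spreading beyond Earth pushes the total far higher, which is the episode’s point: almost all of humanity’s potential lies in the future.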

EP02: Great Filter

The Great Filter hypothesis says we’re alone in the universe because the process of evolution contains some filter that prevents life from spreading into the universe. Have we passed it, or is it in our future? Humanity’s survival may depend on the answer. (Original score by Point Lobo.)

Maybe the reason we seem to be alone in the universe is because we truly are. But this answer to the Fermi paradox only raises more questions: Why us? Why should we be the only intelligent life in an observable universe some 93 billion light years across? Perhaps it turns out that the evolution of intelligent life is actually really, really hard. The Great Filter hypothesis supposes that the universe seems to be empty because there is some do-or-die step between the point where life emerges and the point where an intelligent civilization spreads out into the universe (where we would notice it) that is so impossible it has thus far killed off every form of life that’s emerged, across the entire universe and throughout all of time. If that’s true, then the reason humans are the only intelligent life in the universe is that we’re the only life to have made it past that impossible step – the only life to have made it through the Great Filter. And if that’s the case, then we humans have a bright, long future ahead of us, and the entire universe is there for the taking, to be used in any way we can dream up. But there’s a catch to the Great Filter hypothesis – that impossible step could also lie ahead of us. We humans are not quite at that last point where we’ve begun to spread out from Earth and colonize the galaxy. And if there’s something that has killed off every other civilization just before it could spread out from its home planet, then we still face that same impossible trip through the Great Filter. And if no other life in the more than 13-billion-year history of the universe has made it through, the odds are not in our favor that we will.

Interviewees: Robin Hanson, George Mason University economist (creator of the Great Filter hypothesis); Toby Ord, Oxford University philosopher; Donald Brownlee, University of Washington astrobiologist (co-creator of the Rare Earth hypothesis); Phoebe Cohen, Williams College paleontologist.
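The closing argument, that zero successes so far implies long odds for us, can be made concrete with a small sketch. The count of “candidate worlds” below is an assumed order-of-magnitude figure, not one from the episode.

    # Sketch of the Great Filter intuition: if no world has ever passed some
    # do-or-die step, the implied per-world success probability must be tiny.
    # The count of candidate worlds is an assumed order-of-magnitude figure.

    candidate_worlds = 1e22   # roughly one candidate world per star in the observable universe

    # For zero successes among that many tries to be unsurprising, the expected
    # number of successes (p * worlds) has to be well under 1, so p <= 1/worlds.
    implied_max_p = 1 / candidate_worlds

    print(f"Implied chance of any one world passing the filter: about {implied_max_p:.0e}")
    # ~1e-22. If the filter still lies ahead of us, that is roughly our chance as well.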

EP01: Fermi Paradox

Ever wondered where all the aliens are? It’s actually very weird that, as big and old as the universe is, we seem to be the only intelligent life. In this episode, Josh examines the Fermi paradox, and what it says about humanity’s place in the universe. (Original score by Point Lobo.)

There’s a concept called the Fermi paradox that asks where all the aliens are. Because the universe is so amazingly old and so astoundingly vast, intelligent life should have evolved perhaps trillions of times by now. At this point, the universe should be so teeming with intelligent civilizations that Earth should be fully colonized by aliens. In other words, we should be as sure that there is other intelligent life in the universe as we are that there are people living in Denmark. And yet, all of our searches for signs of intelligence have come up empty-handed – we appear to be the only intelligent life in the whole wide universe. This is very weird. A rainbow of explanations has been developed over the years to make sense of the cosmic emptiness we see, from the idea that we are being kept in a zoo to the suggestion that the aliens have gone post-biological – loaded themselves into digital formats – and are hibernating until the universe cools down so the computers that run them can process information more efficiently. But as far out as some answers to the Fermi paradox are, perhaps the strangest one of all is that we really are alone.

Interviewees: Anders Sandberg, Oxford University philosopher and co-creator of the Aestivation hypothesis; Seth Shostak, senior astronomer at the SETI Institute; Toby Ord, Oxford University philosopher.
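The claim that intelligent life “should have evolved perhaps trillions of times” is a Drake-equation-style estimate. The sketch below multiplies out a chain of assumed round numbers; none of the factors are from the episode, and changing any of them changes the answer, which is exactly the point of the exercise.

    # Drake-style back-of-the-envelope estimate behind the Fermi paradox.
    # Every factor below is an assumed round number for illustration only.

    stars = 1e23                      # order-of-magnitude star count for the observable universe
    frac_with_planets = 0.5           # assume half of stars host planets
    habitable_per_system = 0.1        # assume 1 in 10 planetary systems has a habitable world
    frac_life_starts = 0.01           # assume life arises on 1% of habitable worlds
    frac_becomes_intelligent = 0.001  # assume intelligence emerges on 0.1% of living worlds

    expected_civilizations = (stars * frac_with_planets * habitable_per_system
                              * frac_life_starts * frac_becomes_intelligent)

    print(f"Expected intelligent civilizations: about {expected_civilizations:.0e}")
    # ~5e+16 with these made-up inputs, far more than "trillions", and yet we observe none.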

Trailer 2: Bill, Elon and Stephen

Why are smart people warning us about artificial intelligence? As machines grow smarter and able to improve themselves, we run the risk of them developing beyond our control. But AI is just one of the existential risks emerging in our future.

Trailer: The End Of The World with Josh Clark

We humans could have a bright future ahead of us that lasts billions of years. But we have to survive the next 200 years first.

The End Of The World is a 10-episode deep dive by podcast pioneer Josh Clark into the world of existential risks, where breathtaking future tech and science put humanity on the razor’s edge between a future that could last billions of years and abrupt extinction.