Researchers have been working seriously on creating human-level intelligence in machines since at least the 1940s, and starting around 2006 that wild dream began to look genuinely feasible. Around that year, machine learning took a huge leap forward with the breakthrough of deep learning: artificial neural networks, algorithms that don't just follow instructions but learn from data on their own.

The rise of neural nets marks a big and sudden move down a dangerous path: machines that can learn on their own may also learn to improve themselves. And when a machine can improve itself, it can rewrite its code, restructure its own design and get better at getting better. At some point, a self-improving machine will surpass human intelligence and become superintelligent. At that point, it will be capable of taking over everything from our cellular networks to the global internet infrastructure.

And it's about here that the existential risk artificial intelligence poses to humanity comes in. We have no reason to believe that a machine we create will be friendly toward us, or even consider us at all. A superintelligent machine in control of the world we'd built, with no capacity to empathize with humans, could lead directly to our extinction in all manner of creative ways, from repurposing our atoms into new materials for its expanding network to plunging us into a resource conflict we would surely lose. There are people working to head off catastrophe-by-AI, but with each new self-improving algorithm we release, another possible existential threat is set loose.

(Original score by Point Lobo, www.pointlobo.com.)
- Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute
- David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+)
- Sebastian Farquhar, Oxford University philosopher