Thanks to Elon Musk, Stephen Hawking, and others, we’re seeing an increasingly public conversation about Apocalyptic AI (the belief that machines will become transcendent through rapidly accelerating technological progress, and that we will evolve into machines ourselves through cyborg technologies and/or mind uploading). Of course, Musk and Hawking earned public attention through fear-mongering, and others have found their position overwrought.
I first published on Apocalyptic AI in a series of essays over a decade ago and then in a book in 2010, but my own research typically avoids engaging with the truth-value of Apocalyptic AI claims, as well as their moral value. Instead, I’ve tried to articulate the religious logic of Apocalyptic AI, showing that authors like Hans Moravec, Ray Kurzweil, and others provide the structures for a new religious movement.
The recent enthusiasm (positive or negative) for Apocalyptic AI has provoked widespread consideration of human-machine futures, from transcendence to cataclysm, and this enthusiasm has in turn put me in conversation with some really wonderful researchers around the world. In April, I was invited to give a keynote lecture at a conference hosted by the Centre for the Critical Study of Apocalyptic and Millenarian Movements (CENSAMM), and in May I joined two other speakers for the Howard M. Garfield Undergraduate Forum at Stanford. I’m working with the Panacea Trust Project Director, Simon Robinson, to bring several papers from the CENSAMM conference to academic notice, but in the meantime you can see videos of the talks at their website. The Garfield Forum, in which I got to speak along with Sylvester Johnson of Virginia Tech and Jerry Kaplan of Stanford, can be viewed here. There are lots of interesting folks speaking at those events, and they’re well worth hearing. So if you’re interested in Apocalyptic AI, take a look at those sites.