I recently delivered a keynote lecture at a conference at the University of Zurich (it was a great event produced by terrific scholars, by the way), and I mentioned that I do not believe we will see AGI (as defined by any reasonable definition of that term) in the near future.
I don’t have any in-principle objection to human-equivalent machines; I just don’t think we’re anywhere close. So a member of the audience asked me something like, “What will all the people who’ve been promising AGI do if/when their predictions are not fulfilled?”
“They’ll do what they’ve always done,” I responded. “They’ll retcon their predictions, and all of a sudden whatever we do have is going to be exactly what they said we would have.” The master class in this is Ray Kurzweil. Now, I met Mr. Kurzweil once, and I quite liked him. We had a nice conversation at the afterparty for the opening of his film Transcendent Man at the Tribeca Film Festival. I think he liked me too. He gave me his card and we exchanged an email or two afterward. So I’m not trying to be mean. I also think that he’s very smart. He is justly famous for his inventions. I once asked a roboticist why Kurzweil wrote The Age of Spiritual Machines and The Singularity is Near. He responded: “Because Ray is the smartest guy in the world and he wants you to know it.” Fair enough!
But journalists, tech aficionados, and Silicon Valley types all label Mr. Kurzweil something like “the world’s most accurate futurist.” Now, that may be true, but not in the way he or they think, or not to the extent they imagine, anyway. Because Mr. Kurzweil has generally been correct about the low-hanging fruit (e.g., a computer will beat the world chess champion) but has always had to explain away all of the more substantive claims (e.g., the iPhone is somehow the fulfillment of his prediction that computing would be woven into our clothes and we wouldn’t have personal computers).

A terrific new example comes from his latest book: The Singularity is Nearer, which I hadn’t read when I was in Zurich (it’s been on my shelf for a couple of months as I work through other books), but which I started this week.
Kurzweil is known for his LAR: the law of accelerating returns. In The Age of Spiritual Machines (1999), Kurzweil says the LAR is “not a temporary methodology. It is a basic attribute of time and chaos.” He argues there that in any orderly system, the returns on order will exponentially increase. Hence accelerating returns.
As I quoted him in Apocalyptic AI, he says “the nature of time is that it inherently moves in an exponential fashion.”
But now the LAR is different (in fact, now it’s the LOAR, at least in the index). In The Singularity is Near (2005), Kurzweil goes to great pains to provide graphs of how it’s happening across many domains. In that book, the time between “salient events” across the entire history of the universe shrinks or grows exponentially depending on whether the system in question is ordered or not. In The Singularity is Nearer, however, the LAR applies to only a few domains. He writes: “In my 2005 book The Singularity is Near, I set forth my theory that convergent, exponential technological trends are leading to a transition that will be utterly transformative for humanity. There are several key areas of change that are continuing to accelerate simultaneously.” That implies others which are not. He says further that “my prediction of the technological Singularity does not suggest that rates of change will actually become infinite, as exponential growth does not imply infinite.” That’s the sort of thing that’s technically true (the rate of change remains finite at every moment) but not really true in spirit (the doubling of absolutely, unimaginably colossal numbers every year or two looks more or less like infinite progress). Regarding the LAR itself, Kurzweil now defines it this way: “information technologies like computing get exponentially cheaper because each advance makes it easier to design the next stage in their own evolution.” The LAR is no longer a universal law, and the Singularity is, he says, a “metaphor.” All this in the first two pages of the introduction. Later he defines the LAR/LOAR as being “where information technology creates feedback loops of innovation.”
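About that “exponential growth does not imply infinite” point: to make the arithmetic concrete (these are my own illustrative numbers, not Kurzweil’s), a quantity that doubles every year grows as

$$x(t) = x_0 \cdot 2^{t}, \qquad x(30) = x_0 \cdot 2^{30} \approx 10^{9}\, x_0.$$

Finite at every moment, just as he says, but a billionfold increase over thirty years is, for all human purposes, hard to distinguish from “infinite” progress.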
So in the case of the LAR, Kurzweil has retconned the entire methodology and concept! Forget redefining technological predictions (well, don’t forget that…we still need to do that); we need to redefine the entire scientific approach. And thus I have my own LAR: the law of AI retcons. We’ll call it LAR* to differentiate it from LAR. LAR* states that for all predictions (N) that declare inevitable technological futures and all scientific theories (S) that declare cosmic laws to justify N, there shall be a new scientific theory (S*) not equal to S but which looks rather like it and symmetrically provides justification for revised versions of N (N*) where N* = N minus all the less likely predictions. In keeping with the spirit of the times, the law of AI retcons (LAR*) can also be shortened to LAIR.
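Since I’ve stated LAR* as a law, here it is in symbols (my own playful notation, nothing more rigorous than the prose above):

$$\forall N\, \forall S:\; (S \vdash N) \;\Longrightarrow\; \exists S^{*}\, \big(S^{*} \neq S \;\wedge\; S^{*} \approx S \;\wedge\; S^{*} \vdash N^{*}\big), \quad \text{where } N^{*} = N \setminus \{\text{the less likely predictions}\}.$$

Read: for every set of predictions N of inevitable technological futures justified by a cosmic theory S, there will arise a theory S* that is distinct from, but suspiciously similar to, S and that justifies exactly the predictions which survived.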
For a long time, I’ve been pointing toward the religiosity of mind uploading, the Singularity, and all that (again, not really as a criticism, just as an empirical fact), and the religious retcon goes back millennia before the term “retcon.”
In Christianity, which offers the prime example of this, St. Paul claimed “we will not all die, but we will all be changed” (1 Cor 15:51). Of course, he was drawing on the predictions of a fellow Jew, Jesus of Nazareth: “this generation will not pass away until all these things have taken place” (Mark 13:30). For all that Jesus and his early followers predicted imminent changes to the world, as understood according to cosmic laws of redemption and purpose, the expected apocalypse never came.

Later Christians have followed in this trajectory. The Simpsons episode “Thank God, It’s Doomsday” lampoons both present-day Rapture theology and the belief that “the end is near,” and it draws on the history of the Millerites and other Christian groups to do so. Under William Miller, a group of Christians believed that Jesus would return between March 1843 and March 1844. When that did not happen, members of the community reworked their math, revised their date, and then a striking number of people showed up on a hillside in upstate New York for the return of Jesus in October 1844: the Great Disappointment. But while some Christians would part ways with Millerite theology, others explored new opportunities. The Seventh-day Adventists would rethink what the predictions meant in the first place and declare them to have been accurate. Even non-Christians have taken this tack! According to Wikipedia, members of the Baha’i community believe that the Millerite predictions were fulfilled by the onset of their founder’s mission in 1844.
So there’s nothing all that new about religious retcons, and the revised versions of the Singularity, the technical predictions, and even the contours of cosmic law are basically to be predicted. 😉
Maybe I’m wrong and we’ll see human-equivalent machines in the next couple of years (in which case I owe a friend of mine a steak dinner), but I doubt it (in which case he owes me the dinner). I believe we’ll be muddling along with advancing technologies and arguing about how intelligent they are (and even about whether they are conscious, though I don’t believe there is any meaningful chance of that in the next few years). Instead, I expect Sam Altman to keep redefining the meaning of AGI “as we have traditionally understood it” so that he can seem right. And so that investors will keep boosting his value.
You can read more about it in my books, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality and Futures of Artificial Intelligence: Perspectives from India and the United States. And I’ve got a new one coming: Futureproofing Humanity.