On Sam Altman, GPT, and the reversion of AGI

For good reason, lots of folks have responded to Sam Altman’s recent post on the progress of GPT and the alchemical transmutation of OpenAI (into a profit-seeking, as distinct from profit-earning, corporate enterprise). Mostly, the attention centers on Altman’s entirely absurd claim that “We are now confident we know how to build AGI as we have traditionally understood it.”

Before I respond to that claim, I feel obliged to note that there are a host of really problematic things in Altman’s post. For example, there’s the martyr complex meant to show how much empathy we should have for him: “These years have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far.” Ahh, life has been ever so hard for Mr. Altman! There’s the justification for swapping missions, paired with a claim that nothing is really different: “Our vision won’t change; our tactics will continue to evolve. For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital.” Best of all, there’s the unsubstantiated and deeply dubious claim that “We believe in the importance of being world leaders on safety and alignment research.” It’s unclear how dismantling the safety team accords with that.

Back to the question of AGI (artificial general intelligence). As Altman promises that he knows how to bring it about (a patently untrue statement), I’m reminded of the claim (usually attributed, but surely wrongly, to John McCarthy) that once a computer can solve a problem, it’s no longer considered AI.[1] Something similar, but in reverse, is happening in Altman’s approach to AGI.

Once upon a time, when Ben Goertzel first started bandying the idea of AGI about (early in the 2000s), he was drawing on ideas previously articulated by a whole host of AI and robotics researchers. His idea was that an AGI would be functionally human equivalent: it could do all manner of things just like a human being. At least, that’s been my sense of Goertzel’s concept for 15+ years. But first Altman started saying that AGI somehow meant an AI product that delivered $100 billion in profit. And now Altman alleges he knows how to make AGI. Well, I guess that might be true if the definition of AGI is returning a profit. But that’s a ridiculous definition.

So what’s happening to the concept of AGI? Less and less intelligence is required. It’s quite the opposite of AI, where somehow more and more intelligence is required. Things stop being AI, according to many critics, when they are accomplished. Things become AGI, according to certain champions, just because they can be accomplished (as opposed to all the things that cannot).

We might manage to build something I’d consider AGI (which would look an awful lot like human equivalence), but we’re not there yet, we do not know what it would take to get there, and Sam Altman is living in his own universe. Functionally, that universe is a religious one. The dream of godlike machines (and humanity’s transition into that state) has been the subject of my work since I published a paper and a book on mind uploading and other versions of religious transformation through artificial intelligence. I think that explanation still holds. It’s not that Altman is crazy or anything; he is just a man of faith. And faith can determine how we see and act in the world.

[1] Weirdly, the quote investigation cited here attributes the claim instead to a statement evidently made by Bertram Raphael, even though it notes that Pamela McCorduck actually framed the idea in the way people now use it. Raphael’s framing is somewhat different.