Artificial General Intelligence – toward a (re?)definition?

Recently, a crew of scholars published an editorial in Nature alleging that contemporary large language models (LLMs) have attained the vaunted status of artificial general intelligence (AGI). Giulio Prisco, a former space scientist and longtime influential figure in transhumanism, offers an effective summary and an interesting reaction on his own webpage. One of the things that interests me about his reflections is how he separates the question of “intelligence” from that of “consciousness.” That is, even if an LLM is not conscious, could it still be intelligent? My friend Steve Kaplan and I engaged a similar idea in our paper on Hinduism and AI precisely because in Indic thought consciousness and intelligence are simply not the same thing. I tend to favor a realist position: if every reasonable person treats a machine as conscious, then we may as well go with that. Steve pointed toward the metaphysical distinctions in the terminology and would probably stick to his guns regardless of whether everyone thinks a machine is conscious.

Anyway, the Nature publication is part of a general struggle to clearly define what AGI might mean. Sam Altman famously (and foolishly) asserted on his blog a year ago that “we are now confident we know how to build AGI as we have traditionally understood it.” I’m not sure which “we” has traditionally understood AGI in a way that would make GPT fit the bill, but apparently the Nature crew agrees with him. I definitely don’t. The authors believe that a machine might be intelligent because it can work through advanced proofs even though it cannot count the words in a sentence. Now I understand that no definition of intelligence itself enjoys broad consensus. So that makes defining AGI rather hopeless from the get-go. But however you want to figure on intelligence, I’m not buying AGI for a system that cannot consistently count the vowels in a sentence or label the parts of a bicycle. After all, AGI used to mean something like “human equivalent.” Well, we aren’t there.
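To underline just how low that bar is, here is a minimal Python sketch of the vowel-counting task (the example sentence and the five-vowel convention are my own illustrative choices, not anything from the editorial). The point of the contrast: this is a deterministic, few-line computation, which is exactly why inconsistent failures at it sit oddly next to claims of general intelligence.

```python
# A trivially deterministic solution to the task LLMs are said to fail
# inconsistently: counting the vowels in a sentence.
# Illustrative sketch only; sentence and vowel set are assumptions.

def count_vowels(sentence: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in the sentence."""
    return sum(1 for ch in sentence.lower() if ch in "aeiou")

if __name__ == "__main__":
    sentence = "Artificial general intelligence remains undefined."
    print(count_vowels(sentence))  # prints 20, every time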

Does that mean we’ll never attain AGI? Absolutely not. I remain agnostic on that subject, though I believe it’s possible we will indeed end up with machines people consider intelligent, alive, and conscious. That’s why I write about how we should approach that possibility in my book, Futureproofing Humanity.
