I know, I’m in the minority: I don’t think that ChatGPT is all that great an invention. Not only are its bias guardrails ridiculous, such as with regard to religion, but it hallucinates and always will, and it fundamentally suffers from the fact that it’s just scraped off the Internet.
Let’s pretend for a moment that it can somehow overcome the obvious lack of creativity when it predicts words based on the word sequences it has already digested (hey, maybe the occasional hallucination is actually interesting!). Let’s pretend, then, that it does something vaguely like sharing interesting information rather than, as Neil Gaiman beautifully put it, “information-shaped sentences.” Its complete and total lack of something like “understanding” (which Gary Marcus has been rightfully pointing toward since before there was ChatGPT) means that it can just keep reshuffling what’s presented to it…and much of what is presented is terrible.
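To make the “reshuffling” point concrete, here’s a deliberately crude sketch: a bigram model that can only ever emit word pairs it has already “digested” from its corpus. This is a toy, nothing like ChatGPT’s actual architecture, and the corpus and function names are mine, but it illustrates why a purely predictive mechanism is limited to recombining what it was fed.

```python
import random
from collections import defaultdict

# Toy corpus: the only text this "model" has ever digested.
corpus = "the internet is full of experts and the internet is full of noise".split()

# Map each word to the list of words that have followed it.
successors = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    successors[current].append(following)

def continue_text(start, length=6, seed=0):
    """Extend `start` by repeatedly sampling a word that followed the previous one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # dead end: nothing left to reshuffle
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("the"))
```

Every continuation it produces is stitched together from pairs it has already seen; it can surprise you with a novel-looking recombination, but it can never say anything its corpus didn’t make statistically available.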
One of Heidi Campbell’s contributions to the study of religion and the Internet was her investigation into “instant experts.” Thanks to the ease with which people can post information to the Internet, just about anyone with web design chops (or even without much of that) could opine on religious matters in a way that many Internet users found hard to distinguish from authorized voices.
Problem: the instant experts, whose expertise is often dubious, are now part of the predictive text mechanism.
Problem^2: the predictive text mechanism is now an instant expert.
I’ve argued with ChatGPT about its hallucinations. It ends up apologizing. Of course, it cannot explain why it made the error in the first place; it just keeps saying it’s a large language model and it’s sorry for the error. Hmm. Weak sauce from where I stand. The thing is, I can argue with ChatGPT about things I know. But most people would turn to ChatGPT precisely to get an answer to things they don’t know. Its expertise cannot be questioned in such an environment. And thus does it say very stupid things (here’s a collection).
Some 15ish years ago I exchanged an email or two with Jaron Lanier after I read his “One Half A Manifesto.” It was a brief exchange about how I saw most narratives about AI as religious (when I was first publishing on Apocalyptic AI, but before my first book came out). He’s pretty firmly on board with this general position. I do regret that we never had more opportunity to speak about it, as I admire his thinking and his tech ethos. If we start thinking about ChatGPT the way he does (as a kind of social connector, and AI generally as a way to accommodate tech to people rather than the other way around), and if we dump the grandiose idea that we’ve built a new intellect, maybe we’ll find uses for it that I find interesting.