ChatGPT, LLMs, and QAnon

So we’re all seeing lots of articles about GPT and other large language models leading people into delusional spirals based on “sycophantic” responses, outright fabrications, and manipulative framings. There’s a great piece in The New York Times today.

Reading through Allan Brooks’s story, I found myself surprised that no article I’ve read so far has pointed out the similarity between the GPT nonsense and QAnon nonsense. The responses follow a familiar structure: Mr. Brooks discovers some shocking truth, that truth is being denied and hidden by other forces, and his outsider view gives him special power to see it. All of this resembles the delusions of Pizzagate, the deep state, and other QAnon fabrications.

So I’m wondering how much QAnon trash went into the training data?

I’m also reminded of why it’s important to study religion. Understanding religious dynamics (both personal and social) is relevant to how we think through conspiracy theories and human belief practices. The failure of imagination at stake in AI design (i.e., the failure both to predict the current problems and to respond to them intelligently) is, in part, a failure of mindset, a failure to recognize how humanity lives and breathes. The study of religion might actually have been relevant to AI developers before they leapt into the massive deployment of their models.
