on Grok, other LLMs, and epistemology
Yesterday, I blogged (en français) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that values (among other things) tension and conflict. Ellul explores this ethic—one of non-power—in a few different writings that feel like different drafts of the same thing, so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?
Once you take in the context, this makes a lot more sense. Ellul’s concern is that technique—for the purposes of this blog post, technology, even though it bothered him for the term to be translated that way—tends to encourage and demand unity. In contrast, he argues, human society must disagree, debate, and discuss in order to survive. I already wrote some yesterday about how I think this observation is useful in the context of LLMs, but Elon Musk’s Grok has been in the news recently for worrying reasons, so I want to develop this idea a little bit more and connect it to epistemology: a recurring concern of mine related to LLMs.
some Grok history
First, let’s dig into some historical context for Grok. Back in 2023, the Associated Press reported (based on a Fox News interview between Musk and Tucker Carlson) that the billionaire was looking:
to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”
Musk’s interest in LLMs betrays a certain epistemological stance: that there exists an (implicitly self-evident) “truth,” or “nature of the universe.” While Musk’s dreams for a “TruthGPT” are built on an argument that not all LLMs are concerned with this truth, they are also clearly built on the same foundation as Ellul’s concerns: that a technology can be designed to provide and promote this truth in a way that cannot be argued against. Even if the billionaire’s efforts can be seen as entering into tension and conflict with the supposed liberal bias of other LLMs, it seems likely that he’d like to see unification around truth as understood by him and Grok.
Even if the product didn’t end up with the “TruthGPT” name, Musk’s epistemological commitments seem to be an influential part of how Grok works. A couple of months ago, in response to controversy about Grok’s tendency to bring up “white genocide,” Musk’s xAI released the system-level prompts for the LLM, which, according to The Verge, include references to “truth-seeking” as a “core belief” and an instruction to “provide truthful… insights.”
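For readers who haven’t encountered the term, a system-level prompt is just a block of operator-written instructions that gets silently prepended to every conversation a chatbot has. Here’s a minimal sketch of the mechanism in Python, using the OpenAI-style chat API as a generic stand-in; the model name and prompt text are invented for illustration (the system message loosely paraphrases the language The Verge reported), not xAI’s actual configuration:

```python
# Illustrative only: the model name and system prompt are invented
# stand-ins, not xAI's actual configuration. The point is the mechanism:
# a hidden "system" message, written by the operator rather than the
# user, frames everything the model says.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works the same way
    messages=[
        # Users never see this message, but the model treats it as
        # standing instructions for the entire conversation.
        {
            "role": "system",
            "content": (
                "You have a core belief in truth-seeking. "
                "Provide truthful insights."
            ),
        },
        {"role": "user", "content": "Is climate change real?"},
    ],
)
print(response.choices[0].message.content)
```

Whoever controls that first message controls the frame, which is exactly why edits to it keep making the news.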
shaky epistemological foundations
Here’s the thing, though. It sounds great to appeal to truth. It’s hard to argue against truth. However, truth isn’t as self-evident as Musk’s epistemology claims it to be. (To be fair, I spent large parts of my life with a similar epistemological commitment to truth, and the argument I’m making now would have been unwelcome and uncomfortable for me in my early-to-mid 20s. While I’m parenthesizing, I should also emphasize that I am not using the term truth as Ellul does in his writings.)
I’m not arguing that truth doesn’t exist, just that it’s actually quite difficult to determine and access. To take an example from a qualitative research methods book I used in grad school: either anthropogenic climate change is happening or it is not. That’s a question of truth. However, as firmly as I accept the widespread scientific consensus that anthropogenic climate change is truly happening, with great consequences for our species and our planet, I also think it’s important to acknowledge that establishing the reality of this man-made disaster is… not straightforward.
Rather, our confidence in it is the product of huge amounts of scientific research making elaborate arguments based on complicated statistical models, and the same “lack of another earth” that makes it so damn important to respond effectively to anthropogenic climate change also robs us of the kind of experimental methods that scientists would really prefer to use to establish this kind of thing: there is no second planet to hold back as a control group. This is a very important truth, but it’s not the kind of casually self-evident truth that Musk believes an LLM can produce.
In fact, we could argue that it is precisely through the tensions and conflicts that Ellul values that we arrived at the scientific consensus around anthropogenic climate change. I’d go even further and say that everything we know as true—even the most seemingly self-evident mathematical and scientific truths—can only be arrived at through tension and conflict. In my view as a (social) scientist, if we want to talk about truth, we have to talk about the processes by which we argue over and determine what truth is.
Musk’s entrance into conflict
Furthermore, as I suggested earlier, the release of Grok can be seen as “entering into tension and conflict” with other LLMs. According to the AP, Musk’s dream of TruthGPT was driven in part by a concern that LLMs were being trained to be “politically correct.” However, his response does not appear to have been a hands-off approach that seeks truth wherever it leads, but rather an intentional push in the other direction. I earlier quoted The Verge’s excerpt of Grok’s system prompts as “provide truthful… insights,” but I intentionally left out a couple of words there for dramatic effect. The full quote is, with my emphasis: “provide truthful and based insights.”
That brings me around to the news stories that—alongside my reading of Ellul—inspired this post. Let’s go once again to an article from The Verge:
On Tuesday, X users observed Grok celebrating Adolf Hitler and making antisemitic posts, and X owner xAI now says it’s “actively working to remove” what it calls “inappropriate posts” made by the AI chatbot. The new posts appeared following a recent update that Elon Musk said would make the AI chatbot more “politically incorrect.” Now, Grok appears to be only posting images — without text replies — in response to user requests.
In short, it looks like Musk’s distaste for “political correctness” has led him to put his finger on the scale of how Grok operates (and, presumably, understands truth). Grok has been controversial a number of times in the past, but this really seems to have crossed a line, with the chatbot explicitly praising Hitler and making the company backtrack on its emphasis on political incorrectness.
This is horrifying and disgusting, but let’s focus for a bit on what it makes clear: even with his articulation of a naïve epistemology of “truth and nothing but,” Musk himself (through his employees) has to wade into tension and conflict (even with his own LLM) to obtain the truth that he’s looking for. I don’t believe that Musk is engaging in conflict and tension over truth in the good-faith way that the scientific community typically does, but this is a useful example of how even those who appeal to this kind of straightforward, self-evident truth tend not to find it when pressed.
broader implications for LLMs—and educational uses of them
I’ve been wanting to write about generative AI and epistemology for ages, and in this last section, I want to get at why. Musk and Grok serve as a particularly dramatic example, but they illustrate some important points about LLMs that I think deserve more attention. First, that they seem to provide a straightforward, self-evident truth for people to unify around. Sure, ChatGPT isn’t the obvious dumpster fire that Grok is, but there’s still a similarly naïve epistemology behind how it’s designed and how it’s used. Second, that despite the seemingly straightforward nature of an LLM chatbot’s responses, there’s tension and conflict going on behind the scenes to resolve what the correct answers are.
I think this second point is particularly important because of a conversation that I had, shortly after ChatGPT was released, with a friend who’s keener on AI than I am. He compared GPT-type tools to Wikipedia: we distrust both, he suggested, largely because they’re less familiar than the sources we’re used to. I understand my friend’s point, but in the years since that conversation, this comparison has rankled me. While Wikipedia can—and does—get things wrong, it actually does quite a good job of baring its epistemological foundations to the world. You can go through edit histories. You can read up on editor discussions and debates. You can get a sense of how an article got to the point that it did. You have to know to look for those things, of course, but they’re there. My impression is that GPT-type chatbots do not do this. They provide an answer and do not do nearly as much work to lay bare the conflicts and tensions that led to that answer.
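To make that contrast concrete: Wikipedia’s paper trail isn’t just visible on the site; it’s queryable. Here’s a minimal sketch, using Python’s requests library against the real, public MediaWiki API (the article title and the number of revisions are arbitrary choices for illustration):

```python
# Fetch the five most recent revisions of a Wikipedia article: who
# edited it, when, and the rationale they gave. This is the kind of
# "epistemological paper trail" discussed above.
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "revisions",
        "titles": "Climate change",          # arbitrary example article
        "rvlimit": 5,                        # five most recent edits
        "rvprop": "timestamp|user|comment",  # who, when, and why
        "format": "json",
    },
    headers={"User-Agent": "edit-history-demo/0.1"},  # API etiquette
)

for page in resp.json()["query"]["pages"].values():
    for rev in page.get("revisions", []):
        # Each revision records the editor and their stated rationale,
        # so the debate behind the article is inspectable.
        print(rev["timestamp"], rev.get("user", "?"), "-", rev.get("comment", ""))
```

There is, as far as I can tell, no analogous endpoint for a chatbot’s answer: the training choices, human feedback, and prompt edits that shaped it are not there to inspect.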
It seems to me that enthusiasm about LLMs as an educational technology is based on their ability to provide students with easy access to facts and knowledge—truth, if you will. However, that seems to me to be a variation on the same naïve epistemology that I’ve been complaining about. Some of the most interesting and progressive approaches to pedagogy and educational technology over the past century(!) have emphasized engaging students in the conflicts and tension that are part of discussing, debating, and determining truth. Good history classrooms teach students to think like historians. Good science classrooms teach students to think like scientists. This is a much more robust epistemology, one that concedes that truth takes some work to get to and invites learners to engage in that work. Why not do more of that instead of retreating to the naïve epistemology of LLMs?
conclusion
Like so many blog posts, this is a jumble of thoughts that have been on my mind and could probably use more polish. I’m not a philosopher, so some of my thinking about epistemology is probably naïve at a meta-level. I don’t have any hands-on experience with LLMs, so I’d be happy for any misunderstandings there to be corrected. For all its roughness, though, this post gets at something that’s been nagging at me for months and that I think needs a lot more attention. I hope to elaborate on this thinking more in the future.
similar posts:
🔗 linkblog: ‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
🔗 linkblog: Grok praises Hitler, gives credit to Musk for removing 'woke filters'
🔗 linkblog: xAI posts Grok’s behind-the-scenes prompts
🔗 linkblog: Grok’s “white genocide” obsession came from “unauthorized” prompt edit, xAI says
🔗 linkblog: Google, de moteur de recherche à moteur de réponse
comments:
You can click on the < button in the top-right of your browser window to read and write comments on this post with Hypothesis.