Below are posts associated with the “epistemology” tag.
404 Media podcast on generative AI and epistemology
I’m a big fan of the 404 Media tech news outlet, and I also really enjoy their podcast. I especially appreciated an episode that I listened to yesterday, which I’m embedding below as a YouTube video. (As an aside, I simply do not understand how YouTube has become a major podcast-listening medium, so it pains me a bit to do this. But I’m once again trying to write something quickly before getting to real work, and YouTube embeds are relatively easy to do in Hugo, so that’s what I’m going with.)
🔗 linkblog: How Elon Musk Is Remaking Grok in His Image
Perhaps the best demonstration yet of why we need to talk about epistemology when we talk about generative AI. Gift link. It turns out that it takes an awful lot of intervention to get Grok to be “maximally truth-seeking” and “neutral.”
🔗 linkblog: New AI model: ChatGPT-5: “It’s like talking to a PhD-level expert”
In my view, “PhD-level” expertise is above all tied to the process of knowing, not to knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. Using generative AI to review papers—and trying to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Grok searches for Elon Musk’s opinion before answering tough questions
Look, I really will stop posting about Grok and epistemology, but the news stories keep coming.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge. Rather, I think it demonstrates (or should demonstrate) a commitment to the process of knowledge production, and LLMs cannot truly compete with humans there.
on Grok, other LLMs, and epistemology
Yesterday, I blogged (en français) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that emphasizes (among other values) tension and conflict. Ellul explores this ethic—one of non-power—in a few different writings that feel like different drafts of the same thing, and so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?
🔗 linkblog: ‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
More on why we need to talk epistemology when we talk generative AI:
Musk tweaking his AI model to be more aligned with right-wing edgelords was inevitable, but there’s a broader point to be made: each AI model is a black box that supposedly gives objective answers but in reality is shaped by its owners. As more people look to AI to learn about the world, the people who control how it’s trained and how it responds will control our prevailing narratives.
🔗 linkblog: Google, from search engine to answer engine
This is why we need to talk about the theory of knowledge when we talk about AI:
We’ve gone from a search engine to an answer engine. That is, the algorithms offer written-up versions composed from the data they’ve collected across the Internet, then reformulated, without you ever having visited the sites that contain the elements of the answer to your query.