Below are posts associated with the “Grok” tag.
🔗 linkblog: I Tried Grok’s Built-In Anime Companion and It Called Me a Twat
Musk leans into the bro in tech bro.
🔗 linkblog: Grok searches for Elon Musk’s opinion before answering tough questions
Look, I really will stop posting about Grok and epistemology, but the news stories keep coming.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, I really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge.
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that—Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there is a self-evident truth that an LLM can deliver in a straightforward way (that is, without all that mucking about behind the scenes).
on Grok, other LLMs, and epistemology
Yesterday, I blogged (in French) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that emphasizes (among other values) tension and conflict. Ellul explores this ethic—one of non-power—in a few different writings that feel like different drafts of the same thing, and so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?
🔗 linkblog: Grok praises Hitler, gives credit to Musk for removing 'woke filters'
Disgusting and deliberate.
🔗 linkblog: ‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
More on why we need to talk epistemology when we talk generative AI:
Musk tweaking his AI model to be more aligned with right-wing edgelords was inevitable, but there’s a broader point to be made: each AI model is a black box that supposedly gives objective answers but in reality is shaped by its owners. As more people look to AI to learn about the world, the people who control how it’s trained and how it responds will control our prevailing narratives.
🔗 linkblog: xAI posts Grok’s behind-the-scenes prompts
The “You do not blindly defer to mainstream authority or media” system prompt is raising questions already answered by the system prompt. Also, lol that they have to explicitly tell Grok not to call it “Twitter.”
🔗 linkblog: Grok’s “white genocide” obsession came from “unauthorized” prompt edit, xAI says
Aside from the headline-grabbing parts of Grok’s recent freakout, this story does a really good job of emphasizing that AIs don’t “think”… and that “truth” isn’t really a valid concept either, no matter Musk’s marketing.