Below are posts associated with the “generative AI” tag.
🔗 linkblog: Spanish teenager accused of using artificial intelligence to create nude images of his classmates and selling them
What a rotten world awaits our children.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve evoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: High school student accused of cheating with AI on the philosophy baccalaureate exam finally receives her diploma
I don’t like the presence of AI in schools at all, but I also find it troubling when students are wrongly penalized.
🔗 linkblog: ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
You know, I skipped over this story when it came out in a couple of other outlets, but seeing the headline again here got me thinking about how good/scary of an example this is of LLMs shaping (rather than reflecting) reality.
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.
🔗 linkblog: 'Doing their own thing': KY legislators hear about the state of AI use and guidance in schools
I can see the value in some state guidelines, but I suspect they would be more permissive than what I want for my classroom. I hope I’ll still have the chance to establish restrictions as I see fit.
🔗 linkblog: Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People
Gonna keep posting (almost) every article I read on NCII and generative AI.
🔗 linkblog: a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise
This line (from a study quoted in the article) stood out:
The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes.
Open sourcing generative AI solves some problems but creates others.
🔗 linkblog: AI 'Nudify' Websites Are Raking in Millions of Dollars
Oh, look, it’s two of my least favorite things about generative AI (NCII and raking in money without concern for ethics) IN THE SAME STORY.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge. Rather, I think it demonstrates (or should demonstrate) a commitment to the process of knowledge production, and LLMs cannot truly compete with humans there.
🔗 linkblog: A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Surely this is a reasonable price to pay for the Nazi-praising Grok to “discover new physics” within the next year, as Elon promised last night.
This kind of thing is why I hate “the genie is out of the bottle” arguments. I can’t help but hear them as “yes, people are going to create more CSAM, but all we can do is teach people to use these tools more responsibly.”
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that—Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there is a self-evident truth that an LLM can deliver in a straightforward way (that is, without all that mucking about behind the scenes).
on Grok, other LLMs, and epistemology
Yesterday, I blogged (en français) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that emphasizes (among other values) tension and conflict. Ellul explores this ethic—one of non-power—in a few different writings that feel like different drafts of the same thing, and so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?
🔗 linkblog: Grok praises Hitler, gives credit to Musk for removing 'woke filters'
Disgusting and deliberate.
🔗 linkblog: Microsoft, OpenAI, and a US Teachers’ Union Are Hatching a Plan to ‘Bring AI into the Classroom’
It feels like it’s Big Tech’s world and schools are just living in it.
Jacques Ellul against AI
For several months now, I’ve been interested in Jacques Ellul’s writings as a theoretical basis for understanding today’s techniques and technologies. In fact, I already wrote in February about generative artificial intelligence and how useful Ellul’s work seems for critics of AI, despite the fact that Ellul lived and wrote well before the era of AI as we know it today.
I’m currently reading his posthumous book Théologie et technique (quite slowly, I must admit; I had started the book in May before having to start over a few days ago), and I find that several of its passages seem useful for current debates about generative AI.
🔗 linkblog: Emily Bender: AI is a stochastic parrot with no capacity for reasoning
These are important reminders.
🔗 linkblog: OpenAI and Microsoft Bankroll New A.I. Training for Teachers
Don’t know what to say here except that I don’t like any of this. Reminded of two arguments from Ellul:
First, that an effective ethics of technology considers systematic effects, not “good” uses vs. “bad” uses,
Second, that “because it exists” is not sufficient justification for adopting a technology.
Anyway, here’s the gift link.
🔗 linkblog: ‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
More on why we need to talk epistemology when we talk generative AI:
Musk tweaking his AI model to be more aligned with right-wing edgelords was inevitable, but there’s a broader point to be made: each AI model is a black box that supposedly gives objective answers but in reality is shaped by its owners. As more people look to AI to learn about the world, the people who control how it’s trained and how it responds will control our prevailing narratives.