Below are posts associated with the “generative AI” tag.
🔗 linkblog: 'Doing their own thing': KY legislators hear about the state of AI use and guidance in schools
I can see the value in some state guidelines, but I suspect they would be more permissive than what I want for my classroom. I hope I’ll still have the chance to establish restrictions as I see fit.
🔗 linkblog: Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People
Gonna keep posting (almost) every article I read on NCII and generative AI.
🔗 linkblog: a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise
This line (from a study quoted in the article) stood out:
The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes.
Open sourcing generative AI solves some problems but creates others.
🔗 linkblog: AI 'Nudify' Websites Are Raking in Millions of Dollars
Oh, look, it’s two of my least favorite things about generative AI (NCII and raking in money without concern for ethics) IN THE SAME STORY.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge. Rather, I think it demonstrates (or should demonstrate) a commitment to the process of knowledge production, and LLMs cannot truly compete with humans there.
🔗 linkblog: A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Surely this is a reasonable price to pay for the Nazi-praising Grok to “discover new physics” within the next year, as Elon promised last night.
This kind of thing is why I hate “the genie is out of the bottle” arguments. I can’t help but hear them as “yes, people are going to create more CSAM, but all we can do now is teach people to use these tools more responsibly.”
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that: Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there were a self-evident truth that an LLM could deliver in a straightforward way (that is, without all that mucking about behind the scenes).
on Grok, other LLMs, and epistemology
Yesterday, I blogged (in French) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that emphasizes (among other values) tension and conflict. Ellul explores this ethic, one of non-power, in a few different writings that feel like different drafts of the same thing, and so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?
🔗 linkblog: Grok praises Hitler, gives credit to Musk for removing 'woke filters'
Disgusting and deliberate.
🔗 linkblog: Microsoft, OpenAI, and a US Teachers’ Union Are Hatching a Plan to ‘Bring AI into the Classroom’
It feels like it’s Big Tech’s world and schools are just living in it.
Jacques Ellul contre l'IA
For several months now, I’ve been interested in Jacques Ellul’s writings as a theoretical basis for understanding today’s techniques and technologies. In fact, I already wrote in February about generative artificial intelligence and how useful Ellul’s work seems for critics of AI, despite the fact that Ellul lived and wrote well before the AI era as we know it today.
I’m currently reading his posthumous book Théologie et technique (quite slowly, I must admit; I started the book in May before having to start over a few days ago), and I find that it contains several passages that seem useful for current debates about generative AI.
🔗 linkblog: Emily Bender: L'IA est un perroquet stochastique sans faculté de raisonnement
Here are some important reminders.
🔗 linkblog: OpenAI and Microsoft Bankroll New A.I. Training for Teachers
Don’t know what to say here except that I don’t like any of this. Reminded of two arguments from Ellul:
First, that an effective ethics of technology considers systematic effects, not “good” uses vs. “bad” uses.
Second, that “because it exists” is not sufficient justification for adopting a technology.
Anyway, here’s the gift link.
🔗 linkblog: ‘Improved’ Grok criticizes Democrats and Hollywood’s ‘Jewish executives’
More on why we need to talk epistemology when we talk generative AI:
Musk tweaking his AI model to be more aligned with right-wing edgelords was inevitable, but there’s a broader point to be made: each AI model is a black box that supposedly gives objective answers but in reality is shaped by its owners. As more people look to AI to learn about the world, the people who control how it’s trained and how it responds will control our prevailing narratives.
🔗 linkblog: Google, de moteur de recherche à moteur de réponse
This is why we need to talk about the theory of knowledge when we talk about AI:
We’ve gone from a search engine to an answer engine. That is, the algorithms offer written-up versions based on the data they have collected from the Internet, then reformulated without you ever having visited the sites containing those elements of the answer to your query.
🔗 linkblog: Laid-off workers should use AI to manage their emotions, says Xbox exec
I can’t find the right words for how this story makes me feel.
🔗 linkblog: Kids are making deepfakes of each other, and laws aren’t keeping up – The Markup
This problem makes me so angry, and while I appreciate this article’s exploration of different policy solutions, they also feel overwhelming to me because so many of them come with problems of their own.
📚 bookblog: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (❤️❤️❤️❤️❤️)
This is a good book, with a powerful thesis and a great epilogue that ties things together. It isn’t perfect, but I think most of my quibbles are related to the subject matter and the genre. It’s hard to write a book about a contemporary subject of such importance, and I think it’s tricky to write a book that combines history with more of a critical take on the AI ecosystem.
🔗 linkblog: Reddit turns 20, and it’s going big on AI
Reddit is a really interesting example of digital labor issues as they relate to both social media and AI. I wonder how things will go over the next few years.
🔗 linkblog: Radio Télévision Suisse A Neuchâtel aussi, les téléphones portables seront interdits à l'école obligatoire
Well, I understand these concerns, but I’m not sure that bans like this are the right answer. That said, given that I’m more open to banning AI in schools, I need to develop my philosophy here a bit further.
🔗 linkblog: Facebook is starting to feed its Meta AI with private, unpublished photos
What. The. Hell. Is. This. Nonsense.
🔗 linkblog: Fanfiction writers battle AI, one scrape at a time
Fanfiction is one of the most compelling examples of the labor issues related to generative AI.