Below are posts associated with the “generative AI” tag.
🔗 linkblog: Grok's 'Spicy' Mode Makes NSFW Celebrity Deepfakes of Women (But Not Men)
Unsurprising but disappointing.
🔗 linkblog: AI industry horrified to face largest copyright class action ever certified
Again, I’m not sure copyright is the way to go in fighting immoral generative AI companies (the fact that the ALA and EFF are on Anthropic’s side seems important to me), but “we have to be able to do this to be successful” still strikes me as such a hollow, self-serving argument.
🔗 linkblog: New AI model: ChatGPT-5: “It’s like talking to a PhD-level expert”
In my view, “PhD-level” expertise is mostly a matter of the process of knowing rather than the knowledge itself, and generative AI doesn’t respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these. However, I’m reminded of “open source” Android, which is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. gift link
🔗 linkblog: Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
I’m glad that someone is doing this white hat work, but I hate that we live in a world where someone has to.
🔗 linkblog: Spanish teenager accused of creating nude images of his classmates with artificial intelligence and selling them
What a rotten world awaits our children.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: High school student accused of cheating with AI on the philosophy baccalauréat finally receives her diploma
I don’t like the presence of AI in schools at all, but I also find the wrongful penalization of students troubling.
🔗 linkblog: ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
You know, I skipped over this story when it came out in a couple of other outlets, but seeing the headline again here got me thinking about how good/scary of an example this is of LLMs shaping (rather than reflecting) reality.
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.
🔗 linkblog: 'Doing their own thing': KY legislators hear about the state of AI use and guidance in schools
I can see the value in some state guidelines, but I suspect they would be more permissive than what I want for my classroom. I hope I’ll still have the chance to establish restrictions as I see fit.
🔗 linkblog: Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People
Gonna keep posting (almost) every article I read on NCII and generative AI.
🔗 linkblog: a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise
This line (from a study quoted in the article) stood out:
The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes.
Open sourcing generative AI solves some problems but creates others.
🔗 linkblog: AI 'Nudify' Websites Are Raking in Millions of Dollars
Oh, look, it’s two of my least favorite things about generative AI (NCII and raking in money without concern for ethics) IN THE SAME STORY.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge. Rather, I think it demonstrates (or should demonstrate) a commitment to the process of knowledge production, and LLMs cannot truly compete with humans there.
🔗 linkblog: A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Surely this is a reasonable price to pay for the Nazi-praising Grok to “discover new physics” within the next year, as Elon promised last night.
This kind of thing is why I hate “the genie is out of the bottle” arguments. I can’t help but hear them as “yes, people are going to create more CSAM, but all we can do in response is teach people to use these tools more responsibly.”
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that—Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there is a self-evident truth that an LLM can deliver in a straightforward way (that is, without all that mucking about behind the scenes).
on Grok, other LLMs, and epistemology
Yesterday, I blogged (en français) on Jacques Ellul’s emphasis on the need for a technology-responsive ethic that emphasizes (among other values) tension and conflict. Ellul explores this ethic—one of non-power—in a few different writings that feel like different drafts of the same thing, and so I’ve seen that emphasis come up a few times as I’ve tried reading his work. Every time, it surprises me a little bit. Why, in articulating an ethical framework, would you emphasize tension and conflict?