Below are posts associated with the “generative AI” tag.
🔗 linkblog: What’s the Point of School When AI Can Do Your Homework?
The headline isn’t what I would have chosen, but there’s a lot worth reflecting on in here.
🔗 linkblog: The RAM shortage is coming for everything you care about
Love that I get to worry about deepfake nudes, scramble to change the way I assess, and now pay more for tech—if it’s even available.
🔗 linkblog: Big Tech Says Generative AI Will Save the Planet. It Doesn't Offer Much Proof
Important, helpful read.
🔗 linkblog: 'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
So many horrifying details crammed into a single article. Grateful to be a 404 Media subscriber and angry at ed tech AI grift.
🔗 linkblog: OpenAI Introduces Premium Video Generator For White House Advisors Manipulating Trump
Excellent jokes to distract from the real horror.
🔗 linkblog: ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Horrifying stories like this should be in our minds every time we think about AI.
🔗 linkblog: Deepfake ‘Nudify’ Technology Is Getting Darker—and More Dangerous
Look, I’m open to the argument that there are legitimate, good uses of generative AI, but I think anyone making that argument needs to address stuff like this.
🔗 linkblog: New AI-Generated Content Derived from Your Work Posted on Academia.Edu
I guess I should be reading this for the jokes, but I hadn’t realized Academia.edu had done this, and I’m so angry at the inspiration for the jokes that I haven’t made it any further.
🔗 linkblog: Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.
A few thoughts:
First, it is almost comically mean to take the results of a project cataloging AI tells and use them to get LLMs to stop sounding like that. Of all the digital labor exploitations associated with AI, this might be the pettiest.
Second, AI detection is hard, and for all my concerns about AI, I think this is another good example of why policing its use can do more harm than good. I don’t blame the Wikipedia community for doing this project, but I would never recommend this approach in a classroom.
Ellul, nuclear weapons, and generative AI
One of the most interesting recurring themes in Jacques Ellul’s writing is the contrast he draws between reality (or facts) and truth. As Ellul distinguishes them, facts are what is—and, implicitly, what must be conformed to—whereas truth is what ought to be. Ellul’s The Humiliation of the Word explores this distinction at length, but it crops up in plenty of his other writing. In fact, I’m currently reading his Présence au monde moderne (or rereading it, depending on whether reading the original French counts as a reread after reading the English translation last year), and I’m delighted to see that he makes this distinction as early as this 1948 book.
🔗 linkblog: Grok Is Generating Sexual Content Far More Graphic Than What's on X
Pair this with Emanuel Maiberg’s article I linked to earlier, and there’s a lot to think about.
I sometimes wonder if base Grok is less wild than integrated-with-Twitter Grok, but this is at least one way in which that’s not true.
🔗 linkblog: Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Oof, this line:
what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, is the cost of doing business with AI image generators
🔗 linkblog: Pluralistic: Writing vs AI (07 Jan 2026) – Pluralistic: Daily links from Cory Doctorow
I have largely abstained from the “AI misses the point of writing” discourse, but Cory knocks it out of the park here.
🔗 linkblog: Grok Is Pushing AI ‘Undressing’ Mainstream
Bookmarking all these articles on Grok for rage fuel.
🔗 linkblog: No, Grok can’t really “apologize” for posting non-consensual sexual images
Bookmarking because this is an important point.
digital labor and generative AI: what Stack Overflow CEO Prashanth Chandrasekhar gets wrong
This morning, while getting ready for the day, I spent some time catching up on podcasts, including Nilay Patel’s interview with Stack Overflow CEO Prashanth Chandrasekhar on a recent episode of Decoder (a podcast I’ve spent a lot more time listening to since it went ad-free for subscribers). I ditched the Stack Exchange network a year and a half ago over digital labor concerns—I was literally being prevented from deleting my own content from the site, which is bonkers—and I’m honestly not sure why I bookmarked the interview a few days ago. I think it was more than a hate listen, though: for all of my own feelings about generative AI, I try to stay open-minded, and I was intrigued by the headline for the interview: “Stack Overflow users don’t trust AI. They’re using it anyway.”
🔗 linkblog: Disney wants to drag you into the slop
I missed the detail about Disney+ using some of the Sora output, and that makes this whole thing even more about labor exploitation.
🔗 linkblog: I Am Time Magazine’s Person of the Year
I disagree with the copyright framing here (it’s a labor issue), but otherwise, I think this is a good take.