Below are posts associated with the “generative AI” tag.
🔗 linkblog: Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.
A few thoughts:
First, it is almost comically mean to use the results of a project collecting AI tells to get LLMs to not sound like that. Like, of all the digital labor exploitations of AI, this might be the pettiest.
Second, AI detection is hard, and for all my concerns with AI, I think this is another good example of why policing its use can do more harm than good. I don’t blame the Wikipedia community for doing this project, but I would never recommend this approach in a classroom.
Ellul, nuclear weapons, and generative AI
One of the most interesting recurring themes in Jacques Ellul’s writing is one that contrasts reality (or facts) with truth. As Ellul distinguishes them, facts are what are and—implicitly—what must be conformed to, whereas truth is what ought to be. Ellul’s The Humiliation of the Word explores this distinction at length, but it crops up in plenty of his other writing. In fact, I’m currently reading his Présence au monde moderne (or rereading it, depending on whether reading the original French counts as a reread after reading the English translation last year), and I’m delighted to see that he makes this distinction as early as this 1948 book.
🔗 linkblog: Grok Is Generating Sexual Content Far More Graphic Than What's on X
Pair this with Emanuel Maiberg’s article I linked to earlier, and there’s a lot to think about.
I sometimes wonder if base Grok is less wild than integrated-with-Twitter Grok, but this is at least one way in which that’s not true.
🔗 linkblog: Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Oof, this line:
what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, is the cost of doing business with AI image generators
🔗 linkblog: Pluralistic: Writing vs AI (07 Jan 2026) – Pluralistic: Daily links from Cory Doctorow
I have largely abstained from the “AI misses the point of writing” discourse, but Cory knocks it out of the park here.
🔗 linkblog: Grok Is Pushing AI ‘Undressing’ Mainstream
Bookmarking all these articles on Grok for rage fuel.
🔗 linkblog: No, Grok can’t really “apologize” for posting non-consensual sexual images
Bookmarking because this is an important point.
digital labor and generative AI: what Stack Overflow CEO Prashanth Chandrasekhar gets wrong
This morning, while getting ready for the day, I spent some time catching up on podcasts, including Nilay Patel’s interview of Stack Overflow CEO Prashanth Chandrasekhar on a recent episode of Decoder (a podcast I’ve spent a lot more time listening to since it went ad free for subscribers). I ditched the Stack Exchange network a year and a half ago over digital labor concerns—I was literally being prevented from deleting my own content from the site, which is bonkers—and I’m honestly not sure why I bookmarked the interview for listening a few days ago. I think it was more than a hate listen, though: For all of my own feelings about generative AI, I make an effort to be open minded, and I was interested in the headline for the interview: “Stack Overflow users don’t trust AI. They’re using it anyway.”
🔗 linkblog: Disney wants to drag you into the slop
I missed the detail about Disney+ using some of the Sora output, and that makes this whole thing even more about labor exploitation.
🔗 linkblog: I Am Time Magazine’s Person of the Year
I disagree with the copyright framing here (it’s a labor issue), but otherwise, I think this is a good take.
🔗 linkblog: Huge Trove of Nude Images Leaked by AI Image Generator Startup’s Exposed Database
Are we willing to pay this price in order to have some neat image generation tools? (I’m not.)
🔗 linkblog: AI Slop Is Ruining Reddit for Everyone
This sucks—and even more so because the Reddit company is willing to play nice with AI to make a pretty penny.
🔗 linkblog: Elon Musk's Grok AI Is Doxxing Home Addresses of Everyday People
Surely Elon “assassination coordinates” Musk is outraged that his own AI would do this. Right?
📚 bookblog: Alex + Ada, Volume 2 (❤️❤️❤️❤️🖤)
Okay, once I got over the ways that generative AI has ruined the premise, it’s not a terrible story. It’s not deep or particularly original, but I enjoyed it enough to be more generous this time around.
📚 bookblog: Alex + Ada, Volume 1 (❤️❤️❤️🖤🖤)
I read this series ages ago; when I got it through an Image Humble Bundle, I decided it was worth a reread.
The art isn’t bad, and the basic ideas of the series are interesting, but it’s remarkable how much generative AI has kind of ruined what the series could be.
So much of this reads differently now: the premise of people seeking companionship in sycophantic robots, the secondary premise of people being convinced that there’s true intelligence behind the scenes just waiting to be unlocked, the idea of “robot rights” in a society that’s skeptical of artificial intelligence. What would have been pretty standard sci-fi four years ago now hits differently, feeling like an allegory for the most delusional parts of pro-AI advocacy.
🔗 linkblog: Epic CEO Tim Sweeney says Steam should drop its ‘Made with AI’ tags
If one idea from Ellul has made the most impact on me, it’s his fierce criticism of attitudes of inevitability.
🔗 linkblog: OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
I genuinely don’t know what legal liability for generative AI products should look like, but arguing that the onus was on the kid and his family because of the TOS strikes me as incredibly shitty—not to mention falling back on “look, we have a mission to benefit humanity by building AI, have you taken that into account?”
🔗 linkblog: UK among first universities to collaborate with Microsoft on AI
This just makes me want to dig my heels in further.