Below are posts associated with the “generative AI” tag.
🔗 linkblog: To teach in the time of ChatGPT is to know pain
Really appreciate this essay. It puts things nicely and has the kind of personal investment that makes it relatable.
🔗 linkblog: Police corporal created AI porn from driver's license pics
So gross. I don’t think we can talk about generative AI without talking about this.
🔗 linkblog: What the heck is wrong with our AI overlords?
I wrote recently about how my concerns about generative AI are probably more about the broader Ellulian system of technique than the specifics of the technology. Here’s a passage from this article that makes a similar point better:
For some tasks, AI really is amazing; the tech behind things like machine-learning algorithms and large language models is ingenious, but the results always seem to be hawked the hardest by people and companies I don’t particularly like or trust. (Heck, Anthropic used one of my books to train its database, a sin for which it is now paying authors in court.) Give me the same sorts of tools but under my local control, governed by a Wikipedia-style nonprofit and trained on ethically sourced data, and I’d use them a lot more.
🔗 linkblog: The New York Times Got Played By A Telehealth Scam And Called It The Future Of AI
Masnick’s fierce critique is all the more notable given how openly he maintains that AI is good for some things, pushing back against grumpier folks (e.g., me).
Check this paragraph out, though:
What we actually have here is a marketing operation that used AI to automate the production of deceptive advertising at a scale and speed that would have been harder to achieve otherwise. Snake oil salesmen have existed forever. What AI gave Matthew Gallagher (and, I guess, his affiliates) was the ability to crank out fake doctors, fabricated testimonials, and deepfaked before-and-after photos faster than any human team could — and to do it cheap enough that a guy with $20,000 and no morals could build it from his house. That’s the actual AI story the Times should have written.
🔗 linkblog: DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America’s Nuclear Power Regulator
So much about this that I don’t like. The article makes a good case that there may be good reasons to ease up on nuclear power regulations, but the language of AI and VCs suggests to me that those good reasons aren’t the top priority.
🔗 linkblog: UK hosts literacy training in AI to teach attendees of its potential
Two grumps (and, to be clear, I’m grumpy at my employer, not the student reporter):
the framing here is, as usual, “how to use” rather than “should we use”
“misinformation” is centered as the (implicitly sole) problem with generative AI, not digital labor or any of the deeper issues
🔗 linkblog: Pluralistic: It’s extremely good that Claude’s source-code leaked (02 Apr 2026) – Pluralistic: Daily links from Cory Doctorow
Didn’t expect from the headline that this would turn into an essay on copyright, but I’m glad it did:
Expanding copyright will gain little for creative workers, except for a new reason to be angry about how our audiences experience our work. Expanding labor rights will gain much, for every worker, including our audiences. It’s an idea that our bosses – and AI hucksters – hate with every fiber of their beings.
🔗 linkblog: Sam Altman: ‘If I Don’t End The World, Someone Far More Dangerous Will’
The depressing thing is that this isn’t that far off from how OpenAI and Anthropic think.
🔗 linkblog: I Asked ChatGPT What WIRED’s Reviewers Recommend—Its Answers Were All Wrong
Interesting article here. I don’t read WIRED (or The Verge, or…) for the product reviews, but it’s not hard to see how generative AI can create issues for them in that way.
🔗 linkblog: Webtoon is adding AI localization tools to its comics platform
I read a fair amount of comics in translation, and even when the translation is done with a skilled human, I can always tell that there’s something off about it. Not sure I trust an LLM to fix that problem.
Also, I wish that Webtoon weren’t platformizing webcomics and that we could go back to the models we had in the 2000s and 2010s.
🔗 linkblog: Jessica Foster, la citoyenne-soldate 'parfaite' du camp MAGA qui n'existe pas | RTS
A fascinating story, but a troubling one.
what I dislike about AI isn't the tech (and why I like Ellulian 'technique')
Last Thursday, I listened to a recent episode of The Vergecast during my morning bike commute. The episode featured Paul Ford talking about his recent experience with Claude Code, and I was genuinely surprised to find some of his comments resonating with me. It helped that Ford wasn’t uncritical about AI (though certainly not as critical as I would have been), but some of it was just that I recognized the thrill he was describing of using tools and resources to learn how to solve a problem. In fact, I found that thrill so contagious that a passing comment he made got me to spend some time once I got to the office converting my Twitter archive into a CSV that I could finally import into the Day One journaling app that I use.
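For the curious, the conversion I did was roughly along these lines. This is a minimal sketch, not the exact script I used: Twitter archives (at least in the exports I’ve seen) store tweets in a JavaScript file that assigns a JSON array to a variable, so the trick is stripping the assignment prefix and parsing the rest. The file name (`tweets.js`), the `window.YTD.tweets.part0 =` prefix, and the `created_at`/`full_text` field names are assumptions about the archive format that may differ in your export.

```python
import csv
import io
import json

def tweets_js_to_csv(js_text: str) -> str:
    """Convert the contents of a Twitter archive's tweets.js into CSV text.

    Assumes the archive format where the file begins with a JavaScript
    assignment (e.g. "window.YTD.tweets.part0 = ") followed by a JSON array.
    """
    # Everything after the first '=' is a JSON array.
    json_part = js_text.split("=", 1)[1]
    records = json.loads(json_part)

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["created_at", "text"])
    for record in records:
        # In the archives I've seen, each entry is wrapped in a "tweet" key.
        tweet = record.get("tweet", record)
        writer.writerow([tweet.get("created_at", ""), tweet.get("full_text", "")])
    return out.getvalue()

# Tiny fabricated example, not real archive data:
sample = (
    'window.YTD.tweets.part0 = '
    '[{"tweet": {"created_at": "Mon May 01 12:00:00 +0000 2023", '
    '"full_text": "hello"}}]'
)
print(tweets_js_to_csv(sample))
```

From there, Day One can import the CSV (or you can reshape the rows into whatever format its importer expects).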
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
Required reading, imo.
🔗 linkblog: Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature
Oh, okay, maybe not shame so much as butt-covering.
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
Oh look, they are capable of shame.
🔗 linkblog: Grammarly will keep using authors’ identities without permission unless they opt out
Opt out is a terrible way of doing this. I’m so angry that I didn’t even finish the article before posting.
🔗 linkblog: Grammarly is using our identities without permission
Wild escalation of digital labor issues in generative AI.
🔗 linkblog: Anthropic’s Statement To The ‘Department Of War’ Reads Like A Hostage Note Written In Business Casual
Good observations here. My respect for Anthropic was solely based on their seeming willingness to stand up for something, because otherwise, I have a lot of issues with them. This groveling makes that respect disappear.
🔗 linkblog: OpenAI’s ‘Red Lines’ Are Written In The NSA’s Dictionary—Where Words Mean What The NSA Wants Them To Mean
Masnick—who is far keener on the idea of generative AI than I will ever be—is unsparing in his critique of OpenAI here, and it’s worth a read.
🔗 linkblog: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?
Pretty sure The Onion accelerated the web publication of this deliciously vicious skewering of Sam Altman after last weekend’s making nice with the Pentagon.
🔗 linkblog: How OpenAI caved to the Pentagon on AI surveillance
An important read on OpenAI’s apparent selling out.
🔗 linkblog: Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’
It takes a lot to get me on Anthropic’s side in any disagreement, but Pete Hegseth is a lot, so I guess this tracks.
🔗 linkblog: Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Anthropic is weird, and their conscience is focused in some directions at the expense of others (Claude is trained on pirated copies of my research), but at least they have a conscience.