Below are posts associated with the “generative AI” tag.
🔗 linkblog: University Professors Disturbed to Find Their Lectures Chopped Up and Turned Into AI Slop
Well, this is certainly… something.
📚 bookblog: Old Media (❤️❤️❤️🖤🖤)
Eh, it felt like this was a continuation of some of my least favorite parts of Autonomous. I am also struggling to enjoy “robots’ rights” stories in our LLM era, which is dumb, but that’s how it is.
🔗 linkblog: Pluralistic: A Pascal’s Wager for AI Doomers (16 Apr 2026) – Pluralistic: Daily links from Cory Doctorow
I’ve felt for a long time that “what if AI gets sentient and does irreparable harm” is 100% the wrong way of framing things, and Doctorow knocks that argument out of the park here.
🔗 linkblog: Ronan Farrow on Sam Altman’s “unconstrained” relationship with the truth
This was an enlightening listen on my way into work this morning.
hallucination in the LLM-based Kagi Translate
You don’t have to spend long on my blog to figure out that I default to being grumpy about generative AI, but if I’ve made one exception to that rule, it’s for Kagi Translate, which I’ve found to be a genuinely helpful machine translation tool—and to have some neat features that I haven’t found in its Google or DeepL equivalents.
It took me aback a little bit tonight, then, when Kagi Translate straight up hallucinated something on me, in a way that I imagine wouldn’t be out of place for a more mainstream LLM (which I’ve never really used). Earlier today, while working on a paper for an upcoming conference, I was consulting a Jacques Ellul book I was about to cite, and I wanted to make sure that “genetic engineering” would be an accurate translation for his phrase « intervention génétique » (which could obviously also be rendered “genetic intervention,” but I’ve never heard that phrase in my life, so I’d prefer to go with a more well-known phrase if it’s accurate).
🔗 linkblog: The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought
Audrey Watters once compellingly argued that metal detectors are edtech. I think we now have a responsibility to treat AI nudifier apps as edtech, too.
🔗 linkblog: To teach in the time of ChatGPT is to know pain
Really appreciate this essay. It puts things nicely and has the kind of personal investment that makes it relatable.
🔗 linkblog: Police corporal created AI porn from driver's license pics
So gross. I don’t think we can talk about generative AI without talking about this.
🔗 linkblog: What the heck is wrong with our AI overlords?
I wrote recently about how my concerns about generative AI are probably more about the broader Ellulian system of technique than the specifics of the technology. Here’s a passage from this article that makes a similar point better:
For some tasks, AI really is amazing; the tech behind things like machine-learning algorithms and large language models is ingenious, but the results always seem to be hawked the hardest by people and companies I don’t particularly like or trust. (Heck, Anthropic used one of my books to train its database, a sin for which it is now paying authors in court.) Give me the same sorts of tools but under my local control, governed by a Wikipedia-style nonprofit and trained on ethically sourced data, and I’d use them a lot more.
🔗 linkblog: The New York Times Got Played By A Telehealth Scam And Called It The Future Of AI
Masnick’s fierce critique is all the more notable given how public he is about his view that AI is good for some things, pushing back against grumpier folks (e.g., me).
Check this paragraph out, though:
What we actually have here is a marketing operation that used AI to automate the production of deceptive advertising at a scale and speed that would have been harder to achieve otherwise. Snake oil salesmen have existed forever. What AI gave Matthew Gallagher (and, I guess, his affiliates) was the ability to crank out fake doctors, fabricated testimonials, and deepfaked before-and-after photos faster than any human team could — and to do it cheap enough that a guy with $20,000 and no morals could build it from his house. That’s the actual AI story the Times should have written.
🔗 linkblog: DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America’s Nuclear Power Regulator
So much about this that I don’t like. The article makes a good case that there may be good reasons to ease up on nuclear power regulations, but the language of AI and VCs suggests to me that those good reasons aren’t the top priority.
🔗 linkblog: UK hosts literacy training in AI to teach attendees of its potential
Two grumps (and, to be clear, I’m grumpy at my employer, not the student reporter):
the framing here is, as usual, “how to use” rather than “should we use”
“misinformation” is centered as the (implicitly sole) problem with generative AI, not digital labor or any of the deeper issues
🔗 linkblog: Pluralistic: It’s extremely good that Claude’s source-code leaked (02 Apr 2026) – Pluralistic: Daily links from Cory Doctorow
Didn’t expect from the headline that this would turn into an essay on copyright, but I’m glad it did:
Expanding copyright will gain little for creative workers, except for a new reason to be angry about how our audiences experience our work. Expanding labor rights will gain much, for every worker, including our audiences. It’s an idea that our bosses – and AI hucksters – hate with every fiber of their beings.
🔗 linkblog: Sam Altman: ‘If I Don’t End The World, Someone Far More Dangerous Will’
The depressing thing is that this isn’t that far off from how OpenAI and Anthropic think.
🔗 linkblog: I Asked ChatGPT What WIRED’s Reviewers Recommend—Its Answers Were All Wrong
Interesting article here. I don’t read WIRED (or The Verge, or…) for the product reviews, but it’s not hard to see how generative AI can create issues for them in that way.
🔗 linkblog: Webtoon is adding AI localization tools to its comics platform
I read a fair amount of comics in translation, and even when the translation is done by a skilled human, I can always tell that there’s something off about it. Not sure I trust an LLM to fix that problem.
Also, I wish that Webtoon weren’t platformizing webcomics and that we could go back to the models we had in the 2000s and 2010s.
🔗 linkblog: Jessica Foster, la citoyenne-soldate 'parfaite' du camp MAGA qui n'existe pas | RTS
A fascinating story, but a worrying one.
what I dislike about AI isn't the tech (and why I like Ellulian 'technique')
Last Thursday, I listened to a recent episode of The Vergecast during my morning bike commute. The episode featured Paul Ford talking about his recent experience with Claude Code, and I was genuinely surprised to find some of his comments resonating with me. It helped that Ford wasn’t uncritical about AI (though certainly not as critical as I would have been), but some of it was just that I recognized some of the thrill that he was describing of using tools and resources to learn how to solve a problem. In fact, I found that thrill so contagious that a passing comment he made got me to spend some time once I got to the office converting my Twitter archive into a CSV that I could finally import into the Day One journaling app that I use.
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
Required reading, imo.
🔗 linkblog: Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature
Oh, okay, maybe not shame so much as butt-covering.
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
Oh look, they are capable of shame.
🔗 linkblog: Grammarly will keep using authors’ identities without permission unless they opt out
Opt-out is a terrible way of doing this. I’m so angry that I didn’t even finish the article before posting.
🔗 linkblog: Grammarly is using our identities without permission
Wild escalation of digital labor issues in generative AI.