Below are posts associated with the “generative AI” tag.
what I dislike about AI isn't the tech (and why I like Ellulian 'technique')
Last Thursday, I listened to a recent episode of The Vergecast during my morning bike commute. The episode featured Paul Ford talking about his recent experience with Claude Code, and I was genuinely surprised to find some of his comments resonating with me. It helped that Ford wasn’t uncritical about AI (though certainly not as critical as I would have been), but part of it was that I recognized the thrill he was describing of using tools and resources to learn how to solve a problem. In fact, I found that thrill so contagious that a passing comment he made got me to spend some time once I got to the office converting my Twitter archive into a CSV so that I could finally import it into the Day One journaling app that I use.
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
Required reading, imo.
🔗 linkblog: Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature
Oh, okay, maybe not shame so much as butt-covering.
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
Oh look, they are capable of shame.
🔗 linkblog: Grammarly will keep using authors’ identities without permission unless they opt out
Opt out is a terrible way of doing this. I’m so angry that I didn’t even finish the article before posting.
🔗 linkblog: Grammarly is using our identities without permission
Wild escalation of digital labor issues in generative AI.
🔗 linkblog: Anthropic’s Statement To The ‘Department Of War’ Reads Like A Hostage Note Written In Business Casual
Good observations here. My respect for Anthropic was based solely on their seeming willingness to stand up for something, because otherwise, I have a lot of issues with them. This groveling makes that respect disappear.
🔗 linkblog: OpenAI’s ‘Red Lines’ Are Written In The NSA’s Dictionary—Where Words Mean What The NSA Wants Them To Mean
Masnick—who is far keener on the idea of generative AI than I will ever be—is unsparing in his critique of OpenAI here, and it’s worth a read.
🔗 linkblog: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?
Pretty sure The Onion accelerated the web publication of this deliciously vicious skewering of Sam Altman after last weekend’s making nice with the Pentagon.
🔗 linkblog: How OpenAI caved to the Pentagon on AI surveillance
An important read on OpenAI’s apparent selling out.
🔗 linkblog: Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’
It takes a lot to get me on Anthropic’s side in any disagreement, but Pete Hegseth is a lot, so I guess this tracks.
🔗 linkblog: Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Anthropic is weird, and their conscience is focused in some directions at the expense of others (Claude is trained on pirated copies of my research), but at least they have a conscience.
🔗 linkblog: What’s the Point of School When AI Can Do Your Homework?
The headline isn’t what I would have chosen, but there’s a lot worth reflecting on in here.
🔗 linkblog: The RAM shortage is coming for everything you care about
Love that I get to worry about deepfake nudes, scramble to change the way I assess, and now pay more for tech—if it’s even available.
🔗 linkblog: Big Tech Says Generative AI Will Save the Planet. It Doesn't Offer Much Proof
Important, helpful read.
🔗 linkblog: 'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
So many horrifying details crammed into a single article. Grateful to be a 404 Media subscriber and angry at ed tech AI grift.
🔗 linkblog: OpenAI Introduces Premium Video Generator For White House Advisors Manipulating Trump
Excellent jokes to distract from the real horror.
🔗 linkblog: ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Horrifying stories like this should be in our minds every time we think about AI.
🔗 linkblog: Deepfake ‘Nudify’ Technology Is Getting Darker—and More Dangerous
Look, I’m open to the argument that there are legitimate, good uses of generative AI, but I think anyone making that argument needs to address stuff like this.
🔗 linkblog: New AI-Generated Content Derived from Your Work Posted on Academia.Edu
I guess I should be reading this for the jokes, but I hadn’t realized Academia.edu had done this, and I’m so angry at the inspiration for the jokes that I haven’t made it any further.
🔗 linkblog: Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.
A few thoughts:
First, it is almost comically mean to take the results of a project collecting AI tells and use them to get LLMs to not sound like that. Of all the digital labor exploitations of AI, this might be the pettiest.
Second, AI detection is hard, and for all my concerns with AI, I think this is another good example of why policing its use can do more harm than good. I don’t blame the Wikipedia community for doing this project, but I would never recommend this approach in a classroom.