Below are posts associated with the “AI” tag.
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
Required reading, imo.
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
Oh look, they are capable of shame.
🔗 linkblog: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?
Pretty sure The Onion accelerated the web publication of this deliciously vicious skewering of Sam Altman after last weekend’s making nice with the Pentagon.
🔗 linkblog: How OpenAI caved to the Pentagon on AI surveillance
An important read on OpenAI's apparent selling out.
🔗 linkblog: Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’
It takes a lot to get me on Anthropic’s side in any disagreement, but Pete Hegseth is a lot, so I guess this tracks.
🔗 linkblog: Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Anthropic is weird, and their conscience is focused in some directions at the expense of others (Claude is trained on pirated copies of my research), but at least they have a conscience.
🔗 linkblog: What’s the Point of School When AI Can Do Your Homework?
The headline isn’t what I would have chosen, but there’s a lot worth reflecting on in here.
🔗 linkblog: The RAM shortage is coming for everything you care about
Love that I get to worry about deepfake nudes, scramble to change the way I assess, and now pay more for tech—if it’s even available.
🔗 linkblog: Big Tech Says Generative AI Will Save the Planet. It Doesn't Offer Much Proof
Important, helpful read.
🔗 linkblog: 'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
So many horrifying details crammed into a single article. Grateful to be a 404 Media subscriber and angry at ed tech AI grift.
🔗 linkblog: ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Horrifying stories like this should be in our minds every time we think about AI.
digital labor and generative AI: what Stack Overflow CEO Prashanth Chandrasekhar gets wrong
This morning, while getting ready for the day, I spent some time catching up on podcasts, including Nilay Patel’s interview of Stack Overflow CEO Prashanth Chandrasekhar on a recent episode of Decoder (a podcast I’ve spent a lot more time listening to since it went ad-free for subscribers). I ditched the Stack Exchange network a year and a half ago over digital labor concerns—I was literally being prevented from deleting my own content from the site, which is bonkers—and I’m honestly not sure why I bookmarked the interview for listening a few days ago. I think it was more than a hate listen, though: For all of my own feelings about generative AI, I make an effort to be open minded, and I was interested in the headline for the interview: “Stack Overflow users don’t trust AI. They’re using it anyway.”
📚 bookblog: Alex + Ada, Volume 1 (❤️❤️❤️🖤🖤)
I read this series ages ago; when I got it through an Image Humble Bundle, I decided it was worth a reread.
The art isn’t bad, and the basic ideas of the series are interesting, but it’s remarkable how much generative AI has kind of ruined what the series could be.
So much of this reads differently now: the premise of people seeking companionship in sycophantic robots, the secondary premise of people being convinced that there’s true intelligence behind the scenes just waiting to be unlocked, the idea of “robot rights” in a society that’s skeptical of artificial intelligence. What would have been pretty standard sci-fi four years ago now hits differently, feeling like an allegory for the most delusional parts of pro-AI advocacy.
🔗 linkblog: UK among first universities to collaborate with Microsoft on AI
This just makes me want to dig my heels in further.
🔗 linkblog: UK must be ‘partner-of-choice’ in using AI to advance Kentucky
Honestly trying to figure out whether I see Ellul everywhere because I’m excited about a new scholar I’ve discovered or because his ideas are so well suited for the current moment. “We can be a leader or we can be left behind” captures the opt-in determinism of Ellul’s technique so dang well.
Of course, how the heck am I going to keep expressing concern about AI (through an Ellulian lens or otherwise) if the university has already decided that we’re all getting on board?
🔗 linkblog: The Real Stakes, and Real Story, of Peter Thiel’s Antichrist Obsession
I’m completely serious when I say that Peter Thiel makes me want to get a seminary degree, because if we’re going to have theology about AI, I want it to be better than his.
🔗 linkblog: In Unhinged Speech, Pete Hegseth Says He's Tired of ‘Fat Troops,’ Says Military Needs to Go Full AI
Don’t know if this is better or worse than what I worried about.
🔗 linkblog: Peter Thiel: strict AI regulation will summon the Antichrist
I’ve wanted to get a seminary degree for a while, and I’ve often wondered if my seminary thesis would be on theology and technology, but I never expected to be in dueling theologies with Peter Thiel.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am with this, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have many fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response to this is, OpenAI has some moral responsibility for this sort of thing.
🔗 linkblog: The NSF just cut K-12 STEM Education research going forward
Appreciate Josh’s eye for detail here.
🔗 linkblog: Mason County official says data center could bring 400 jobs averaging $80,000; would require massive amounts of power and water
If this is so great for the community, why won’t the company even identify itself publicly?
🔗 linkblog: Kentucky could be on the eve of a data center boom. But in Mason County details are sketchy. • Kentucky Lantern
Helpful reminder that data center problems are not just hypothetical—they’re potentially local.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I’ve been complaining for years that computer science ed stops at the nobility of those goals without asking about the deeper motivations behind those initiatives. So it is with AI: A more diverse field more available to all is better than what we have, but we also have to ask whether AI education is actually a social good.
🔗 linkblog: UK government suggests deleting files to save water
I genuinely think it’s useful to remember that non-AI data centers are also contributing to the climate crisis, but that doesn’t let AI off the hook. It’s like saying “sure, we’re spending far beyond our means, but have you considered that we’re already in debt?”