Below are posts associated with the “AI” tag.
🔗 linkblog: What the heck is wrong with our AI overlords?
I wrote recently about how my concerns about generative AI are probably more about the broader Ellulian system of technique than about the specifics of the technology. Here’s a passage from this article that makes a similar point better:
For some tasks, AI really is amazing; the tech behind things like machine-learning algorithms and large language models is ingenious, but the results always seem to be hawked the hardest by people and companies I don’t particularly like or trust. (Heck, Anthropic used one of my books to train its database, a sin for which it is now paying authors in court.) Give me the same sorts of tools but under my local control, governed by a Wikipedia-style nonprofit and trained on ethically sourced data, and I’d use them a lot more.
🔗 linkblog: The New York Times Got Played By A Telehealth Scam And Called It The Future Of AI
Masnick’s fierce critique is all the more notable given how public he has been about AI being good for some things, pushing back against grumpier folks (e.g., me).
Check this paragraph out, though:
What we actually have here is a marketing operation that used AI to automate the production of deceptive advertising at a scale and speed that would have been harder to achieve otherwise. Snake oil salesmen have existed forever. What AI gave Matthew Gallagher (and, I guess, his affiliates) was the ability to crank out fake doctors, fabricated testimonials, and deepfaked before-and-after photos faster than any human team could — and to do it cheap enough that a guy with $20,000 and no morals could build it from his house. That’s the actual AI story the Times should have written.
🔗 linkblog: DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America’s Nuclear Power Regulator
So much about this that I don’t like. The article makes a good case that there may be good reasons to ease up on nuclear power regulations, but the language of AI and VCs suggests to me that those good reasons aren’t the top priority.
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
Required reading, imo.
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
Oh look, they are capable of shame.
🔗 linkblog: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?
Pretty sure The Onion accelerated the web publication of this deliciously vicious skewering of Sam Altman after his making nice with the Pentagon last weekend.
🔗 linkblog: How OpenAI caved to the Pentagon on AI surveillance
An important read on OpenAI’s apparent selling out.
🔗 linkblog: Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’
It takes a lot to get me on Anthropic’s side in any disagreement, but Pete Hegseth is a lot, so I guess this tracks.
🔗 linkblog: Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Anthropic is weird, and their conscience is focused in some directions at the expense of others (Claude is trained on pirated copies of my research), but at least they have a conscience.
🔗 linkblog: What’s the Point of School When AI Can Do Your Homework?
The headline isn’t what I would have chosen, but there’s a lot worth reflecting on in here.
🔗 linkblog: The RAM shortage is coming for everything you care about
Love that I get to worry about deepfake nudes, scramble to change the way I assess, and now pay more for tech—if it’s even available.
🔗 linkblog: Big Tech Says Generative AI Will Save the Planet. It Doesn't Offer Much Proof
Important, helpful read.
🔗 linkblog: 'Students Are Being Treated Like Guinea Pigs:' Inside an AI-Powered Private School
So many horrifying details crammed into a single article. Grateful to be a 404 Media subscriber and angry at ed tech AI grift.
🔗 linkblog: ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Horrifying stories like this should be in our minds every time we think about AI.
digital labor and generative AI: what Stack Overflow CEO Prashanth Chandrasekhar gets wrong
This morning, while getting ready for the day, I spent some time catching up on podcasts, including Nilay Patel’s interview with Stack Overflow CEO Prashanth Chandrasekhar on a recent episode of Decoder (a podcast I’ve spent a lot more time listening to since it went ad-free for subscribers). I ditched the Stack Exchange network a year and a half ago over digital labor concerns—I was literally being prevented from deleting my own content from the site, which is bonkers—and I’m honestly not sure why I bookmarked the interview for listening a few days ago. I think it was more than a hate listen, though: For all of my own feelings about generative AI, I make an effort to be open-minded, and I was interested in the headline for the interview: “Stack Overflow users don’t trust AI. They’re using it anyway.”
📚 bookblog: Alex + Ada, Volume 1 (❤️❤️❤️🖤🖤)
I read this series ages ago; when I got it through an Image Humble Bundle, I decided it was worth a reread.
The art isn’t bad, and the basic ideas of the series are interesting, but it’s remarkable how much generative AI has kind of ruined what the series could be.
So much of this reads differently now: the premise of people seeking companionship in sycophantic robots, the secondary premise of people being convinced that there’s true intelligence behind the scenes just waiting to be unlocked, the idea of “robot rights” in a society that’s skeptical of artificial intelligence. What would have been pretty standard sci-fi four years ago now feels like an allegory for the most delusional parts of pro-AI advocacy.
🔗 linkblog: UK among first universities to collaborate with Microsoft on AI
This just makes me want to dig my heels in further.
🔗 linkblog: UK must be ‘partner-of-choice’ in using AI to advance Kentucky
Honestly trying to figure out whether I see Ellul everywhere because I’m excited about a new scholar I’ve discovered or because his ideas are so well suited to the current moment. “We can be a leader or we can be left behind” captures the opt-in determinism of Ellul’s technique so dang well.
Of course, how the heck am I going to keep expressing concern about AI (through an Ellulian lens or otherwise) if the university has already decided that we’re all getting on board?
🔗 linkblog: The Real Stakes, and Real Story, of Peter Thiel’s Antichrist Obsession
I’m completely serious when I say that Peter Thiel makes me want to get a seminary degree, because if we’re going to have theology about AI, I want it to be better than his.
🔗 linkblog: In Unhinged Speech, Pete Hegseth Says He's Tired of ‘Fat Troops,’ Says Military Needs to Go Full AI
Don’t know if this is better or worse than what I worried about.
🔗 linkblog: Peter Thiel: strict AI regulation will summon the Antichrist
I’ve wanted to get a seminary degree for a while, and I’ve often wondered if my seminary thesis would be on theology and technology, but I never expected to be in dueling theologies with Peter Thiel.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am with this, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have many fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response to this is, OpenAI has some moral responsibility for this sort of thing.
🔗 linkblog: The NSF just cut K-12 STEM Education research going forward
Appreciate Josh’s eye for detail here.
🔗 linkblog: Mason County official says data center could bring 400 jobs averaging $80,000; would require massive amounts of power and water
If this is so great for the community, why won’t the company even identify itself publicly?