What the heck is wrong with our AI overlords?
date linked: 7 April 2026
source: link to article, from arstechnica.com
I wrote recently about how my concerns about generative AI are probably less about the specifics of the technology than about the broader Ellulian system of technique. Here’s a passage from this article that makes a similar point better:
For some tasks, AI really is amazing; the tech behind things like machine-learning algorithms and large language models is ingenious, but the results always seem to be hawked the hardest by people and companies I don’t particularly like or trust. (Heck, Anthropic used one of my books to train its database, a sin for which it is now paying authors in court.) Give me the same sorts of tools but under my local control, governed by a Wikipedia-style nonprofit and trained on ethically sourced data, and I’d use them a lot more.
similar posts:
🔗 linkblog: Anyone Else Have Those Weird Dreams Where Sobbing Future Generations Beg You To Change Course?
📚 bookblog: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (❤️❤️❤️❤️❤️)
more on the Liahona, efficiency, and technique
🔗 linkblog: Sam Altman: ‘If I Don’t End The World, Someone Far More Dangerous Will’
🔗 linkblog: How OpenAI caved to the Pentagon on AI surveillance