Below are posts associated with the “generative AI” tag.
🔗 linkblog: Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
Booooooo. That Wikipedia is being mined by AI scrapers and negatively affected by AI search is such a perfect encapsulation of my concerns about generative AI.
🔗 linkblog: Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet
Some good (scary) observations in here—not least speculation on what xAI’s version of Sora would look like.
🔗 linkblog: What the Arrival of A.I. Video Generators Like Sora Means for Us
Strong Ellul vibes in this passage:
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it. Society as a whole will have to treat videos with as much skepticism as people already do words.
Unclear, though, whether Ellul would be cool with increased skepticism of the image or angry at the technology causing it.
🔗 linkblog: OpenAI wasn’t expecting Sora’s copyright drama
Something feels off here. An AI CEO who claims they genuinely didn’t anticipate copyright and deepfake concerns is either dumb or playing dumb. I can’t help but suspect the latter, which is arguably worse, since it suggests an effort to shift the discourse before complaints come in.
🔗 linkblog: Dead celebrities are apparently fair game for Sora 2 video manipulation
Just bookmarking everything I read on Sora for future grumpiness.
🔗 linkblog: Sora 2 Watermark Removers Flood the Web
Platformizing AI video generation in the way OpenAI is doing right now just makes me grumpier than I already am.
🔗 linkblog: OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real
Look, I know I’m predisposed to not like any new AI product, but this seems horrifying. Gift link.
🔗 linkblog: Research, curriculum and grading: new data sheds light on how professors are using AI
Surprised that more isn’t made of the fact that Anthropic was surveilling users’ conversations for its research. Are professors and students thinking about the company’s ability to read everything they type?
🔗 linkblog: OpenAI’s Sora 2 Copyright Infringement Machine Features Nazi SpongeBobs and Criminal Pikachus
I continue to believe that cracking down on intellectual property infringement is not the right way to resist AI, but Koebler does a great job of describing how maddening it is that big companies are going to get away with worse infringement than individual people taking advantage of fair use.
🔗 linkblog: Librarians Are Being Asked to Find AI-Hallucinated Books
More money for libraries, less for LLMs.
404 Media podcast on generative AI and epistemology
I’m a big fan of the 404 Media tech news outlet, and I also really enjoy their podcast. I especially appreciated an episode that I listened to yesterday, which I’m embedding below as a YouTube video. (As an aside, I simply do not understand how YouTube has become a major podcast-listening medium, so it pains me a bit to do this. But I’m once again trying to write something quickly before getting to real work, and YouTube embeds are relatively easy to do in Hugo, so that’s what I’m going with.)
🔗 linkblog: The MechaHitler defense contract is raising red flags
Good overview of recent Grok nonsense.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have far fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response is, OpenAI bears some moral responsibility for this sort of thing.
🔗 linkblog: Mason County official says data center could bring 400 jobs averaging $80,000; would require massive amounts of power and water
If this is so great for the community, why won’t the company even identify itself publicly?
🔗 linkblog: Kentucky could be on the eve of a data center boom. But in Mason County details are sketchy. • Kentucky Lantern
Helpful reminder that data center problems are not just hypothetical—they’re potentially local.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I’ve been complaining for years that computer science education stops at the nobility of those goals without asking about the deeper motivations behind those initiatives. So it is with AI: a more diverse field that’s more available to all is better than what we have, but we also have to ask whether AI education is actually a social good.
why I think labor, not copyright, is the foundational problem with AI scrapers
This morning on Bluesky, I saw some posts about a class action lawsuit against Anthropic for its use of pirated, copyrighted materials in training its generative AI models. One of the sources of these copyrighted materials was the LibGen database, which I took a peek at nearly six months ago to confirm what I was already sure was true: that my scientific writing had also been collected as training material by companies like Anthropic and Meta. I don’t love that big tech companies are profiting off of my work in this way, and I’m sympathetic to the authors who are taking legal action against Anthropic. However, as I’ve written repeatedly over the past few years (you can find some of those thoughts, and others, by scrolling through here), I don’t know that copyright is the right way of responding to this kind of abuse.
🔗 linkblog: UK government suggests deleting files to save water
I genuinely think it’s useful to remember that non-AI data centers are also contributing to the climate crisis, but that doesn’t let AI off the hook. It’s like saying, “sure, we’re spending far beyond our means, but have you considered that we’re already in debt?”
🔗 linkblog: Reddit will block the Internet Archive
This sucks. I don’t have a lot of sympathy for Reddit here, which has shown over the past few years a dedicated interest in monetizing its userbase.
🔗 linkblog: A global phenomenon on social media, what are the Italian Brainrots, those absurd AI-generated characters?
My career has been split between valuing digital practices perceived as unimportant and critiquing the technologies that enable those practices. I admit I feel genuinely torn about this example!
🔗 linkblog: Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online
Look, nothing really new in here (Clearview should have made parents rethink the same ages ago, etc.), but yes, AI should get parents to be a hell of a lot more careful with posting pictures of kids.
🔗 linkblog: Sex is getting scrubbed from the internet, but a billionaire can sell you AI nudes
I hadn’t thought about these two trends (cracking down on adult content, and Grok being Grok) being in tension with each other, and I appreciate what this article does to make that clear.