Below are posts associated with the “generative AI” tag.
🔗 linkblog: Dead celebrities are apparently fair game for Sora 2 video manipulation
Just bookmarking everything I read on Sora for future grumpiness.
🔗 linkblog: Sora 2 Watermark Removers Flood the Web
Platformizing AI video generation in the way OpenAI is doing right now just makes me grumpier than I already am.
🔗 linkblog: OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real
Look, I know I’m predisposed to not like any new AI product, but this seems horrifying. Gift link.
🔗 linkblog: Research, curriculum and grading: new data sheds light on how professors are using AI
Surprised that more isn’t made of the fact that Anthropic was surveilling users’ conversations for its research. Are professors and students thinking about the company’s ability to read everything they type?
🔗 linkblog: OpenAI’s Sora 2 Copyright Infringement Machine Features Nazi SpongeBobs and Criminal Pikachus
I continue to believe that intellectual property enforcement is not the right way to resist AI, but Koebler does a great job of describing how maddening it is that big companies are going to get away with worse infringement than individual people taking advantage of fair use.
🔗 linkblog: Librarians Are Being Asked to Find AI-Hallucinated Books
More money for libraries, less for LLMs.
404 Media podcast on generative AI and epistemology
I’m a big fan of the 404 Media tech news outlet, and I also really enjoy their podcast. I especially appreciated an episode that I listened to yesterday, which I’m embedding below as a YouTube video. (As an aside, I simply do not understand how YouTube has become a major podcast-listening medium, so it pains me a bit to do this. But I’m once again trying to write something quickly before getting to real work, and YouTube embeds are relatively easy to do in Hugo, so that’s what I’m going with.)
🔗 linkblog: The MechaHitler defense contract is raising red flags
Good overview of recent Grok nonsense.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am with this, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have many fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response to this is, OpenAI has some moral responsibility for this sort of thing.
🔗 linkblog: Mason County official says data center could bring 400 jobs averaging $80,000; would require massive amounts of power and water
If this is so great for the community, why won’t the company even identify itself publicly?
🔗 linkblog: Kentucky could be on the eve of a data center boom. But in Mason County details are sketchy. • Kentucky Lantern
Helpful reminder that data center problems are not just hypothetical—they’re potentially local.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I’ve complained for years that computer science education stops at the nobility of those goals and doesn’t ask about the deeper motivations behind those initiatives. So it is with AI: A more diverse field, more available to all, is better than what we have, but we also have to ask whether AI education is actually a social good.
why I think labor, not copyright, is the foundational problem with AI scrapers
This morning on Bluesky, I saw some posts about a class action lawsuit against Anthropic for their use of pirated, copyrighted materials in training their generative AI models. One of the sources of these copyrighted materials was the LibGen database, which I took a peek at nearly six months ago to confirm what I was already sure to be true: that my scientific writing was also collected as training material by companies like Anthropic or Meta. I don’t love that big tech companies are profiting off of my work in this way, and I’m sympathetic to the authors who are taking legal action against Anthropic. However, as I’ve written repeatedly over the past few years (you can find some of those thoughts—and others—by scrolling through here), I don’t know that copyright is the right way of responding to this kind of abuse.
🔗 linkblog: UK government suggests deleting files to save water
I genuinely think it’s useful to remember that non-AI datacenters are also contributing to the climate crisis, but that doesn’t let AI off the hook. It’s like saying “sure, we’re spending far beyond our means, but have you considered that we’re already in debt?”
🔗 linkblog: Reddit will block the Internet Archive
This sucks. I don’t have a lot of sympathy for Reddit here, though, which has shown over the past few years a dedicated interest in monetizing its userbase.
🔗 linkblog: A global phenomenon on social media, what are the Italian Brainrots, those absurd AI-generated characters?
My career is split between valuing digital practices perceived as unimportant and critiquing the technologies that enable those practices. I admit I feel genuinely torn on this example!
🔗 linkblog: Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online
Look, nothing really new in here (Clearview should have made parents rethink the same ages ago, etc.), but yes, AI should get parents to be a hell of a lot more careful with posting pictures of kids.
🔗 linkblog: Sex is getting scrubbed from the internet, but a billionaire can sell you AI nudes
I hadn’t thought about these two trends (cracking down on adult content, and Grok being Grok) being in tension with each other, and I appreciate what this article does to make that clear.
🔗 linkblog: Grok's 'Spicy' Mode Makes NSFW Celebrity Deepfakes of Women (But Not Men)
Unsurprising but disappointing.
🔗 linkblog: AI industry horrified to face largest copyright class action ever certified
Again, I’m not sure copyright is the way to go in fighting immoral generative AI companies (that the ALA and EFF are on Anthropic’s side seems important to me), but “we have to be able to do this to be successful” still strikes me as such a hollow, self-serving argument.
🔗 linkblog: New AI model: ChatGPT-5: “It’s like talking to a PhD-level expert”
In my view, “PhD-level” expertise is mostly tied to the process of coming to know, not to the knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these ones. However, I’m reminded of “open source” Android, which is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. Gift link.
🔗 linkblog: Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
I’m glad that someone is doing this white hat work, but I hate that we live in a world where someone has to.