Below are posts associated with the “OpenAI” tag.
🔗 linkblog: Disney wants to drag you into the slop
I missed the detail about Disney+ using some of the Sora output, and that makes this whole thing even more about labor exploitation.
🔗 linkblog: OpenAI’s billion-dollar Disney deal puts Mickey Mouse and Marvel in Sora
Disney’s involvement, given how it infamously stiffed Alan Dean Foster on Star Wars royalties, clearly demonstrates that the underlying issue with generative AI isn’t copyright; it’s labor.
🔗 linkblog: OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
I genuinely don’t know what legal liability for generative AI products should look like, but arguing that the onus was on the kid and his family because of TOS strikes me as incredibly shitty, not to mention falling back on “look, we have a mission to benefit humanity by building AI, have you taken that into account?”
🔗 linkblog: Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet
Some good (scary) observations in here—not least speculation on what xAI’s version of Sora would look like.
🔗 linkblog: What the Arrival of A.I. Video Generators Like Sora Means for Us
Strong Ellul vibes in this passage:
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it. Society as a whole will have to treat videos with as much skepticism as people already do words.
Unclear, though, whether Ellul would be cool with increased skepticism of the image or angry at the technology causing it.
🔗 linkblog: People Are Crashing Out Over Sora 2’s New Guardrails
Look, maybe this is a genuine misstep on OpenAI’s part, but it still feels to me like the company started with the guardrails off so that it could use this kind of user backlash to push the Overton window in conversations with rightsholders.
Also, remember that we small potatoes rightsholders will never be able to have our voices heard like Disney or Nintendo.
🔗 linkblog: OpenAI wasn’t expecting Sora’s copyright drama
Something feels off here. An AI CEO who claims they genuinely didn’t anticipate copyright and deepfake concerns is either dumb or playing dumb. I can’t help but suspect the latter, which is arguably worse, since it suggests an effort to shift the discourse before complaints come in.
Jacques Ellul versus the Sora app
Somewhat by chance, I recently finished reading two different books by Jacques Ellul: Théologie et technique and Humiliation of the Word (the English translation of La parole humiliée, since I’ll need to write about it in English, and I admit, besides, that my French isn’t always up to Ellul “in the original”). For several days now I’ve wanted to write something about the image-word relationship he establishes in the pages of La parole humiliée, and I still plan to write that post, but as I finished Théologie et technique, I was struck by a passage that closely resembles what I wanted to write about from the other book.
🔗 linkblog: Dead celebrities are apparently fair game for Sora 2 video manipulation
Just bookmarking everything I read on Sora for future grumpiness.
🔗 linkblog: Sora 2 Watermark Removers Flood the Web
Platformizing AI video generation in the way OpenAI is doing right now just makes me grumpier than I already am.
🔗 linkblog: OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real
Look, I know I’m predisposed to not like any new AI product, but this seems horrifying. Gift link.
🔗 linkblog: Tech leaders take turns flattering Trump at White House dinner
Ugh, this article makes it sound even worse.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am with this, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have many fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response to this is, OpenAI has some moral responsibility for this sort of thing.
🔗 linkblog: New AI model: ChatGPT-5: “It’s like talking to a PhD-level expert”
In my view, “PhD-level” expertise is mostly about the process of coming to know rather than the knowledge itself, and generative AI doesn’t respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these. However, I’m reminded of “open source” Android, which is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. Gift link.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: What is AGI? Nobody agrees, and it’s tearing Microsoft and OpenAI apart.
Karen Hao’s Empire of AI really emphasized for me how much stock is being put in AGI—especially as a motivator for AI companies. I am fine with concepts being hard to define, but I do think things get tricky when you can’t articulate how you’ll know when you’ve met the goal that serves as your raison d’être.
🔗 linkblog: Microsoft, OpenAI, and a US Teachers’ Union Are Hatching a Plan to ‘Bring AI into the Classroom’
It feels like it’s Big Tech’s world and schools are just living in it.
📚 bookblog: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (❤️❤️❤️❤️❤️)
This is a good book, with a powerful thesis and a great epilogue that ties things together. It isn’t perfect, but I think most of my quibbles are related to the subject matter and the genre. It’s hard to write a book about a contemporary subject of such importance, and I think it’s tricky to write a book that combines history with more of a critical take on the AI ecosystem.
🔗 linkblog: OpenAI and Anthropic are fighting over college students with free AI
I was already planning to voice skepticism about Apple partnerships with universities in a manuscript I’m writing, but now I’ve got this to cite as well.
🔗 linkblog: OpenAI's viral Studio Ghibli moment highlights AI copyright concerns | TechCrunch
Generative AI products make me mad, I don’t like them, and I’m not going to defend them. That said, if this gets framed as a copyright problem, is there any way to give Studio Ghibli (or Pixar or the Seuss estate) power to cry foul here that doesn’t also shut down fan art, parodies, and the like? I’m skeptical, and that’s why I think “labor” is the more productive—if more legally ambiguous—framing here.
🔗 linkblog: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
I believe that scraping the internet to profit off of generative AI is ethically problematic BUT I concede that it should be fair use BUT this is still a soulless and terrible argument.