Below are posts associated with the “OpenAI” tag.
🔗 linkblog: OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real
Look, I know I’m predisposed to not like any new AI product, but this seems horrifying. Gift link.
🔗 linkblog: Tech leaders take turns flattering Trump at White House dinner
Ugh, this article makes it sound even worse.
🔗 linkblog: “ChatGPT killed my son”: Parents’ lawsuit describes suicide notes in chat logs
This is horrifying. Reading the headline is one thing, but reading some of the details is stomach-churning. I’m not a lawyer, and as disgusted as I am with this, I don’t know what legal liability should look like here. I feel more comfortable describing this as ethically bankrupt, though. I think I would have many fewer concerns about generative AI if it weren’t a platformized consumer product. Whatever the right legal response to this is, OpenAI has some moral responsibility for this sort of thing.
🔗 linkblog: New AI model: ChatGPT-5: “It’s like talking to a PhD-level expert”
In my view, “PhD-level” expertise is mostly tied to the process of coming to know, not to the knowledge itself, and generative AI doesn’t respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these ones. However, I’m reminded of “open source” Android, whose openness is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. Gift link.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: What is AGI? Nobody agrees, and it’s tearing Microsoft and OpenAI apart.
Karen Hao’s Empire of AI really emphasized for me how much stock is being put in AGI—especially as a motivator for AI companies. I am fine with concepts being hard to define, but I do think things get tricky when you can’t articulate how you’ll know when you’ve met the goal that serves as your raison d’être.
🔗 linkblog: Microsoft, OpenAI, and a US Teachers’ Union Are Hatching a Plan to ‘Bring AI into the Classroom’
It feels like it’s Big Tech’s world and schools are just living in it.
📚 bookblog: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI (❤️❤️❤️❤️❤️)
This is a good book, with a powerful thesis and a great epilogue that ties things together. It isn’t perfect, but I think most of my quibbles are related to the subject matter and the genre. It’s hard to write a book about a contemporary subject of such importance, and I think it’s tricky to write a book that combines history with more of a critical take on the AI ecosystem.
🔗 linkblog: OpenAI and Anthropic are fighting over college students with free AI
I was already planning to voice skepticism about Apple partnerships with universities in a manuscript I’m writing, but now I’ve got this to cite as well.
🔗 linkblog: OpenAI's viral Studio Ghibli moment highlights AI copyright concerns | TechCrunch
Generative AI products make me mad, I don’t like them, and I’m not going to defend them. That said, if this gets framed as a copyright problem, is there any way to give Studio Ghibli (or Pixar or the Seuss estate) power to cry foul here that doesn’t also shut down fan art, parodies, and the like? I’m skeptical, and that’s why I think “labor” is the more productive—if more legally ambiguous—framing here.
🔗 linkblog: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
I believe that scraping the internet to profit off of generative AI is ethically problematic BUT I concede that it should be fair use BUT this is still a soulless and terrible argument.
🔗 linkblog: OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us
Yeah, it’s really hard to have any sympathy here at all.
🔗 linkblog: Certain names make ChatGPT grind to a halt, and we know why
Interesting stuff here. I think most complaints about OpenAI “censorship” are hogwash, but it’s still fascinating—and worrying—to see how much control the company exercises over its product.
🔗 linkblog: ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It
This is a darker version of some of the thoughts I had when I first heard about the “PhD comparison.”
Before you click through to the article, I also want to use this short post as a complaint that I don’t think “intelligence” is a thing—and that PhDs certainly wouldn’t be a measure of it if it were.
🔗 linkblog: Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic
In a roundabout way, I think this helps demonstrate why scraping data for generative AI isn’t a question of copyright. Even when there is a legal agreement, it can still be exploitative—it’s a question of digital labor.
🔗 linkblog: OpenAI launches programs making ChatGPT cheaper for schools and nonprofits
Oh, please no no no. I usually read a whole article before posting it, but just the first few paragraphs are giving me such a visceral reaction that I don’t know if I’ll make it through the rest. The existing tech giants already have such a hold on us; let’s please not let OpenAI in the door.
🔗 linkblog: OpenAI loses its voice
Look, it shouldn’t take this story for people to realize that OpenAI exploits others’ contributions to make its products, but if it does the trick, I’ll take it. (And this is admittedly creepier than its base-level exploitation.)
🔗 linkblog: Stack Overflow users sabotage their posts after OpenAI deal
Some better, broader coverage of complaints I made in a blog post earlier this week.
🔗 linkblog: OpenAI, Mass Scraper of Copyrighted Work, Claims Copyright Over Subreddit's Logo
I don’t think intellectual property is the way to fight back against generative AI, but it is wildly out of line for a company that profits off using others’ intellectual property to be this petty.
Stack Exchange and digital labor
Today, Stack Overflow announced that it was entering into a partnership with OpenAI to provide data from the former to the latter for the purposes of training ChatGPT, etc. I’ve used Stack Overflow a fair amount over the years, and there have also been times when I tried to get into some of the other Stack Exchange sites, contributing both questions and answers. I haven’t really been active on any of these sites recently, but I still decided to take a couple of minutes this afternoon and follow the advice of one outraged Mastodon post: delete my contributions and shut down my accounts.
🔗 linkblog: Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks
Hmm. Unsurprising but all the more frustrating for it.
🔗 linkblog: OpenAI went back on a promise to make key documents public | Ars Technica
If OpenAI is going to be an influential company, it would be nice for it to be more transparent.
🔗 linkblog: I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy - The Verge
Yeah, but don’t worry, this is definitely the only way that generative AI will be used to overwhelm us with useless content.
🔗 linkblog: Pluralistic: The real AI fight (27 Nov 2023) – Pluralistic: Daily links from Cory Doctorow
I haven’t been following this debate, but Doctorow and White’s points resonate with me.
🔗 linkblog: An Iowa school district is using ChatGPT to decide which books to ban - The Verge
Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing.
🔗 linkblog: OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge
Look, if an automated process could save human moderators from the awful work they have to do, I’d be all for it. I’m unconvinced that GPT-4 could do it, though.
🔗 linkblog: AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian
So many important points in this piece.
🔗 linkblog: Now you can block OpenAI’s web crawler - The Verge
This is a welcome step, but I’m concerned it’s an empty, distracting gesture—it certainly doesn’t solve the deeper issue.
🔗 linkblog: Reddit Won’t Be the Same. Neither Will the Internet | WIRED
Good focus on the digital labor aspects of this whole thing. I sympathize with Reddit for not wanting to provide free value for generative AI (this is one of the trickiest parts of that conversation), but Reddit’s users are right to balk at providing free value for the platform.
🔗 linkblog: OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge
Last paragraph here is an important one: I’ve seen a lot of headlines about OpenAI calling for regulation, but it’s noteworthy that it’s hypothetical future regulation.
🔗 linkblog: ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly
Lots of helpful stuff in here.