Below you will find pages that utilize the taxonomy term “OpenAI”
🔗 linkblog: my thoughts on 'ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It'
- kudos:This is a darker version of some of the thoughts I had when I first heard about the “PhD comparison.” Before you click through to the article, I also want to use this short post as a complaint that I don’t think “intelligence” is a thing—and that PhDs certainly wouldn’t be a measure of it if it were. link to “ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It”
🔗 linkblog: my thoughts on 'Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic'
- kudos:In a roundabout way, I think this helps demonstrate why scraping data for generative AI isn’t a question of copyright. Even when there is a legal agreement, it can still be exploitative—it’s a question of digital labor. link to “Journalists ‘deeply troubled’ by OpenAI’s content deals with Vox, The Atlantic”
🔗 linkblog: my thoughts on 'OpenAI launches programs making ChatGPT cheaper for schools and nonprofits'
- kudos:Oh, please no no no. I usually read a whole article before posting it, but just the first few paragraphs are giving me such a visceral reaction that I don’t know if I’ll make it through the rest. The existing tech giants already have such a hold on us; let’s please not let OpenAI in the door. link to “OpenAI launches programs making ChatGPT cheaper for schools and nonprofits”
🔗 linkblog: my thoughts on 'OpenAI loses its voice'
- kudos:Look, it shouldn’t take this story for people to realize that OpenAI exploits others’ contributions to make its products, but if it does the trick, I’ll take it. (And this is admittedly creepier than its base-level exploitation.) link to “OpenAI loses its voice”
🔗 linkblog: my thoughts on 'Stack Overflow users sabotage their posts after OpenAI deal'
- kudos:Some better, broader coverage of complaints I made in a blog post earlier this week. link to “Stack Overflow users sabotage their posts after OpenAI deal”
🔗 linkblog: my thoughts on 'OpenAI, Mass Scraper of Copyrighted Work, Claims Copyright Over Subreddit's Logo'
- kudos:I don’t think intellectual property is the way to fight back against generative AI, but it is wildly out of line for a company that profits off using others’ intellectual property to be this petty. link to “OpenAI, Mass Scraper of Copyrighted Work, Claims Copyright Over Subreddit’s Logo”
Stack Exchange and digital labor
- kudos:Today, Stack Overflow announced that it was entering into a partnership with OpenAI to provide data from the former to the latter for the purposes of training ChatGPT, etc. I’ve used Stack Overflow a fair amount over the years, and there have also been times when I tried to get into some of the other Stack Exchange sites, contributing both questions and answers. I haven’t really been active on any of these sites recently, but I still decided to take a couple of minutes this afternoon and follow the advice of one outraged Mastodon post: delete my contributions and shut down my accounts.
🔗 linkblog: my thoughts on 'Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks'
- kudos:Hmm. Unsurprising but all the more frustrating for it. link to “Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks”
🔗 linkblog: my thoughts on 'The rise and fall of robots.txt'
- kudos:Fascinating read on web crawlers and robots.txt. link to “The rise and fall of robots.txt”
🔗 linkblog: my thoughts on 'OpenAI went back on a promise to make key documents public | Ars Technica'
- kudos:If OpenAI is going to be an influential company, it would be nice for it to be more transparent. link to “OpenAI went back on a promise to make key documents public | Ars Technica”
🔗 linkblog: my thoughts on 'I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy - The Verge'
- kudos:Yeah, but don’t worry, this is definitely the only way that generative AI will be used to overwhelm us with useless content. link to “I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy - The Verge”
- kudos:
I have lots of concerns about LLM training, but I think it’s better to think of the issue in terms of digital labor, not copyright. My blog is licensed for reuse, but that doesn’t mean it’s any less exploitative for someone to scrape it all to develop software that will make them rich off my work.
🔗 linkblog: my thoughts on 'Pluralistic: The real AI fight (27 Nov 2023) – Pluralistic: Daily links from Cory Doctorow'
- kudos:I haven’t been following this debate, but Doctorow and White’s points resonate with me. link to “Pluralistic: The real AI fight (27 Nov 2023) – Pluralistic: Daily links from Cory Doctorow”
🔗 linkblog: my thoughts on 'An Iowa school district is using ChatGPT to decide which books to ban - The Verge'
- kudos:Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing. link to ‘An Iowa school district is using ChatGPT to decide which books to ban - The Verge’
🔗 linkblog: my thoughts on 'OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge'
- kudos:Look, if an automated process could save human moderators from the awful work they have to do, I’d be all for it. I’m unconvinced that GPT-4 could do it, though. link to ‘OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge’
- kudos:
I get why folks in ed compare ChatGPT to Wikipedia, but there are important differences. Wikipedia is entirely non-profit, lays bare its knowledge generation process, can be fixed on the fly, and can’t actively generate problematic content. It’s not just about reliability.
🔗 linkblog: my thoughts on 'AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian'
- kudos:So many important points in this piece. link to ‘AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian’
🔗 linkblog: my thoughts on 'Now you can block OpenAI’s web crawler - The Verge'
- kudos:This is a welcome step, but I’m concerned it’s an empty, distracting gesture—it certainly doesn’t solve the deeper issue. link to ‘Now you can block OpenAI’s web crawler - The Verge’
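A quick note on the mechanics of the step above, in case it’s useful: blocking comes down to a robots.txt rule aimed at OpenAI’s GPTBot user agent. A minimal sketch, assuming you want to disallow crawling of the whole site:

```
# Tell OpenAI's GPTBot crawler not to crawl anything on this site
User-agent: GPTBot
Disallow: /
```

Of course, this only works to the extent that the crawler chooses to honor robots.txt, which is part of why it feels more like a gesture than a solution.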
🔗 linkblog: my thoughts on 'Reddit Won’t Be the Same. Neither Will the Internet | WIRED'
- kudos:Good focus on the digital labor aspects of this whole thing. I sympathize with Reddit for not wanting to provide free value for generative AI (this is one of the trickiest parts of that conversation), but Reddit’s users are right to balk at providing free value for the platform. link to ‘Reddit Won’t Be the Same. Neither Will the Internet | WIRED’
🔗 linkblog: my thoughts on 'OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge'
- kudos:Last paragraph here is an important one: I’ve seen a lot of headlines about OpenAI calling for regulation, but it’s noteworthy that what it’s calling for is hypothetical future regulation. link to ‘OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge’
🔗 linkblog: my thoughts on 'ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly'
- kudos:Lots of helpful stuff in here. link to ‘ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly’
🔗 linkblog: my thoughts on 'Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke'
- kudos:This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem. link to ‘Elon Musk Is Reportedly Building ‘Based AI’ Because ChatGPT Is Too Woke’
🔗 linkblog: my thoughts on 'OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit'
- kudos:I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies that control it are also in a position of power, and it’s important that we treat them critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: the minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated.
🔗 linkblog: my thoughts on 'As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge'
- kudos:Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output. link to ‘As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge’
🔗 linkblog: my thoughts on 'ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica'
- kudos:Important points in here. link to ‘ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica’
🔗 linkblog: my thoughts on 'Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times'
- kudos:Important to keep an eye on this. link to ‘Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times’
🔗 linkblog: my thoughts on 'Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word'
- kudos:Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest. link to ‘Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word’
🔗 linkblog: my thoughts on 'OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong? | Techdirt'
- kudos:Just because some worries about ChatGPT are, indeed, moral panics doesn’t mean that there aren’t legitimate criticisms of the technology—including from an educational perspective. I happen to agree with Masnick that schools ultimately need to roll with the punches here, but given how much we already expect of our schools and teachers, it’s reasonable to resent being punched in the first place. Masnick’s point about the error rate for detecting AI-generated text is an important one, though: I don’t think plagiarism-detecting surveillance is at all the right response.
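To put purely illustrative numbers on that error-rate point: even a detector with a 1% false-positive rate, run against 10,000 essays that students genuinely wrote themselves, would wrongly flag about 100 of them as AI-generated. Those are hypothetical figures, not ones from the article, but they show why even a “mostly accurate” detector is a poor basis for accusing individual students.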
🔗 linkblog: my thoughts on 'ChatGPT Is Passing the Tests Required for Medical Licenses and Business Degrees'
- kudos:Headline overstates things a bit, and I’m on team “change the assessments,” but it’s still worth asking if AI developers are appropriately anticipating the disruptions these tools are causing. link to ‘ChatGPT Is Passing the Tests Required for Medical Licenses and Business Degrees’
🔗 linkblog: my thoughts on 'OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time'
- kudos:Looks like the job of AI training is as awful as the job of content moderation. link to ‘OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time’
🔗 linkblog: my thoughts on 'ChatGPT is enabling script kiddies to write functional malware | Ars Technica'
- kudos:I’ve been making a real effort to be less pessimistic about ChatGPT, and I imagine this makes for a better headline than an actual threat, but this is still the sort of thing that makes me wonder about AI. What is missing from our world that ChatGPT fills? And is it worth these increased risks? link to ‘ChatGPT is enabling script kiddies to write functional malware | Ars Technica’
🔗 linkblog: my thoughts on 'New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge'
- kudos:Personally, I’m not very optimistic about ChatGPT, and I think OpenAI should have better considered disruptions to fields like education before releasing the tool. That said, I don’t think a ban is the solution here. link to ‘New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge’