🔗 linkblog: my thoughts on 'Pluralistic: The real AI fight (27 Nov 2023) – Pluralistic: Daily links from Cory Doctorow'
I haven’t been following this debate, but Doctorow and White’s points resonate with me. link to ‘Pluralistic: The real AI fight (27 Nov 2023) – Pluralistic: Daily links from Cory Doctorow’
🔗 linkblog: my thoughts on 'An Iowa school district is using ChatGPT to decide which books to ban - The Verge'
Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing. link to ‘An Iowa school district is using ChatGPT to decide which books to ban - The Verge’
🔗 linkblog: my thoughts on 'OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge'
Look, if an automated process could save human moderators from the awful work they have to do, I’d be all for it. I’m unconvinced that GPT-4 could do it, though. link to ‘OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge’
🔗 linkblog: my thoughts on 'AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian'
So many important points in this piece. link to ‘AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian’
🔗 linkblog: my thoughts on 'Now you can block OpenAI’s web crawler - The Verge'
This is a welcome step, but I’m concerned it’s an empty, distracting gesture—it certainly doesn’t solve the deeper issue. link to ‘Now you can block OpenAI’s web crawler - The Verge’
🔗 linkblog: my thoughts on 'Reddit Won’t Be the Same. Neither Will the Internet | WIRED'
Good focus on the digital labor aspects of this whole thing. I sympathize with Reddit for not wanting to provide free value for generative AI (this is one of the trickiest parts of that conversation), but Reddit’s users are right to balk at providing free value for the platform. link to ‘Reddit Won’t Be the Same. Neither Will the Internet | WIRED’
🔗 linkblog: my thoughts on 'OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge'
Last paragraph here is an important one: I’ve seen a lot of headlines about OpenAI calling for regulation, but it’s noteworthy that it’s hypothetical future regulation. link to ‘OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation - The Verge’
🔗 linkblog: my thoughts on 'ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly'
Lots of helpful stuff in here. link to ‘ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly’
🔗 linkblog: my thoughts on 'Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke'
This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem. link to ‘Elon Musk Is Reportedly Building ‘Based AI’ Because ChatGPT Is Too Woke’
🔗 linkblog: my thoughts on 'OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit'
I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies that control it are also in positions of power, and it’s important that we treat those companies critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: The minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated. link to ‘OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit’
🔗 linkblog: my thoughts on 'As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge'
Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output. link to ‘As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge’
🔗 linkblog: my thoughts on 'ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica'
Important points in here. link to ‘ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica’
🔗 linkblog: my thoughts on 'Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times'
Important to keep an eye on this. link to ‘Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times’
🔗 linkblog: my thoughts on 'Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word'
Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest. link to ‘Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word’
🔗 linkblog: my thoughts on 'OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong? | Techdirt'
Just because some worries about ChatGPT are, indeed, moral panics doesn’t mean that there aren’t legitimate criticisms of the technology—including from an educational perspective. I happen to agree with Masnick that schools ultimately need to roll with the punches here, but given how much we already expect of our schools and teachers, it’s reasonable to resent being punched in the first place. Masnick’s point about the error rate for detecting AI-generated text is an important one, though: I don’t think plagiarism-detecting surveillance is at all the right response.
🔗 linkblog: my thoughts on 'ChatGPT Is Passing the Tests Required for Medical Licenses and Business Degrees'
Headline overstates things a bit, and I’m on team “change the assessments,” but it’s still worth asking if AI developers are appropriately anticipating the disruptions these tools are causing. link to ‘ChatGPT Is Passing the Tests Required for Medical Licenses and Business Degrees’
🔗 linkblog: my thoughts on 'OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time'
Looks like the job of AI training is as awful as the job of content moderation. link to ‘OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time’
🔗 linkblog: my thoughts on 'ChatGPT is enabling script kiddies to write functional malware | Ars Technica'
I’ve been making a real effort to be less pessimistic about ChatGPT, and I imagine this makes for a better headline than an actual threat, but it’s still the sort of thing that makes me wonder about AI. What is missing from our world that ChatGPT fills? And is it worth these increased risks? link to ‘ChatGPT is enabling script kiddies to write functional malware | Ars Technica’
🔗 linkblog: my thoughts on 'New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge'
Personally, I’m not very optimistic about ChatGPT, and I think OpenAI should have better considered disruptions to fields like education before releasing the tool. That said, I don’t think a ban is the solution here. link to ‘New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge’