Below are posts associated with the “ChatGPT” tag.
🔗 linkblog: Certain names make ChatGPT grind to a halt, and we know why
Interesting stuff here. I think most complaints about OpenAI “censorship” are hogwash, but it’s still fascinating—and worrying—to see how much control the company exercises over its product.
🔗 linkblog: ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It
This is a darker version of some of the thoughts I had when I first heard about the “PhD comparison.”
Before you click through to the article, I also want to use this short post as a complaint that I don’t think “intelligence” is a thing—and that PhDs certainly wouldn’t be a measure of it if it were.
🔗 linkblog: OpenAI launches programs making ChatGPT cheaper for schools and nonprofits
Oh, please no no no. I usually read a whole article before posting it, but just the first few paragraphs are giving me such a visceral reaction that I don’t know if I’ll make it through the rest. The existing tech giants already have such a hold on us; let’s please not let OpenAI in the door.
Stack Exchange and digital labor
Today, Stack Overflow announced that it was entering into a partnership with OpenAI to provide data from the former to the latter for the purposes of training ChatGPT, etc. I’ve used Stack Overflow a fair amount over the years, and there have also been times when I tried to get into some of the other Stack Exchange sites, contributing both questions and answers. I haven’t really been active on any of these sites recently, but I still decided to take a couple of minutes this afternoon and follow the advice of one outraged Mastodon post: delete my contributions and shut down my accounts.
🔗 linkblog: Scammers Used ChatGPT to Unleash a Crypto Botnet on X | WIRED
Three cheers for ChatGPT or whatever.
🔗 linkblog: An Iowa school district is using ChatGPT to decide which books to ban - The Verge
Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing.
🔗 linkblog: Cleaning Up ChatGPT’s Language Takes Heavy Toll on Human Workers - WSJ
Everyone excited about generative AI needs to account for this kind of thing. We don’t pay enough attention to digital labor and the dehumanizing aspects of content moderation.
draft syllabus statement on code, plagiarism, and generative AI
I’m spending a chunk of today starting on revisions to my Intro to Data Science course for my unit’s LIS and ICT graduate programs. I’d expected to spend most of the time shuffling around the content and assessment for particular weeks, but I quickly realized that I was going to need to update what I had to say in the syllabus about plagiarism and academic offenses. Last year’s offering of the course involved a case of potential plagiarism, so I wanted to include more explicit instruction that encourages students to borrow code while making it clear that there are right and wrong ways of doing so.
🔗 linkblog: The Fanfic Sex Trope That Caught a Plundering AI Red-Handed | WIRED
This is a wild, compelling story that I missed when it first came out. Glad to be reading it now.
🔗 linkblog: Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke
This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem.
🔗 linkblog: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit
I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies that control it are also in a position of power, and it’s important that we treat them critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: the minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated.
🔗 linkblog: As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge
Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output.
🔗 linkblog: ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica
Important points in here.
🔗 linkblog: Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times
Important to keep an eye on this.
🔗 linkblog: Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word
Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest.
🔗 linkblog: OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong? | Techdirt
Just because some worries about ChatGPT are, indeed, moral panics doesn’t mean that there aren’t legitimate criticisms of the technology—including from an educational perspective. I happen to agree with Masnick that schools ultimately need to roll with the punches here, but given how much we already expect of our schools and teachers, it’s reasonable to resent being punched in the first place. Masnick’s point about the error rate for detecting AI-generated text is an important one, though: I don’t think plagiarism-detecting surveillance is at all the right response.
🔗 linkblog: OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time
Looks like the job of AI training is as awful as the job of content moderation.
🔗 linkblog: ChatGPT is enabling script kiddies to write functional malware | Ars Technica
I’ve been making a real effort to be less pessimistic about ChatGPT, and I imagine this makes a better headline than actual threat, but this is still the sort of thing that makes me wonder about AI. What is missing from our world that ChatGPT fills? And is it worth these increased risks?
🔗 linkblog: A CompSci Student Built an App That Can Detect ChatGPT-Generated Text
See, as worried as I am about ChatGPT use in education, this actually worries me more, because it’s basically plagiarism detection, which I oppose.
🔗 linkblog: New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge
Personally, I’m not very optimistic about ChatGPT, and I think OpenAI should have better considered disruptions to fields like education before releasing the tool. That said, I don’t think a ban is the solution here.
🔗 linkblog: Experts Warn ChatGPT Could Democratize Cybercrime - Infosecurity Magazine
Well, this is terrifying.
🔗 linkblog: ChatGPT, Galactica, and the Progress Trap | WIRED
A helpful and thoughtful critique of how people are doing AI text generation.