- kudos:

This week has enough writing (and deadlines!) that the utilitarian appeal of ChatGPT is finally clear to me; and yet, it’s also so much clearer that I would rather do fewer things well and on my own.

🔗 linkblog: my thoughts on 'Scammers Used ChatGPT to Unleash a Crypto Botnet on X | WIRED'

- kudos:

Three cheers for ChatGPT or whatever. link to ‘Scammers Used ChatGPT to Unleash a Crypto Botnet on X | WIRED’

🔗 linkblog: my thoughts on 'An Iowa school district is using ChatGPT to decide which books to ban - The Verge'

- kudos:

Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing. link to ‘An Iowa school district is using ChatGPT to decide which books to ban - The Verge’

- kudos:

I get why folks in ed compare ChatGPT to Wikipedia, but there are important differences. Wikipedia is entirely non-profit, lays bare its knowledge generation process, can be fixed on the fly, and can’t actively generate problematic content. It’s not just about reliability.

🔗 linkblog: my thoughts on 'Cleaning Up ChatGPT’s Language Takes Heavy Toll on Human Workers - WSJ'

- kudos:

Everyone excited about generative AI needs to account for this kind of thing. We don’t pay enough attention to digital labor and the dehumanizing aspects of content moderation. link to ‘Cleaning Up ChatGPT’s Language Takes Heavy Toll on Human Workers - WSJ’

draft syllabus statement on code, plagiarism, and generative AI

- kudos:

I’m spending a chunk of today starting on revisions to my Intro to Data Science course for my unit’s LIS and ICT graduate programs. I’d expected to spend most of the time shuffling around the content and assessment for particular weeks, but I quickly realized that I was going to need to update what I had to say in the syllabus about plagiarism and academic offenses. Last year’s offering of the course involved a case of potential plagiarism, so I wanted to include more explicit instruction that encourages students to borrow code while making clear that there are right and wrong ways of doing so.

🔗 linkblog: my thoughts on 'The Fanfic Sex Trope That Caught a Plundering AI Red-Handed | WIRED'

- kudos:

This is a wild, compelling story that I missed when it first came out. Glad to be reading it now. link to ‘The Fanfic Sex Trope That Caught a Plundering AI Red-Handed | WIRED’

🔗 linkblog: my thoughts on 'Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke'

- kudos:

This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem. link to ‘Elon Musk Is Reportedly Building ‘Based AI’ Because ChatGPT Is Too Woke’

🔗 linkblog: my thoughts on 'OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit'

- kudos:

I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies controlling it are also in positions of power, and it’s important that we treat them critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: the minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated.

🔗 linkblog: my thoughts on 'As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge'

- kudos:

Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output. link to ‘As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge’

🔗 linkblog: my thoughts on 'ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica'

- kudos:

Important points in here. link to ‘ChatGPT is a data privacy nightmare, and we ought to be concerned | Ars Technica’

🔗 linkblog: my thoughts on 'Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times'

- kudos:

Important to keep an eye on this. link to ‘Disinformation Researchers Raise Alarms About A.I. Chatbots - The New York Times’

🔗 linkblog: my thoughts on 'Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word'

- kudos:

Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest. link to ‘Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word’

🔗 linkblog: my thoughts on 'OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong? | Techdirt'

- kudos:

Just because some worries about ChatGPT are, indeed, moral panics doesn’t mean that there aren’t legitimate criticisms of the technology—including from an educational perspective. I happen to agree with Masnick that schools ultimately need to roll with the punches here, but given how much we already expect of our schools and teachers, it’s reasonable to resent being punched in the first place. Masnick’s point about the error rate for detecting AI-generated text is an important one, though: I don’t think plagiarism-detecting surveillance is at all the right response.

🔗 linkblog: my thoughts on 'OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time'

- kudos:

Looks like the job of AI training is as awful as the job of content moderation. link to ‘OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time’

- kudos:

I’ve seen jokes about the supposed irony of having to fill out a CAPTCHA to use ChatGPT, but it’s actually pretty consistent: The purpose of CAPTCHA is also to mine the fruits of human labor to train ML/AI that can replace human labor.

🔗 linkblog: my thoughts on 'ChatGPT is enabling script kiddies to write functional malware | Ars Technica'

- kudos:

I’ve been making a real effort to be less pessimistic about ChatGPT, and I imagine this makes for a better headline than an actual threat, but this is still the sort of thing that makes me wonder about AI. What is missing from our world that ChatGPT fills? And is it worth these increased risks? link to ‘ChatGPT is enabling script kiddies to write functional malware | Ars Technica’

🔗 linkblog: my thoughts on 'A CompSci Student Built an App That Can Detect ChatGPT-Generated Text'

- kudos:

See, as worried as I am about ChatGPT use in education, this actually worries me more, because it’s basically plagiarism detection, which I oppose. link to ‘A CompSci Student Built an App That Can Detect ChatGPT-Generated Text’

🔗 linkblog: my thoughts on 'New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge'

- kudos:

Personally, I’m not very optimistic about ChatGPT, and I think OpenAI should have better considered disruptions to fields like education before releasing the tool. That said, I don’t think a ban is the solution here. link to ‘New York City schools ban access to ChatGPT over fears of cheating and misinformation - The Verge’

🔗 linkblog: my thoughts on 'Experts Warn ChatGPT Could Democratize Cybercrime - Infosecurity Magazine'

- kudos:

Well, this is terrifying. link to ‘Experts Warn ChatGPT Could Democratize Cybercrime - Infosecurity Magazine’

🔗 linkblog: my thoughts on 'ChatGPT, Galactica, and the Progress Trap | WIRED'

- kudos:

A helpful and thoughtful critique of how people are doing AI text generation. link to ‘ChatGPT, Galactica, and the Progress Trap | WIRED’