Below you will find pages that utilize the taxonomy term “AI”
🔗 linkblog: my thoughts on 'In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT | WIRED'
I am not an AI expert, and my concerns aren’t on the existential scale. However, I do think it’s important to avoid moving fast and breaking things with these powerful technologies. That isn’t necessarily to say that more powerful AI shouldn’t be released (though I’m already uninterested in the current stuff), just that racing to improve them for commercial benefit and as a technological flourish doesn’t strike me as socially responsible. link to ‘In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT | WIRED’
🔗 linkblog: my thoughts on 'ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly'
Lots of helpful stuff in here. link to ‘ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly’
🔗 linkblog: my thoughts on 'Paizo bans AI-generated content to support ‘human professionals’ - The Verge'
Very interesting! I know some critics will describe this as a morally panicked response, but I disagree. I think it’s smart to ask how AI will affect human creators and for companies/communities like Paizo to take principled stances. link to ‘Paizo bans AI-generated content to support ‘human professionals’ - The Verge’
🔗 linkblog: my thoughts on 'Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke'
This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem. link to ‘Elon Musk Is Reportedly Building ‘Based AI’ Because ChatGPT Is Too Woke’
📚 bookblog: ❤️❤️❤️❤️❤️ for Illuminae, by Amie Kaufman and Jay Kristoff
This is my third time reading this book—I couldn’t resist coming back to it for the “epistolary novel” square of my library’s “Books and Bites Bingo” challenge this year. The print book is amazing, the audiobook manages to adapt a book that shouldn’t be adaptable, and I enjoyed this read as much as the last two. The language and worldbuilding are subtle but effective, it’s morally complex without trying too hard to be, and the characters are a good mix of believable people and, well, the archetypes you expect in a YA novel.
🔗 linkblog: my thoughts on 'Voice Actors Push Back Against Their Voices Being Used by AI'
Interesting and important read. link to ‘Voice Actors Push Back Against Their Voices Being Used by AI’
🔗 linkblog: my thoughts on '4chan users embrace AI voice clone tool to generate celebrity hatespeech - The Verge'
Why… why don’t we better anticipate misuses like this? Are technological “progress” and market opportunities more important than these side effects? link to ‘4chan users embrace AI voice clone tool to generate celebrity hatespeech - The Verge’
🔗 linkblog: my thoughts on 'CNET Defends Use of AI Blogger After Embarrassing 163-Word Correction: ‘Humans Make Mistakes, Too’'
Here, as with autocorrect and citation managers, my personal opinion is that any human who knows enough to use the tool critically knows enough to do the job themself. Maybe slower, sure, but slower isn’t always bad. link to ‘CNET Defends Use of AI Blogger After Embarrassing 163-Word Correction: ‘Humans Make Mistakes, Too’’
🔗 linkblog: my thoughts on 'How ‘radioactive data’ could help reveal malicious AIs - The Verge'
Fascinating read on potential threats posed by AI—and potential solutions. link to ‘How ‘radioactive data’ could help reveal malicious AIs - The Verge’
🔗 linkblog: my thoughts on 'Experts Warn ChatGPT Could Democratize Cybercrime - Infosecurity Magazine'
Well, this is terrifying.
link to ‘Experts Warn ChatGPT Could Democratize Cybercrime - Infosecurity Magazine’
🔗 linkblog: my thoughts on 'Thanks to AI, it’s probably time to take your photos off the Internet | Ars Technica'
Good thing engineers really anticipated and considered these consequences before developing this software, right?
link to ‘Thanks to AI, it’s probably time to take your photos off the Internet | Ars Technica’
🔗 linkblog: my thoughts on 'ChatGPT, Galactica, and the Progress Trap | WIRED'
A helpful and thoughtful critique of how people are doing AI text generation.
link to ‘ChatGPT, Galactica, and the Progress Trap | WIRED’
🔗 linkblog: my thoughts on 'Facebook Pulls Its New ‘AI For Science’ Because It’s Broken and Terrible'
Very interesting read.
link to ‘Facebook Pulls Its New ‘AI For Science’ Because It’s Broken and Terrible’
🔗 linkblog: my thoughts on 'Students Are Using AI to Write Their Papers, Because Of Course They Are'
Really important story here, and glad to see George Veletsianos quoted. I’ve long been an advocate for developing assessments that are impossible to cheat at, but I don’t know if that’s the entire (or even a practical) response to GPT-3. We are continuing to develop technologies whose societal effects we are not prepared for.
link to ‘Students Are Using AI to Write Their Papers, Because Of Course They Are’
🔗 linkblog: my thoughts on 'AI Is Probably Using Your Images and It's Not Easy to Opt Out'
Ooof. AI-generated art is fun, but it comes at a price, and we can’t afford to forget it.
link to ‘AI Is Probably Using Your Images and It’s Not Easy to Opt Out’
🔗 linkblog: my thoughts on 'The Tech We Won’t Build — The Internet Health Report 2022'
Compelling podcast episode from Mozilla highlighting morally dubious uses of AI. It’s really important that we be more reflective about this instead of trying things and seeing where they lead.
link to ‘The Tech We Won’t Build — The Internet Health Report 2022’