Below are posts associated with the “generative AI” tag.
🔗 linkblog: AI Dungeon Master experiment exposes the vulnerability of Critical Role’s fandom
Digital labor issues abound in the context of generative AI, but fan labor issues make me particularly angry.
🔗 linkblog: Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning | Punya Mishra's Web
There are some helpful thoughts in here. I think most of my concerns about generative AI are less about the technology itself and more about the corporate interest in and control of it.
🔗 linkblog: Evolution journal editors resign en masse
More suckiness in the world of academic publishing.
🔗 linkblog: TCL TVs will use films made with generative AI to push targeted ads
Well put:
TCL plans to get more into original content, fueled by a dystopian strategy that seems largely built around minimizing costs and pushing ads.
🔗 linkblog: I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
This is a solid article. I think the opening is reflective and that there’s an effort to be open-minded (more than I would be). It’s also amazing to me, though, how explicitly the goal here seems to be profiting from a surveillance-supported content mill.
trapped between generative AI and student surveillance
We’re getting to the end of the semester here at the University of Kentucky, which is my traditional time to get overly introspective about grading. There’s a lot on my mind at the end of this semester, but one thing that has popped into my head tonight and that I think will be quick to write about is a dilemma I’m facing this semester, when I’ve had more suspicions about student use of generative AI than in any previous semester. By way of context, my class policy is to: 1) discourage student use of generative AI, but 2) begrudgingly allow students to use it, and 3) require that they disclose its use.
🔗 linkblog: Bluesky, AI, and the battle for consent on the open web
Lots of interesting reflections here.
🔗 linkblog: Inside Bluesky’s big growth surge
Lots of interesting stuff in here, including the difficulty of content moderation, and yet another way that generative AI is screwing everything up.
🔗 linkblog: More academic publishers are doing AI deals
I keep thinking about how publishers’ exploitation of academic labor parallels AI companies’ exploitation of everyone’s labor, and stories like this just make the parallel clearer.
🔗 linkblog: How Memphis became a battleground over Elon Musk’s xAI supercomputer
Who benefits from AI? Who doesn’t?
🔗 linkblog: AI Checkers Forcing Kids To Write Like A Robot To Avoid Being Called A Robot
I am way more pessimistic about AI than Masnick is, but we agree on this sort of thing. Algorithmic surveillance is no more appropriate as a response to AI concerns than it is as a response to cheating concerns.
generative AI and the Honorable Harvest
I come from settler colonial stock and, more specifically, from a religious tradition that was (and still is!) pretty keen on imposing a particular identity on Indigenous peoples. I am the kind of person who really ought to be reading more Indigenous perspectives, but I’m also cautious about promoting those perspectives in my writing, lest I rely on a superficial, misguided understanding and then pat myself on the back for the great job I’m doing.
🔗 linkblog: CAPTCHAs Becoming Useless as AI Gets Smarter, Scientists Warn
One thing this article misses is how often CAPTCHA has been used to train AI. It’s always been playing both sides against each other.
🔗 linkblog: Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’
What reckless hubris. As I wrote earlier today, I’m in favor of more liberal IP law, but not so that businesses can swallow up content to profit from it.
🔗 linkblog: AI brings soaring emissions for Google and Microsoft, a major contributor to climate change
This sucks so much—and encapsulates our world’s obsession with financial success over environmental health.
🔗 linkblog: AI means Google's greenhouse gas emissions up 48% in 5 years
If AI is indeed going to help us reduce emissions, it seems to me that those reductions will come from targeted scientific and industrial uses of AI, not from shoving AI into a load of commercial products. Are these commercial companies using AI to figure out how to reduce emissions? If not (and maybe even if so), it seems disingenuous to express optimism that their increased energy use will be magically cancelled out by someone else.
🔗 linkblog: ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It
This is a darker version of some of the thoughts I had when I first heard about the “PhD comparison.”
Before you click through to the article, I also want to use this short post to complain that I don’t think “intelligence” is a thing, and that PhDs certainly wouldn’t be a measure of it if it were.
🔗 linkblog: On What We Lose: Chai, AI and Nostalgia | Punya Mishra's Web
I appreciate Punya’s essay here. I’m very grumpy about generative AI, but that doesn’t change the fact that some grumpiness has more to do with moral panic than with a reasoned response. Even so, that doesn’t mean there isn’t room for the kind of careful nostalgia that Punya is sharing here.
🔗 linkblog: AI Detectors Get It Wrong. Writers Are Being Fired Anyway
Generative AI suuuucks, but AI detection software may suck even more.
🔗 linkblog: Apple’s new custom emoji come with climate costs
I am very grumpy about this. Also, the point of emoji is that they exist within Unicode, yeah? So these aren’t really emoji in the way that makes those icons useful; they’re just a fun trick that’s helping advance the climate crisis.