Below are posts associated with the “generative AI” tag.
🔗 linkblog: Inside Bluesky’s big growth surge
Lots of interesting stuff in here, including the difficulty of content moderation, and yet another way that generative AI is screwing everything up.
🔗 linkblog: More academic publishers are doing AI deals
I keep thinking about how publishers’ exploitation of academic labor resembles AI companies’ exploitation of everyone’s labor, and stories like this just make it more clear.
🔗 linkblog: How Memphis became a battleground over Elon Musk’s xAI supercomputer
Who benefits from AI? Who doesn’t?
🔗 linkblog: AI Checkers Forcing Kids To Write Like A Robot To Avoid Being Called A Robot
I am way more pessimistic about AI than Masnick is, but we agree on this sort of thing. Algorithmic surveillance is no more appropriate in response to AI concerns than it is to cheating concerns.
generative AI and the Honorable Harvest
I come from settler colonial stock and, more specifically, from a religious tradition that was (and still is!) pretty keen on imposing a particular identity on Indigenous peoples. I am the kind of person who really ought to be reading more Indigenous perspectives, but I’m also cautious about promoting those perspectives in my writing, lest I rely on a superficial, misguided understanding and then pat myself on the back for the great job I’m doing.
🔗 linkblog: CAPTCHAs Becoming Useless as AI Gets Smarter, Scientists Warn
One thing this article misses is how often CAPTCHA has been used to train AI. It’s always been playing both sides against each other.
🔗 linkblog: Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’
What reckless hubris. As I wrote earlier today, I’m in favor of more liberal IP law, but not so that businesses can swallow up content to profit from it.
🔗 linkblog: AI brings soaring emissions for Google and Microsoft, a major contributor to climate change
This sucks so much—and encapsulates our world’s obsession with financial success over environmental health.
🔗 linkblog: AI means Google's greenhouse gas emissions up 48% in 5 years
If AI is indeed going to help us reduce emissions, it seems to me that that will be the product of targeted scientific and industrial use of AI, not shoving AI into a load of commercial products. Are these commercial companies using AI to figure out how to reduce emissions? If not (and maybe even if so), it seems disingenuous to express optimism that their increased energy use will be magically cancelled out by someone else.
🔗 linkblog: ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It
This is a darker version of some of the thoughts I had when I first heard about the “PhD comparison.”
Before you click through to the article, I also want to use this short post as a complaint that I don’t think “intelligence” is a thing—and that PhDs certainly wouldn’t be a measure of it if it were.
🔗 linkblog: On What We Lose: Chai, AI and Nostalgia | Punya Mishra's Web
I appreciate Punya’s essay here. I’m very grumpy about generative AI, but that doesn’t change the fact that some grumpiness has more to do with moral panic than a reasoned response—but THAT doesn’t mean that there isn’t room for some of this kind of careful nostalgia that Punya is sharing.
🔗 linkblog: AI Detectors Get It Wrong. Writers Are Being Fired Anyway
Generative AI suuuucks, but AI detection software may suck even more.
🔗 linkblog: Apple’s new custom emoji come with climate costs
I am very grumpy about this. Also, the point of emoji is that they exist within Unicode, yeah? So these aren’t really emoji in the way that those icons are useful—they’re just a fun trick that’s helping advance the climate crisis.
🔗 linkblog: Apple WWDC 2024: the 13 biggest announcements
I’ve been feeling for a while like I need to move away from Apple eventually, but I’m so entangled in the ecosystem that I’m dragging my feet on it. Seeing the company drink the AI Kool-Aid is definitely accelerating my plans—and will even more so if there’s no easy way to turn these features off.
🔗 linkblog: Journalists “deeply troubled” by OpenAI’s content deals with Vox, The Atlantic
In a roundabout way, I think this helps demonstrate why scraping data for generative AI isn’t a question of copyright. Even when there is a legal agreement, it can still be exploitative—it’s a question of digital labor.
🔗 linkblog: Decentralized Systems Will Be Necessary To Stop Google From Putting The Web Into Managed Decline
Some good thoughts here by Masnick.
🔗 linkblog: OpenAI loses its voice
Look, it shouldn’t take this story for people to realize that OpenAI exploits others’ contributions to make its products, but if it does the trick, I’ll take it. (And this is admittedly creepier than its base-level exploitation.)
🔗 linkblog: Pluralistic: You were promised a jetpack by liars (17 May 2024) – Pluralistic: Daily links from Cory Doctorow
Compelling essay about vain hopes for the future.
🔗 linkblog: Microsoft’s AI obsession is jeopardizing its climate ambitions
Such a depressing article.
🔗 linkblog: Stack Overflow users sabotage their posts after OpenAI deal
Some better, broader coverage of complaints I made in a blog post earlier this week.
🔗 linkblog: OpenAI, Mass Scraper of Copyrighted Work, Claims Copyright Over Subreddit's Logo
I don’t think intellectual property is the way to fight back against generative AI, but it is wildly out of line for a company that profits off using others’ intellectual property to be this petty.