Below you will find pages that use the taxonomy term “generative AI.”
🔗 linkblog: my thoughts on 'Tumblr and Wordpress to Sell Users’ Data to Train AI Tools'
- kudos: Aw, geez, and I liked Automattic, too. I get that financing Tumblr is hard, but why this? link to “Tumblr and Wordpress to Sell Users’ Data to Train AI Tools”
- kudos: This week has enough writing (and deadlines!) that the utilitarian appeal of ChatGPT is finally clear to me; and yet, it’s also so much clearer that I would rather do fewer things well and on my own.
🔗 linkblog: my thoughts on 'Reddit: 'We Are in the Early Stages of Monetizing Our User Base''
- kudos: There are few phrases grosser than “monetizing our user base.” link to “Reddit: ‘We Are in the Early Stages of Monetizing Our User Base’”
🔗 linkblog: my thoughts on 'Reddit Signs $60 Million Deal to Scrape Your Online Community for AI Parts: Report'
- kudos: Look, I’ve never been really into Reddit, but I’m still really disappointed in the company. This sucks. link to “Reddit Signs $60 Million Deal to Scrape Your Online Community for AI Parts: Report”
🔗 linkblog: my thoughts on 'Reddit sells training data to unnamed AI company ahead of IPO'
- kudos: C’mon, Reddit. link to “Reddit sells training data to unnamed AI company ahead of IPO”
🔗 linkblog: my thoughts on 'University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI'
- kudos: This is straight-up awful. Shame on the university for doing this. link to “University of Michigan Sells Recordings of Study Groups and Office Hours to Train AI”
🔗 linkblog: my thoughts on 'Future data centres may have built-in nuclear reactors'
- kudos: You know, instead of assuming that we must grow AI data centers and asking how we should power them, we could look at the costs in terms of power and ask whether we should grow AI data centers. link to “Future data centres may have built-in nuclear reactors”
🔗 linkblog: my thoughts on 'Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks'
- kudos: Hmm. Unsurprising but all the more frustrating for it. link to “Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks”
🔗 linkblog: my thoughts on 'The rise and fall of robots.txt'
- kudos: Fascinating read on web crawlers and robots.txt. link to “The rise and fall of robots.txt”
🔗 linkblog: my thoughts on 'Pluralistic: How I got scammed (05 Feb 2024) – Pluralistic: Daily links from Cory Doctorow'
- kudos: Fascinating post. Grateful for Doctorow’s honesty about being scammed and interested in the idea that lowering the quality of services through AI trains us to accept fraud. link to “Pluralistic: How I got scammed (05 Feb 2024) – Pluralistic: Daily links from Cory Doctorow”
🔗 linkblog: my thoughts on 'The Absurd One-Sidedness of the Ethics of AI Debate: A rant | Punya Mishra's Web'
- kudos: Punya is a bit warmer on AI than I am, so I wasn’t sure what I would be reading based on the title, but this is one of the best things I’ve read on generative AI in education. These companies have so much power and could use a little more Parkerian responsibility. link to “The Absurd One-Sidedness of the Ethics of AI Debate: A rant | Punya Mishra’s Web”
🔗 linkblog: my thoughts on 'Generative AI course statement – George Veletsianos, PhD'
- kudos: George’s example statement is one worth bookmarking. link to “Generative AI course statement – George Veletsianos, PhD”
🔗 linkblog: my thoughts on 'The Taylor Swift deepfakes are a warning'
- kudos: Good thoughts from Newton here. “Who could have predicted this?” indeed. link to “The Taylor Swift deepfakes are a warning”
🔗 linkblog: my thoughts on 'X is being flooded with graphic Taylor Swift AI images - The Verge'
- kudos: I don’t get what’s missing from a world without generative AI—and examples like this don’t make me any more convinced. link to “X is being flooded with graphic Taylor Swift AI images - The Verge”
🔗 linkblog: my thoughts on 'OpenAI went back on a promise to make key documents public | Ars Technica'
- kudos: If OpenAI is going to be an influential company, it would be nice for it to be more transparent. link to “OpenAI went back on a promise to make key documents public | Ars Technica”
🔗 linkblog: my thoughts on 'Cat and Girl'
- kudos: Generative AI has a digital labor issue, and we aren’t paying enough attention to it. link to “Cat and Girl”
🔗 linkblog: my thoughts on 'AI to hit 40% of jobs and worsen inequality, IMF says'
- kudos: Even if AI would be beneficial for humanity in the aggregate, it’s important to ask how that benefit would be distributed. link to “AI to hit 40% of jobs and worsen inequality, IMF says”
🔗 linkblog: my thoughts on 'Plagiarism is the latest weapon in the culture wars. But what even is it? - Vox'
- kudos: Lots of interesting comments in this article. I haven’t been following this story as closely as I should, but it—and articles like this—are making me realize that I need to think harder about plagiarism: what it is and how I should respond to it. link to “Plagiarism is the latest weapon in the culture wars. But what even is it? - Vox”
🔗 linkblog: my thoughts on 'I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy - The Verge'
- kudos: Yeah, but don’t worry, this is definitely the only way that generative AI will be used to overwhelm us with useless content. link to “I’m sorry, but I cannot fulfill this request as it goes against OpenAI use policy - The Verge”
- kudos: I have lots of concerns about LLM training, but I think it’s better to think of the issue in terms of digital labor, not copyright. My blog is licensed for reuse, but that doesn’t mean it’s any less exploitative for someone to scrape it all to develop software that will make them rich off my work.
🔗 linkblog: my thoughts on 'Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos | Ars Technica'
- kudos: The phrase popped into my head before the article could even get to it: We are the product. link to “Meta’s new AI image generator was trained on 1.1 billion Instagram and Facebook photos | Ars Technica”
🔗 linkblog: my thoughts on 'AI chatbots can infer an alarming amount of info about you from your responses | Ars Technica'
- kudos: Welp, this is scary. link to “AI chatbots can infer an alarming amount of info about you from your responses | Ars Technica”
🔗 linkblog: my thoughts on 'Les deepfakes pornographiques comme nouvelle arme de harcèlement scolaire - rts.ch - Technologies'
- kudos: This story is quite worrying. link to “Les deepfakes pornographiques comme nouvelle arme de harcèlement scolaire - rts.ch - Technologies”
🔗 linkblog: my thoughts on '4chan users manipulate AI tools to unleash torrent of racist images | Ars Technica'
- kudos: Content moderation is a good thing. link to “4chan users manipulate AI tools to unleash torrent of racist images | Ars Technica”
- kudos: Another set of proofs, another set of complaints about a copyeditor making changes to my writing in ways that distort my meaning. If I get grumpy about a human doing my writing for me, why would I ever want generative AI to do it?
🔗 linkblog: my thoughts on 'Terraforming Mars team defends AI use as Kickstarter hits $1.3 million - Polygon'
- kudos: This is an interesting interview. I don’t think I’m sold on the defense—if anyone can afford to pay artists, the team behind Terraforming Mars can—but I do see how there’s more nuance here than my gut reaction to the headline. Still not pleased, though. link to ‘Terraforming Mars team defends AI use as Kickstarter hits $1.3 million - Polygon’
🔗 linkblog: my thoughts on 'https://pluralistic.net/2023/09/07/govern-yourself-accordingly/'
- kudos: Appreciate Doctorow’s thinking here. link to ‘https://pluralistic.net/2023/09/07/govern-yourself-accordingly/’
- kudos: College conversation about investment in GPT-type tech to support research is continuing. I think it’s… fitting that the survey being circulated is clearly using Qualtrics’s auto-suggested Likert responses—and that the responses aren’t quite right for the questions being asked.
- kudos: My college is floating the idea of investing in GPT-type technology to help researchers code text data. This reminds me of my longtime belief that the distinction between “qual” and “quant” is often less important than the distinction between different research paradigms.
🔗 linkblog: my thoughts on 'Gizmodo’s owner shuts down Spanish language site in favor of AI translations - The Verge'
- kudos: Gizmodo’s owner seems way too optimistic about AI. link to ‘Gizmodo’s owner shuts down Spanish language site in favor of AI translations - The Verge’
🔗 linkblog: my thoughts on 'You Are Not Responsible for Your Own Online Privacy | WIRED'
- kudos: Some important—if disheartening—observations from Marwick. link to ‘You Are Not Responsible for Your Own Online Privacy | WIRED’
🔗 linkblog: my thoughts on 'Scammers Used ChatGPT to Unleash a Crypto Botnet on X | WIRED'
- kudos: Three cheers for ChatGPT or whatever. link to ‘Scammers Used ChatGPT to Unleash a Crypto Botnet on X | WIRED’
🔗 linkblog: my thoughts on 'Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect | WIRED'
- kudos: Good article on a worrying trend. It’s things like this that make me skeptical of arguments that generative AI could have real benefit when used properly. It’s not that I disagree—it’s that in the aggregate, I’m not sure the proper uses will outweigh the problems. link to ‘Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect | WIRED’
🔗 linkblog: my thoughts on 'An Iowa school district is using ChatGPT to decide which books to ban - The Verge'
- kudos: Even if ChatGPT could be trusted to do this task, “let’s remove books from libraries with less work” is a good example of how efficiency isn’t always a good thing. link to ‘An Iowa school district is using ChatGPT to decide which books to ban - The Verge’
- kudos: I get why folks in ed compare ChatGPT to Wikipedia, but there are important differences. Wikipedia is entirely non-profit, lays bare its knowledge generation process, can be fixed on the fly, and can’t actively generate problematic content. It’s not just about reliability.
🔗 linkblog: my thoughts on 'AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian'
- kudos: So many important points in this piece. link to ‘AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian’