Below are posts associated with the “generative AI” tag.
🔗 linkblog: Best printer 2025: just buy a Brother laser printer, the winner is clear, middle finger in the air
I didn’t need to read a printer recommendation article today, but I’m so glad I did. The rage about the world we live in is great.
🔗 linkblog: 'I Want to Make You Immortal:' How One Woman Confronted Her Deepfakes Harasser
Studio Ghibli pictures are neat (legitimately! it’s one of the first generative AI things that’s tempted me!), but these deepfakes are the price we pay for them, and I think that’s too high a price.
🔗 linkblog: How crawlers impact the operations of the Wikimedia projects
I think this is a good example of why digital labor is a particularly salient critique of generative AI. Yes, Wikimedia content is licensed, but not as strictly as copyrighted works. Yet ripping off their work is arguably worse than grabbing some copyrighted works.
🔗 linkblog: OpenAI's Studio Ghibli meme factory is an insult to art itself
I skipped over this article the first few times I saw it, but I think there’s some good stuff in here. Is defiling Ghibli the point?
🔗 linkblog: OpenAI's viral Studio Ghibli moment highlights AI copyright concerns | TechCrunch
Generative AI products make me mad, I don’t like them, and I’m not going to defend them. That said, if this gets framed as a copyright problem, is there any way to give Studio Ghibli (or Pixar or the Seuss estate) power to cry foul here that doesn’t also shut down fan art, parodies, and the like? I’m skeptical, and that’s why I think “labor” is the more productive—if more legally ambiguous—framing here.
thoughts on academic labor, digital labor, intellectual property, and generative AI
Thanks to this article from The Atlantic that I saw on Bluesky, I’ve been able to confirm something that I’ve long assumed to be the case: that my creative and scholarly work is being used to train generative AI tools. More specifically, I used the searchable database embedded in the article to search for myself and find that at least eight of my articles (plus two corrections) are available in the LibGen pirate library—which means that they were almost certainly used by Meta to train their Llama LLM.
policy and the prophetic voice: generative AI and deepfake nudes
This is a mess of a post blending thoughts on tech policy with religious ideas and lacking the kind of obvious throughline or structure that I’d like it to have. It’s also been in my head for a couple of weeks, and it’s time to release it into the world rather than wait for it to be something better. So, here it is:
I am frustrated with generative AI technology for many reasons, but one of the things at the top of that list is the knowledge that today’s kids are growing up in a world where it is possible—even likely—that their middle and high school experiences are going to involve someone using generative AI tools to produce deepfake nudes (or other non-consensual intimate imagery—NCII) of them.
🔗 linkblog: AI Slop Is a Brute Force Attack on the Algorithms That Control Reality
The “brute force” metaphor is helpful here, and the article also draws attention to the vulnerability of algorithmic media to generative AI brute forcing.
🔗 linkblog: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
I believe that scraping the internet to profit off of generative AI is ethically problematic BUT I concede that it should be fair use BUT this is still a soulless and terrible argument.
🔗 linkblog: AI-Generated Slop Is Already In Your Public Library
I get a lot of reading done through hoopla, but this kind of story is starting to sour me on the platform.
Jacques Ellul's technique and generative AI
Throughout my career, I’ve been a data-first researcher, and theory has always been one of my weak areas. This is not to say that I dismiss the importance of theory: I appreciate danah boyd and Kate Crawford’s critique of Chris Anderson’s “the numbers speak for themselves” in their 2012 paper Critical Questions for Big Data as much as I appreciate Catherine D’Ignazio and Lauren Klein’s similar critique in their book Data Feminism.
🔗 linkblog: OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us
Yeah, it’s really hard to have any sympathy here at all.
🔗 linkblog: AI Dungeon Master experiment exposes the vulnerability of Critical Role’s fandom
Digital labor issues abound in the context of generative AI, but fan labor issues make me particularly angry.
🔗 linkblog: Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning | Punya Mishra's Web
There are some helpful thoughts in here. I think most of my concerns about generative AI are less about the technology itself and more about the corporate interest in and control of it.
🔗 linkblog: Evolution journal editors resign en masse
More suckiness in the world of academic publishing.
🔗 linkblog: TCL TVs will use films made with generative AI to push targeted ads
Well put:
TCL plans to get more into original content, fueled by a dystopian strategy that seems largely built around minimizing costs and pushing ads.
🔗 linkblog: I Went to the Premiere of the First Commercially Streaming AI-Generated Movies'
This is a solid article. I think the opening is reflective, and there’s an effort to be open-minded (more than I would be). It’s also amazing to me, though, how explicitly the goal here seems to be profiting from a surveillance-supported content mill.
trapped between generative AI and student surveillance
We’re getting to the end of the semester here at the University of Kentucky, which is my traditional time to get overly introspective about grading. There’s a lot on my mind at the end of this semester, but one thing that has popped into my head tonight, and that I think will be quick to write about, is a dilemma I’m facing this semester, when I’ve faced more suspicions about student use of generative AI than in any previous semester.
🔗 linkblog: Bluesky, AI, and the battle for consent on the open web
Lots of interesting reflections here.
🔗 linkblog: Inside Bluesky’s big growth surge
Lots of interesting stuff in here, including the difficulty of content moderation, and yet another way that generative AI is screwing everything up.
🔗 linkblog: More academic publishers are doing AI deals
I keep thinking about how publishers’ exploitation of academic labor resembles AI companies’ exploitation of everyone’s labor, and stories like this just make it clearer.
🔗 linkblog: How Memphis became a battleground over Elon Musk’s xAI supercomputer
Who benefits from AI? Who doesn’t?
🔗 linkblog: AI Checkers Forcing Kids To Write Like A Robot To Avoid Being Called A Robot
I am way more pessimistic about AI than Masnick is, but we agree on this sort of thing. Algorithmic surveillance is no more appropriate in response to AI concerns than it is to cheating concerns.
generative AI and the Honorable Harvest
I come from settler colonial stock and, more specifically, from a religious tradition that was (and still is!) pretty keen on imposing a particular identity on Indigenous peoples. I am the kind of person who really ought to be reading more Indigenous perspectives, but I’m also cautious about promoting those perspectives in my writing, lest I rely on a superficial, misguided understanding and then pat myself on the back for the great job I’m doing.
🔗 linkblog: CAPTCHAs Becoming Useless as AI Gets Smarter, Scientists Warn
One thing this article misses is how often CAPTCHA has been used to train AI. It’s always been playing both sides against each other.
🔗 linkblog: Ex-Google CEO says successful AI startups can steal IP and hire lawyers to ‘clean up the mess’
What reckless hubris. As I wrote earlier today, I’m in favor of more liberal IP law, but not so that businesses can swallow up content to profit from it.