Below are posts associated with the “generative AI” tag.
thoughts on academic labor, digital labor, intellectual property, and generative AI
Thanks to this article from The Atlantic that I saw on Bluesky, I’ve been able to confirm something that I’ve long assumed to be the case: that my creative and scholarly work is being used to train generative AI tools. More specifically, I used the searchable database embedded in the article to search for myself and find that at least eight of my articles (plus two corrections) are available in the LibGen pirate library—which means that they were almost certainly used by Meta to train their Llama LLM.
policy and the prophetic voice: generative AI and deepfake nudes
This is a mess of a post blending thoughts on tech policy with religious ideas and lacking the kind of obvious throughline or structure that I’d like it to have. It’s also been in my head for a couple of weeks, and it’s time to release it into the world rather than wait for it to be something better. So, here it is:
I am frustrated with generative AI technology for many reasons, but one of the things at the top of that list is the knowledge that today’s kids are growing up in a world where it is possible—even likely—that their middle and high school experiences are going to involve someone using generative AI tools to produce deepfake nudes (or other non-consensual intimate imagery—NCII) of them. See, for example, this horrifying story from the New York Times last April.
🔗 linkblog: AI Slop Is a Brute Force Attack on the Algorithms That Control Reality
The “brute force” metaphor is helpful here, and the article also draws attention to the vulnerability of algorithmic media to generative AI brute-forcing.
🔗 linkblog: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
I believe that scraping the internet to profit off of generative AI is ethically problematic, BUT I concede that it should be fair use, BUT this is still a soulless and terrible argument.
🔗 linkblog: AI-Generated Slop Is Already In Your Public Library
I get a lot of reading done through hoopla, but this kind of story is starting to sour me on the platform.
Jacques Ellul’s technique and generative AI
Throughout my career, I’ve been a data-first researcher, and theory has always been one of my weak areas. This is not to say that I dismiss the importance of theory: I appreciate danah boyd and Kate Crawford’s critique of Chris Anderson’s “the numbers speak for themselves” in their 2012 paper Critical Questions for Big Data as much as I appreciate Catherine D’Ignazio and Lauren Klein’s similar critique in their book Data Feminism. It’s just that while I agree that theory is important, I’ve never been well-versed in it—except for the loose theoretical framework of sociocultural learning, multiple literacies, and social communities and spaces that I bring to much of my work (even work that has gone beyond educational technology research).
🔗 linkblog: OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us
Yeah, it’s really hard to have any sympathy here at all.
🔗 linkblog: AI Dungeon Master experiment exposes the vulnerability of Critical Role’s fandom
Digital labor issues abound in the context of generative AI, but fan labor issues make me particularly angry.
🔗 linkblog: Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning | Punya Mishra’s Web
There are some helpful thoughts in here. I think most of my concerns about generative AI are less about the technology itself and more about the corporate interest in and control of it.
🔗 linkblog: Evolution journal editors resign en masse
More suckiness in the world of academic publishing.
🔗 linkblog: TCL TVs will use films made with generative AI to push targeted ads
Well put:
TCL plans to get more into original content, fueled by a dystopian strategy that seems largely built around minimizing costs and pushing ads.
🔗 linkblog: I Went to the Premiere of the First Commercially Streaming AI-Generated Movies
This is a solid article. I think the opening is reflective, and there’s an effort to be open-minded (more than I would be). It’s also amazing to me, though, how explicitly the goal here seems to be profiting from a surveillance-supported content mill.
trapped between generative AI and student surveillance
We’re getting to the end of the semester here at the University of Kentucky, which is my traditional time to get overly introspective about grading. There’s a lot on my mind at the end of this semester, but one thing that has popped into my head tonight and that I think will be quick to write about is a dilemma I’m facing this semester, when I’ve had more suspicions about student use of generative AI than in any previous semester. By way of context, my class policy is to: 1) discourage student use of generative AI, but 2) begrudgingly allow students to use it, but 3) require that they disclose its use.
🔗 linkblog: Bluesky, AI, and the battle for consent on the open web
Lots of interesting reflections here.
🔗 linkblog: Inside Bluesky’s big growth surge
Lots of interesting stuff in here, including the difficulty of content moderation, and yet another way that generative AI is screwing everything up.
🔗 linkblog: More academic publishers are doing AI deals
I keep thinking about how publishers’ exploitation of academic labor resembles AI companies’ exploitation of everyone’s labor, and stories like this just make the parallel clearer.