Below are posts associated with the “generative AI” tag.
🔗 linkblog: Kentucky could be on the eve of a data center boom. But in Mason County details are sketchy. • Kentucky Lantern
Helpful reminder that data center problems are not just hypothetical—they’re potentially local.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I’ve been complaining for years about computer science ed: we stop at the nobility of those goals and don’t ask ourselves about the deeper motivations behind those initiatives. So it is with AI: a more diverse field that’s more available to everyone is better than what we have, but we also have to ask whether AI education is actually a social good.
why I think labor, not copyright, is the foundational problem with AI scrapers
This morning on Bluesky, I saw some posts about a class action lawsuit against Anthropic for their use of pirated, copyrighted materials in training their generative AI models. One of the sources of these copyrighted materials was the LibGen database, which I took a peek at nearly six months ago to confirm what I was already sure was true: that my scientific writing was also collected as training material by companies like Anthropic or Meta. I don’t love that big tech companies are profiting off of my work in this way, and I’m sympathetic to the authors who are taking legal action against Anthropic. However, as I’ve written repeatedly over the past few years (you can find some of those thoughts—and others—by scrolling through here), I don’t know that copyright is the right way of responding to this kind of abuse.
🔗 linkblog: UK government suggests deleting files to save water
I genuinely think it’s useful to remember that non-AI data centers are also contributing to the climate crisis, but that doesn’t let AI off the hook. It’s like saying “sure, we’re spending far beyond our means, but have you considered that we’re already in debt?”
🔗 linkblog: Reddit will block the Internet Archive
This sucks. I don’t have a lot of sympathy for Reddit here, which has shown over the past few years a dedicated interest in monetizing its userbase.
🔗 linkblog: Phénomène mondial sur les réseaux sociaux, que sont les Italian Brainrots, ces personnages absurdes générés par IA ?
My career is split between valuing digital practices perceived as unimportant and critiquing the technologies that enable those practices. I admit I feel genuinely torn about this example!
🔗 linkblog: Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online
Look, nothing really new in here (Clearview should have made parents rethink the same ages ago, etc.), but yes, AI should get parents to be a hell of a lot more careful with posting pictures of kids.
🔗 linkblog: Sex is getting scrubbed from the internet, but a billionaire can sell you AI nudes
I hadn’t thought about these two trends (cracking down on adult content, and Grok being Grok) being in tension with each other, and I appreciate what this article does to make that clear.
🔗 linkblog: Grok's 'Spicy' Mode Makes NSFW Celebrity Deepfakes of Women (But Not Men)
Unsurprising but disappointing.
🔗 linkblog: AI industry horrified to face largest copyright class action ever certified
Again, I’m not sure copyright is the way to go in fighting immoral generative AI companies (that the ALA and EFF are on Anthropic’s side seems important to me), but “we have to be able to do this to be successful” still strikes me as such a hollow, self-serving argument.
🔗 linkblog: Nouveau modèle d'IA: ChatGPT-5: «C’est comme parler à un expert de niveau doctorat»
In my view, “PhD-level” expertise is mostly about the process of coming to know, not the knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these ones. However, I’m reminded of “open source” Android, which is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. gift link
🔗 linkblog: Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
I’m glad that someone is doing this white hat work, but I hate that we live in a world where someone has to.
🔗 linkblog: Un adolescent espagnol accusé de créer des images dénudées de ses camarades de classe par intelligence artificielle et de les vendre
What a rotten world awaits our children.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. Using generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Une lycéenne accusée d'avoir triché avec une IA au baccalauréat de philosophie obtient finalement son diplôme
I don’t like the presence of AI in schools at all, but I also find it troubling when students are wrongly penalized.
🔗 linkblog: ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
You know, I skipped over this story when it came out in a couple of other outlets, but seeing the headline again here got me thinking about how good/scary of an example this is of LLMs shaping (rather than reflecting) reality.
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.