Below are posts associated with the “link” type.
🔗 linkblog: Pluralistic: Become unoptimizable (20 Aug 2025) – Pluralistic: Daily links from Cory Doctorow
Some Ellulian vibes in here.
🔗 linkblog: How Tea’s Founder Convinced Millions of Women to Spill Their Secrets, Then Exposed Them to the World
What a wild, depressing story. I feel like I ought to use this to teach the concept of platforms to my students—it neatly sums up the intervention in normal human activity by someone who thinks they have a buck to make.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I've been complaining for years that computer science education stops at the nobility of those goals and doesn't ask about the deeper motivations behind those initiatives. So it is with AI: a more diverse field that's more available to all is better than what we have, but we also have to ask whether AI education is actually a social good.
🔗 linkblog: What's behind the Trump administration's immigration memes?
There's always been a dark side to internet culture, but I don't think my earlier work was naïve in arguing for recognizing its value. Still, it's important as a scholar to call out the ugliness that's happening here.
🔗 linkblog: The Fairphone (Gen. 6) Is the Antidote to Yearly Phone Upgrades
Been thinking for years that I should own a Fairphone one day.
🔗 linkblog: Hommage à Mix & Remix à la station de métro lausannoise de Bessières
I don't know Mix & Remix all that well, but I have a few books he illustrated, and I seem to remember a mechanic in Renens whose logo he drew.
🔗 linkblog: Google Scholar Is Doomed
Oof, hadn’t thought of this, but as much as I’d like to further reduce Google dependence, this would really hurt.
🔗 linkblog: UK government suggests deleting files to save water
I genuinely think it's useful to remember that non-AI datacenters are also contributing to the climate crisis, but that doesn't let AI off the hook. It's like saying, "sure, we're spending far beyond our means, but have you considered that we're already in debt?"
🔗 linkblog: The Trump Administration Is Using Memes to Turn Mass Deportation Into One Big Joke
Bookmarking this so I can point to it if anyone asks why I’ve shifted my research from ed tech to right-wing Mormonism.
🔗 linkblog: Reddit will block the Internet Archive
This sucks. I don’t have a lot of sympathy for Reddit here, which has shown over the past few years a dedicated interest in monetizing its userbase.
🔗 linkblog: Phénomène mondial sur les réseaux sociaux, que sont les Italian Brainrots, ces personnages absurdes générés par IA ?
My career is split between valuing digital practices perceived as unimportant and critiquing the technologies that enable those practices. I admit I feel genuinely torn about this example!
🔗 linkblog: Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online
Look, nothing really new in here (Clearview should have made parents rethink the same ages ago, etc.), but yes, AI should get parents to be a hell of a lot more careful with posting pictures of kids.
🔗 linkblog: Sex is getting scrubbed from the internet, but a billionaire can sell you AI nudes
I hadn’t thought about these two trends (cracking down on adult content, and Grok being Grok) being in tension with each other, and I appreciate what this article does to make that clear.
🔗 linkblog: Grok's 'Spicy' Mode Makes NSFW Celebrity Deepfakes of Women (But Not Men)
Unsurprising but disappointing.
🔗 linkblog: Defense Secretary Pete Hegseth reposts video of pastors saying women shouldn't vote
Don’t even know what to comment here. How is this the world we’re living in?
🔗 linkblog: AI industry horrified to face largest copyright class action ever certified
Again, I’m not sure copyright is the way to go in fighting immoral generative AI companies (that the ALA and EFF are on Anthropic’s side seems important to me), but “we have to be able to do this to be successful” still strikes me as such a hollow, self-serving argument.
🔗 linkblog: New executive order puts all grants under political control
Here’s Jacques Ellul on state funding of research:
The state demands that anything scientific enter into the line of "normal" development, not only for the sake of the public interest but also because of its will to power. We have previously noted that this will to power has found in technique an extraordinary means of expression. The state quickly comes to demand that technique keep its promises and be an effective servant of state power.
🔗 linkblog: Nouveau modèle d'IA: ChatGPT-5: «C’est comme parler à un expert de niveau doctorat»
In my view, "doctorate-level" expertise is mostly tied to the process of coming to know, not to the knowledge itself, and generative AI doesn't respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: OpenAI to Open-Source Some of the A.I. Systems Behind ChatGPT
There are, of course, social benefits to open sourcing powerful tools like these. However, I'm reminded of "open source" Android, which is a deliberate business decision that benefits Google—and of how many NCII-generating tools are based on open weight/open source models. gift link
🔗 linkblog: Grok’s ‘spicy’ video setting instantly made me Taylor Swift nude deepfakes
I’m glad that someone is doing this white hat work, but I hate that we live in a world where someone has to.
🔗 linkblog: Dumping Google’s enshittified search for Kagi
This article is one more push in the direction of my finally subscribing to Kagi.
🔗 linkblog: Statement Regarding 80 Years Since the First Use of Nuclear Weapons | News | Community of Christ
Glad to belong to a church that takes positions on moral issues like this one.
🔗 linkblog: Quand la presse glorifiait la bombe atomique après l'attaque sur Hiroshima
I didn't know about this part of history.
🔗 linkblog: Substack’s Algorithm Accidentally Reveals What We Already Knew: It’s The Nazi Bar Now
Not impressed with Substack, and Masnick does a good job of explaining why.
🔗 linkblog: SCOOP: Substack sent a push alert promoting a Nazi blog
You don’t have to use Substack to have a newsletter.
🔗 linkblog: Un adolescent espagnol accusé de créer des images dénudées de ses camarades de classe par intelligence artificielle et de les vendre
What a rotten world awaits our children.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I've been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I've invoked the generation of scientific knowledge as a counterexample to the "just the facts, ma'am" naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Google’s AI Is Destroying Search, the Internet, and Your Brain
“Traffic apocalypse” is a scary idea—not only for the threat it poses to smaller websites but also for the way it could further cement the influence of a few big companies in shaping the world.
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Une lycéenne accusée d'avoir triché avec une IA au baccalauréat de philosophie obtient finalement son diplôme
I don't like AI's presence in schools at all, but I also find it troubling when students are wrongly penalized.
🔗 linkblog: ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
You know, I skipped over this story when it came out in a couple of other outlets, but seeing the headline again here got me thinking about what a good (and scary) example this is of LLMs shaping (rather than reflecting) reality.