Below are posts associated with the “link” type.
🔗 linkblog: SCOOP: Substack sent a push alert promoting a Nazi blog
You don’t have to use Substack to have a newsletter.
🔗 linkblog: Un adolescent espagnol accusé de créer des images dénudées de ses camarades de classe par intelligence artificielle et de les vendre
What a rotten world awaits our children.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve evoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Google’s AI Is Destroying Search, the Internet, and Your Brain
“Traffic apocalypse” is a scary idea—not only for the threat it poses to smaller websites but also for the way it could further cement the influence of a few big companies in shaping the world.
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Une lycéenne accusée d'avoir triché avec une IA au baccalauréat de philosophie obtient finalement son diplôme
I don’t like AI’s presence in schools at all, but I also find the wrongful penalization of students troubling.
🔗 linkblog: ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
You know, I skipped over this story when it came out in a couple of other outlets, but seeing the headline again here got me thinking about how good/scary of an example this is of LLMs shaping (rather than reflecting) reality.
🔗 linkblog: This ‘violently racist’ hacker claims to be the source of The New York Times’ Mamdani scoop
Some wild details in here—all of which seem more important to me than the application details.
🔗 linkblog: Inside a Gaza hospital: A British surgeon on what he's witnessing firsthand
Some horrifying details in here.
🔗 linkblog: Mark Zuckerberg Is Expanding His Secretive Hawaii Compound. Part of It Sits Atop a Burial Ground
Personally, I’m not sure why anyone needs to be this rich.
🔗 linkblog: La liberté de la presse toujours plus attaquée par Donald Trump
I find it useful to read foreign media to see how others are reacting to our madness.
🔗 linkblog: La Suisse, un pays peuplé d'irréductibles Helvètes qui résistent encore et toujours à Amazon
Long live Switzerland and its resistance to Amazon.
🔗 linkblog: The Astronomer CEO's Coldplay Concert Fiasco Is Emblematic of Our Social Media Surveillance Dystopia
Good article. I’m not here to defend CEOs who have affairs with executives in their companies, but the tech ecosystem that allowed for this will do more harm to everyday people than it will ever hold CEOs to account.
🔗 linkblog: Will AI end cheap flights? Critics attack Delta’s “predatory” AI pricing.
Yes, but AI will also save us time writing emails, so this seems like a fair tradeoff.
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.
🔗 linkblog: 'Arbres', 'noix', 'tout le monde sait'... Ce vocabulaire détourné par des internautes pour diffuser des idées d'extrême droite sur les réseaux sociaux
Decoding online communities has always been an important genre of research, but it’s especially important in cases like this one.
🔗 linkblog: 'Doing their own thing': KY legislators hear about the state of AI use and guidance in schools
I can see the value in some state guidelines, but I suspect they would be more permissive than what I want for my classroom. I hope I’ll still have the chance to establish restrictions as I see fit.
🔗 linkblog: I Tried Grok’s Built-In Anime Companion and It Called Me a Twat
Musk leans into the bro in tech bro.
🔗 linkblog: Kentucky Republican lawmaker questions gender and women’s studies course at UK • Kentucky Lantern
So far, we’ve been told that the General Assembly’s war on DEI doesn’t affect our classroom teaching. Is this legislative bluster, or do we have worries on the horizon?
🔗 linkblog: Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People
Gonna keep posting (almost) every article I read on NCII and generative AI.
🔗 linkblog: a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise
This line (from a study quoted in the article) stood out:
The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes.
Open sourcing generative AI solves some problems but creates others.
🔗 linkblog: AI 'Nudify' Websites Are Raking in Millions of Dollars
Oh, look, it’s two of my least favorite things about generative AI (NCII and raking in money without concern for ethics) IN THE SAME STORY.
🔗 linkblog: Mike Lee Can’t Stop Throwing Social Media Grenades. His Church Isn’t Happy.
Good read and worth bookmarking for later.
🔗 linkblog: Grok searches for Elon Musk’s opinion before answering tough questions
Look, I really will stop posting about Grok and epistemology, but the news stories keep coming.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge.
🔗 linkblog: Trump Mobile Keeps Charging My Credit Card And I Have No Idea Why
Love—don’t love—how all the constitutional and democratic dangers of our time are so closely married to low-grade scammishness.
🔗 linkblog: Trump Seeks to Cut Basic Scientific Research by Roughly One-Third, Report Shows
Reading this through an Ellulian lens is interesting. In the 1950s, he was expressing concern about the valuing of (applied) technique over (basic) science. In this article, though, it’s clear how often basic science is still described and defended in applied/technical terms: pushing the boundaries of knowledge seems to be valuable only if it “sow[s] practical spinoffs and breakthroughs” or helps the U.S. in its geopolitical competition.
Gift link.
🔗 linkblog: A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Surely this is a reasonable price to pay for the Nazi-praising Grok to “discover new physics” within the next year, as Elon promised last night.
This kind of thing is why I hate “the genie is out of the bottle” arguments. I can’t help but hear them as “yes, people are going to create more CSAM, but all we can do is teach people to use these tools more responsibly.”
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that—Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there is a self-evident truth that an LLM can deliver in a straightforward way (that is, without all that mucking about behind the scenes).
🔗 linkblog: The New York Times Runs Interference For A Racist To Manufacture A Fake Scandal About Zohran Mamdani
There’s a genre of news story that I actively avoid following the discourse on but end up reading about once Mike Masnick writes on it, and then I get angry with everyone else. This fits nicely in that genre.
🔗 linkblog: What is AGI? Nobody agrees, and it’s tearing Microsoft and OpenAI apart.
Karen Hao’s Empire of AI really emphasized for me how much stock is being put in AGI—especially as a motivator for AI companies. I am fine with concepts being hard to define, but I do think things get tricky when you can’t articulate how you’ll know when you’ve met the goal that serves as your raison d’être.
🔗 linkblog: Grok praises Hitler, gives credit to Musk for removing 'woke filters'
Disgusting and deliberate.