BA in French Teaching; PhD in Educational Technology; Associate Professor of ICT at the University of Kentucky School of Information Science. My CV is available here, you can browse my research here, and you can find my Google Scholar profile here.
Supported by digital methods, my research focuses on online social spaces, community practices within these spaces, and the influence of the platforms where they are found. My research is interdisciplinary, exploring spaces associated with teaching and learning, Mormonism, the far right, or even combinations of these themes.
You can subscribe to this content through this RSS feed or this Mastodon account. You can also subscribe to all of the content on this website through this RSS feed, this Bluesky account, or this newsletter.
I sometimes write in French! To only see the French content (which is also available below, alongside English content), please click on [fr] in the site header.
🔗 linkblog: New executive order puts all grants under political control
Here’s Jacques Ellul on state funding of research:
The state demands that anything scientific enter into the line of “normal” development, not only for the sake of the public interest but also because of its will to power. We have previously noted that this will to power has found in technique an extraordinary means of expression. The state quickly comes to demand that technique keep its promises and be an effective servant of state power. Everything not of direct interest to this drive for power appears valueless.
🔗 linkblog: Nouveau modèle d'IA: ChatGPT-5: «C’est comme parler à un expert de niveau doctorat»
In my view, “PhD-level” expertise is above all tied to the process of knowing, not to the knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. Using generative AI to review papers—and trying to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Une lycéenne accusée d'avoir triché avec une IA au baccalauréat de philosophie obtient finalement son diplôme
I don’t like the presence of AI in schools at all, but I also find the wrongful penalization of students troubling.
« en présentiel » and other phrases to learn when translating a CV
This weekend, someone recommended an episode of the series « La science et ses mauvaises consciences », part of the program Avec philosophie on France Culture. I decided to download the whole series, and while listening to the first episode, I heard one of the guests use the expression « en présentiel », after which she apologized for having uttered an anglicism.
That bothered me a bit, because I had spent the past few days working on a French version of my CV, an important part of my effort to make this website more or less bilingual. Since I’m a professor, teaching is obviously part of my CV. I do a lot of online teaching, so I had used the expression « en présentiel » to distinguish the other courses, which take place in physical classrooms. Had I made a mistake?
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.
🔗 linkblog: 'Arbres', 'noix', 'tout le monde sait'... Ce vocabulaire détourné par des internautes pour diffuser des idées d'extrême droite sur les réseaux sociaux
Decoding online communities has always been an important genre of research, but it is especially important in cases like this one.
🔗 linkblog: 'Doing their own thing': KY legislators hear about the state of AI use and guidance in schools
I can see the value in some state guidelines, but I suspect they would be more permissive than what I want for my classroom. I hope I’ll still have the chance to establish restrictions as I see fit.
🔗 linkblog: Kentucky Republican lawmaker questions gender and women’s studies course at UK • Kentucky Lantern
So far, we’ve been told that the General Assembly’s war on DEI doesn’t affect our classroom teaching. Is this legislative bluster, or do we have worries on the horizon?
📚 bookblog: Mormons, Musical Theater, and Belonging in America (❤️❤️❤️❤️🖤)
I mostly skimmed this book, and I would have some quibbles with it if I got more into the details, but I found it really good. Musical theater is far, faaaar outside of my research interests, but this book articulates a fascinating “theology of voice” within Mormonism that will be helpful as I look to write something on Ellul and Mormon Studies.
🔗 linkblog: Grok searches for Elon Musk’s opinion before answering tough questions
Look, I really will stop posting about Grok and epistemology, but the news stories keep coming.
🔗 linkblog: Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X
Okay, really don’t want to spend any more time writing about Grok, but let’s talk about this passage:
“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.
To return to my thoughts on AI and epistemology, I don’t think having a PhD is (or should be) a benchmark for content knowledge. Rather, I think it demonstrates (or should demonstrate) a commitment to the process of knowledge production, and LLMs cannot truly compete with humans there.
🔗 linkblog: Trump Seeks to Cut Basic Scientific Research by Roughly One-Third, Report Shows
Reading this through an Ellulian lens is interesting. In the 1950s, Ellul was expressing concern about the valuing of (applied) technique over (basic) science. In this article, though, it’s clear how often basic science is still described and defended in applied/technical terms: pushing the boundaries of knowledge seems to be valuable only if it “sow[s] practical spinoffs and breakthroughs” or helps the U.S. in its geopolitical competition.
🔗 linkblog: Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Yesterday, I wrote my thoughts on how Grok’s “Nazi meltdown” helps illustrate some of my concerns about AI and epistemology.
This coverage of Grok’s latest demo only reinforces that: Musk’s tinkering with the LLM to get the results he wants is at odds with his stated naïve epistemology that an LLM can be “maximally truth-seeking,” as though there were a self-evident truth that an LLM could deliver in a straightforward way (that is, without all that mucking about behind the scenes).