BA in French Teaching; PhD in Educational Technology; Associate Professor of ICT at the University of Kentucky School of Information Science. My CV is available here, you can browse my research here, and you can find my Google Scholar profile here.
Supported by digital methods, my research focuses on online social spaces, community practices within these spaces, and the influence of the platforms where they are found. My research is interdisciplinary, exploring spaces associated with teaching and learning, Mormonism, the far right, or even combinations of these themes.
You can subscribe to this content through this RSS feed or this Mastodon account. You can also subscribe to all of the content on this website through this RSS feed, this Bluesky account, or this newsletter.
I sometimes write in French! To only see the French content (which is also available below, alongside English content), please click on [fr] in the site header.
a France Culture series on Jacques Ellul
Thanks to Matoo, who saw how much I've been writing about Jacques Ellul on this site and therefore recommended the short five-episode series "Avoir raison… avec Jacques Ellul," which came out a few weeks ago on France Culture. I listened to the first episode this morning while doing some light preparation for my first day of teaching this school year, and I already find it very useful.
I've already read three books by Ellul, and I'm in the middle of two more (well, in theory; I admit it's going slowly). This weekend, I'll be receiving a few new Ellul books that my brother-in-law bought at Albertine, the magnificent New York bookstore supported by the French embassy in the United States. My brother-in-law goes to New York every summer and always picks up a few French-language books for me, which means I'll soon have a lot more Ellul to read. This is the first time in my life that I've engaged at this level with the work of a single academic writer, and I find that summaries like France Culture's help me a great deal in situating whatever I'm reading at a given moment within his thought as a whole.
🔗 linkblog: The NSF just cut K-12 STEM Education research going forward
Appreciate Josh’s eye for detail here.
defining platforms—and religion as platforms
I subscribe to the “Religion Watch” newsletter out of Baylor University but usually don’t do much more than skim it. The first entry in the June edition, though, immediately stood out to me for this excerpt:
Paul Seabright’s recent book, The Divine Economy: How Religions Compete for Wealth, Power, and People (Princeton University Press, $35), is unique for its comprehensive treatment of the religious past and present as well as its novel use of the concept of “platforms” in explaining the economy of religion.
🔗 linkblog: How Tea’s Founder Convinced Millions of Women to Spill Their Secrets, Then Exposed Them to the World
What a wild, depressing story. I feel like I ought to use this to teach the concept of platforms to my students—it neatly sums up the intervention in normal human activity by someone who thinks they have a buck to make.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I've been complaining for years that in computer science education, we stop at the nobility of those goals and don't ask ourselves about the deeper motivations behind those initiatives. So it is with AI: a more diverse field that is more available to all is better than what we have, but we also have to ask whether AI education is actually a social good.
insisting that pencils are technology is not (necessarily) a wiseass move
Thanks to the magic of Bluesky, I came across Paul Musgrave’s essay “Classroom Technology Was a Mistake,” with the subtitle “Hopes that AI will improve higher ed need to reckon with the dashed hopes of the past.” As a whole, I appreciate the essay—I’m sympathetic to Musgrave’s argument, and I couldn’t agree with the subtitle more if I tried. I want to do one of those things, though, where one academic spends too much time quibbling with a minor part of another academic’s argument. In particular, I want to take issue with this part of Musgrave’s essay:
🔗 linkblog: What's behind the Trump administration's immigration memes?
There’s always been a dark side to internet culture, but I don’t think it was naïve in my earlier work to argue for recognizing its value. Yet, it’s important as a scholar to call out the ugliness that’s happening here.
why I think labor, not copyright, is the foundational problem with AI scrapers
This morning on Bluesky, I saw some posts about a class action lawsuit against Anthropic for their use of pirated, copyrighted materials in training their generative AI models. One of the sources of these copyrighted materials was the LibGen database, which I took a peek at nearly six months ago to confirm what I was already sure to be true: that my scientific writing was also collected as training material by companies like Anthropic or Meta. I don't love that big tech companies are profiting off of my work in this way, and I'm sympathetic to the authors who are taking legal action against Anthropic. However, as I've written repeatedly over the past few years (you can find some of those thoughts—and others—by scrolling through here), I don't know that copyright is the right way of responding to this kind of abuse.
🔗 linkblog: Google Scholar Is Doomed
Oof, hadn’t thought of this, but as much as I’d like to further reduce Google dependence, this would really hurt.
🔗 linkblog: The Trump Administration Is Using Memes to Turn Mass Deportation Into One Big Joke
Bookmarking this so I can point to it if anyone asks why I’ve shifted my research from ed tech to right-wing Mormonism.
🔗 linkblog: A global phenomenon on social media: what are Italian Brainrots, these absurd AI-generated characters?
My career has been split between valuing digital practices perceived as unimportant and critiquing the technologies that enable those practices. I admit I feel genuinely torn about this example!
🔗 linkblog: New executive order puts all grants under political control
Here’s Jacques Ellul on state funding of research:
The state demands that anything scientific enter into the line of "normal" development, not only for the sake of the public interest but also because of its will to power. We have previously noted that this will to power has found in technique an extraordinary means of expression. The state quickly comes to demand that technique keep its promises and be an effective servant of state power. Everything not of direct interest to this drive for power appears valueless.
🔗 linkblog: New AI model: ChatGPT-5: "It's like talking to a PhD-level expert"
In my view, "PhD-level" expertise has mostly to do with the process of coming to know, not with the knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: A high school student accused of cheating with AI on the philosophy baccalauréat finally receives her diploma
I don't like the presence of AI in schools at all, but I also find the wrongful penalization of students troubling.
"en présentiel" and other phrases to learn when translating a CV
This weekend, someone recommended an episode of the series "La science et ses mauvaises consciences," part of the program Avec philosophie on France Culture. I decided to download the whole series, and while listening to the first episode, I heard one of the guests use the phrase "en présentiel," after which she apologized for having uttered an anglicism.
This bothered me a little, because I had spent the past few days working on a French version of my CV, an important part of my effort to make this website more or less bilingual. Since I'm a professor, teaching is obviously part of my CV. I do a lot of online teaching, so I had used the phrase "en présentiel" to distinguish the other courses, the ones that take place in physical classrooms. Had I made a mistake?