BA in French Teaching; PhD in Educational Technology; Associate Professor of ICT at the University of Kentucky School of Information Science. My CV is available here, you can browse my research here, and you can find my Google Scholar profile here.
Supported by digital methods, my research focuses on online social spaces, community practices within these spaces, and the influence of the platforms where they are found. My research is interdisciplinary, exploring spaces associated with teaching and learning, Mormonism, the far right, or even combinations of these themes.
You can subscribe to this content through this RSS feed or this Mastodon account. You can also subscribe to all of the content on this website through this RSS feed, this Bluesky account, or this newsletter.
I sometimes write in French! To only see the French content (which is also available below, alongside English content), please click on [fr] in the site header.
🔗 linkblog: An AI divide is growing in schools. This camp wants to level the playing field
Closing digital divides is good, and increasing diversity in tech fields is good, but I’ve been complaining for years that computer science education stops at the nobility of those goals without asking about the deeper motivations behind those initiatives. So it is with AI: a field that is more diverse and more available to all is better than what we have, but we also have to ask whether AI education is actually a social good.
insisting that pencils are technology is not (necessarily) a wiseass move
Thanks to the magic of Bluesky, I came across Paul Musgrave’s essay “Classroom Technology Was a Mistake,” with the subtitle “Hopes that AI will improve higher ed need to reckon with the dashed hopes of the past.” As a whole, I appreciate the essay—I’m sympathetic to Musgrave’s argument, and I couldn’t agree with the subtitle more if I tried. I want to do one of those things, though, where one academic spends too much time quibbling with a minor part of another academic’s argument. In particular, I want to take issue with this part of Musgrave’s essay:
🔗 linkblog: What's behind the Trump administration's immigration memes?
There’s always been a dark side to internet culture, but I don’t think it was naïve in my earlier work to argue for recognizing its value. Yet, it’s important as a scholar to call out the ugliness that’s happening here.
why I think labor, not copyright, is the foundational problem with AI scrapers
This morning on Bluesky, I saw some posts about a class action lawsuit against Anthropic for their use of pirated, copyrighted materials in training their generative AI models. One of the sources of these copyrighted materials was the LibGen database, which I took a peek at nearly six months ago to confirm what I was already sure to be true: that my scientific writing was also collected as training material by companies like Anthropic or Meta. I don’t love that big tech companies are profiting off of my work in this way, and I’m sympathetic to the authors who are taking legal action against Anthropic. However, as I’ve written repeatedly over the past few years (you can find some of those thoughts, and others, by scrolling through here), I don’t know that copyright is the right way of responding to this kind of abuse.
🔗 linkblog: Google Scholar Is Doomed
Oof, hadn’t thought of this, but as much as I’d like to further reduce Google dependence, this would really hurt.
🔗 linkblog: The Trump Administration Is Using Memes to Turn Mass Deportation Into One Big Joke
Bookmarking this so I can point to it if anyone asks why I’ve shifted my research from ed tech to right-wing Mormonism.
🔗 linkblog: Phénomène mondial sur les réseaux sociaux, que sont les Italian Brainrots, ces personnages absurdes générés par IA ?
My career is split between championing digital practices that are seen as unimportant and critiquing the technologies that enable those practices. I admit that this example leaves me genuinely torn!
🔗 linkblog: New executive order puts all grants under political control
Here’s Jacques Ellul on state funding of research:
The state demands that anything scientific enter into the line of “normal” development, not only for the sake of the public interest but also because of its will to power. We have previously noted that this will to power has found in technique an extraordinary means of expression. The state quickly comes to demand that technique keep its promises and be an effective servant of state power. Everything not of direct interest to this drive for power appears valueless.
🔗 linkblog: Nouveau modèle d'IA: ChatGPT-5: «C’est comme parler à un expert de niveau doctorat»
In my view, “doctorate-level” expertise is above all tied to the process of coming to know, not to the knowledge itself, and generative AI does not respect that process at all.
🔗 linkblog: Google would like you to study with Gemini instead of cheat with it
This seems performative to me, and this paragraph gets at why I think so:
AI companies are increasingly pushing into education — perhaps in part to try and fight the reputation that AI tools have acquired that they help students cheat. Features like Gemini’s guided learning mode and ChatGPT’s similar study mode, which was announced last week, could theoretically help with actual learning, but the question is whether students will want to use these modes instead of just using the AI chatbots for easy answers.
🔗 linkblog: Is Including Hidden AI Prompts In Academic Papers Gaming The Peer Review System — Or Keeping It Honest?
Kind of hate that we have to ask the question in this headline!
I’ve been (link)blogging recently about needing to talk about epistemology when we talk about generative AI. I know that in at least one case, I’ve invoked the generation of scientific knowledge as a counterexample to the “just the facts, ma’am” naïve epistemology promoted by AI and its supporters. To use generative AI to review papers—and to try to get around peer review—feels particularly dangerous to me.
🔗 linkblog: The White House orders tech companies to make AI bigoted again
Quick question about this passage:
Trump … signed an executive order titled “Preventing Woke AI in the Federal Government,” directing government agencies “not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
So, how does one determine what is true or accurate? Once again, we need to talk about epistemology when we talk about generative AI.
🔗 linkblog: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
Dammit, am I going to have to stop using Canvas?
🔗 linkblog: Trump unveils his plan to put AI in everything
This emphasis on “objective truth” further underscores the need to talk epistemology when we talk AI.
🔗 linkblog: Une lycéenne accusée d'avoir triché avec une IA au baccalauréat de philosophie obtient finalement son diplôme
I don’t like the presence of AI in schools at all, but I also find the wrongful penalization of students troubling.
« en présentiel » and other phrases to learn when translating a CV
This weekend, someone recommended an episode of the series « La science et ses mauvaises consciences », part of the program Avec philosophie on France Culture. I decided to download the whole series, and while listening to the first episode, I heard one of the guests use the phrase « en présentiel », after which she apologized for having uttered an anglicism.
That bothered me a little, because I had spent the past few days working on a French version of my CV, an important part of my effort to maintain a more or less bilingual website. Since I’m a professor, teaching obviously appears on my CV. I do a lot of online teaching, so I had used the phrase « en présentiel » to distinguish the other courses, which take place in physical classrooms. Had I made a mistake?
🔗 linkblog: The Em Dash Responds to the AI Allegations
As a committed em dash user, this has been bugging me since I heard about it.
🔗 linkblog: 'Arbres', 'noix', 'tout le monde sait'... Ce vocabulaire détourné par des internautes pour diffuser des idées d'extrême droite sur les réseaux sociaux
Decoding online communities has always been an important genre of research, but it is especially important in cases like this one.