Jacques Ellul's technique and generative AI
Throughout my career, I’ve been a data-first researcher, and theory has always been one of my weak areas. This is not to say that I dismiss the importance of theory: I appreciate danah boyd and Kate Crawford’s critique of Chris Anderson’s “the numbers speak for themselves” in their 2012 paper Critical Questions for Big Data as much as I appreciate Catherine D’Ignazio and Lauren Klein’s similar critique in their book Data Feminism. It’s just that while I agree that theory is important, I’ve never been well-versed in it—except for the loose theoretical framework of sociocultural learning, multiple literacies, and social communities and spaces that I bring to much of my work (even work that has gone beyond educational technology research).
As I think about where I’d like my research to go in the future, I’ve considered the need to get more grounding in theory—and in some critical, foundational theory that can provide new perspectives on how I think about the relationship between technology and society (or at least social groups within that society). Over the last year or so, I’ve become interested in Jacques Ellul’s work as a possible foundation in this area. To be honest, I came to Ellul by way of other personal interests, and I figured that as long as I’m reading him on those subjects, I might as well also read up on his work that is more professionally relevant. Just this week, I picked up everything I could find of his at the University of Kentucky library, in the hopes that I can read enough to maybe work his thinking into an upcoming conference proposal.
I haven’t gotten very far yet, but from the skimming I’ve done of Ellul’s work on technique, I can already tell that his understanding of the concept focuses heavily on efficiency and on “means over ends” thinking. Here’s an excerpt from p. 19 of his The Technological Society that illustrates his point:
In fact, technique is nothing more than means and the ensemble of means…. Our civilization is first and foremost a civilization of means; in the reality of modern life, the means, it would seem, are more important than the ends. Any other assessment of the situation is mere idealism.
That last sentence doesn’t sit well with me; I think one of the reasons I’ve never gotten into foundational, critical theories is that I’m wary of confident universalism. What little I’ve read of Ellul so far smacks of that “my theory explains everything, and it’s impossible to imagine things any other way” attitude, and I doubt that I’m going to fully embrace his worldview as I read his work.
Yet, if every model is wrong, some models are useful (and I think this is as true of theory as it is of statistics). The reason I want to read Ellul is that I can see the value in this kind of argument—and in using his theory as a lens through which to understand a particular situation. For example, the fact that The Technological Society (or rather its French predecessor, La technique ou l’enjeu du siècle) was first published in 1954 does not stop Ellul’s observations about an obsession with efficiency and means over ends from helping make sense of contemporary gushing about generative AI.
I recently began subscribing to The Verge as part of an ongoing rethinking of which news outlets I’m supporting in the current moment, and in the spirit of leaning in to a specific set of news sources, I’ve also begun listening to the podcasts the website produces. This morning, I listened to the most recent episode of the Vergecast, and I was struck by hosts David Pierce and Nilay Patel’s commentary on the current state of generative AI companies. They had a lot to say (starting around the 15:00 mark) about generative AI that reminded me of Ellul’s technique. In one particular exchange, David gave voice to figures like Sam Altman and how they might capitalize on breakthroughs like the one DeepSeek recently appears to have achieved, and he suggested that American generative AI companies are, indeed, more interested in efficient means than they are in ends:
David: “‘I can make the thing that I’m working on with all this money 50% better.’”
Nilay: “But what is he making?”
David: “This is what I’m saying!”
A little later in the exchange, David continued this assessment:
David: “The question continues to be ‘What are we laddering all of this up to?’”
In short, David describes companies like OpenAI as interested in making their means better and better but never quite making an argument for what the ends of the project are. I had to stop washing dishes so that I could write this all down—I was struck by how much it resembled Ellul’s thinking on technique.
Who knows if these observations will continue or if I’ll end up adopting Ellul’s theory for future projects? At the very least, this confluence of podcast and theory gave me enough reason to keep exploring Ellul’s writing.
- macro
- Work
- Jacques Ellul
- technique
- The Technological Society
- La technique ou l'enjeu du siècle
- theory
- data feminism
- Catherine D'Ignazio
- Lauren Klein
- danah boyd
- Kate Crawford
- Chris Anderson
- generative AI
- The Verge
- Vergecast
- Nilay Patel
- David Pierce