Jacques Ellul and success as the only techbro metric
When I was in grad school, a faculty member in my program told me a story about his then-quite-young son, who was having a grand old time climbing on top of the kitchen table and then leaping off of it to the floor below. (Truth be told, my memories of this conversation are fuzzy, and the son might have been engaged in some otherwise dangerous behavior.) The father tried to tell the son to stop doing this, warning: “You could have hurt yourself!” The son’s response? “But I didn’t!” Sure, the action had been potentially dangerous, but the landing had been a success, and the son didn’t see what the big deal was.
A few months ago (in a pre-DOGE era, if that makes any difference), I had a conversation in mixed political company where I made a dismissive comment about Elon Musk’s disregard for rules and regulations. A family member piped up to defend Musk, saying that sure, he had broken some rules, but SpaceX had been a success, and so he didn’t see what the big deal was.
This post is a little redundant, since I’ve already linked to the news story that inspired it, and I’ve already quoted this passage from Jacques Ellul in a previous post. The combination is too good not to revisit, though.
First, let’s hear from Nick Clegg, who went from high-ranking British politician to high-ranking Meta executive, and recently spoke out against overregulation of AI:
“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”
“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”
Now, for all of my skepticism toward AI, I need to confess that I’m not convinced that copyright is the right critical lens to use, and if folks like Mike Masnick argue that AI’s use of creative works is fair use, I feel inclined to keep my mind open to that. Of course, my preferred critical lens is one of digital labor, in which recognizing and validating others’ work is still important (which, by extension, would make consent important as well). Furthermore, Clegg doesn’t make an argument for fair use—he makes an argument that it is too inconvenient to get consent. You can’t have successful generative AI without massive scraping, and we clearly need generative AI to be successful, so any moral qualms about the use of authors’ creative works need to be set aside in the name of that success.
And here’s Ellul, writing in 1948 (though possibly revising in 1988):
In reality, what justifies the means today is whatever succeeds. Whatever is effective, whatever possesses in itself an “efficiency,” is justified. By applying means, a result is produced. This result is judged by these simplistic criteria of “more”: larger, faster, more precise, and so on. Simply by applying this criterion, the means is declared good. What succeeds is good, what fails is bad.
Whether you’re a kid jumping off a table, the world’s richest man ignoring the rules when he doesn’t want to, or a company that can’t be bothered to get buy-in from creators and artists, all that seems to matter is, as Ellul writes, success. It doesn’t matter if it’s dangerous, illegal, or unethical, so long as you get the results that you want.
- Jacques Ellul
- generative AI
- Elon Musk
- DOGE
- Presence in the Modern World
- Meta
- Nick Clegg
- copyright
- fair use
- Mike Masnick
- digital labor
Similar Posts:
🔗 linkblog: OpenAI's viral Studio Ghibli moment highlights AI copyright concerns | TechCrunch
thoughts on academic labor, digital labor, intellectual property, and generative AI
policy and the prophetic voice: generative AI and deepfake nudes
more on the Liahona, efficiency, and technique
🔗 linkblog: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
Comments:
You can click on the < button in the top-right of your browser window to read and write comments on this post with Hypothesis. You can read more about how I use this software here.