do you want to be good or to be optimized?
- 3 minutes read - 609 words

This Saturday Morning Breakfast Cereal comic from yesterday spoke to me at a deep level:
My first thoughts went to generative AI, an area in which I feel like a fetishization of optimization is crowding out really important questions of what is good. As I put it in a university survey earlier today, there are undeniable benefits to the use of AI tools, but there are important questions as to who benefits. If my department started to use generative AI as a note-taking tool in faculty meetings (the specific focus of this survey), we would probably benefit from it!
However, in the aggregate, large corporations are going to benefit more from our use than we will. To use an Amazon example, lots of people benefit superficially from two-day shipping, but only Jeff Bezos benefits enough to take himself to space. Furthermore, the millions of people who unwittingly contributed labor to these AI tools (or got paid lousy rates to wittingly contribute the less-sexy kinds of labor to these tools) don’t really stand to benefit at all. The more that I refine my grumpiness about generative AI, the more it comes down to this kind of complaint: a hyper-focus on the (genuine!) value of AI in terms of efficiency and time-saving distracts us from asking questions about what a “good” society looks like and whether generative AI helps or hinders that vision.
Even if that’s the first thing that comes to mind, though, I’m struck by how the comic speaks to something deeper that’s been nagging at me about research for most of my career. One thing that I appreciate about having been trained in education research is that it sits at the intersection of so many different traditions of research. Thanks to my formal and informal research training, I’ve gotten messages about how education research should try to emulate the hard sciences and other messages about how it should try to emulate the humanities (and yet other messages covering every other point on that spectrum). While I have respect for all (or at least most) of these positions, I’ve also come away from these discussions convinced that there’s a hidden darkness in social science’s project of explaining the world so that we can manipulate it.
Don’t get me wrong—especially in the world of education, there are undoubtedly pure motives behind a lot of these manipulations. How can we argue against helping students learn better? Against reducing inequities in student outcomes across particular populations? Against trying to find the tools and techniques that are going to have the most impact in the world? Yet, just like with generative AI, I sometimes find that a focus on optimization crowds out deeper questions about what is good. I am sympathetic to efforts to “improve learning,” but I will always be grumpy about them because they too often fail to ask what students should be learning and why. I am glad that there are more efforts to involve girls and minoritized populations in computer science education, but I’m frustrated by the way that this cause does not question an underlying assumption that the purpose of education is to advance the U.S. economy—and that if helping the tech sector get bigger and more powerful advances the U.S. economy, then schools should help do it.
Optimization can be a good thing! In my personal routines, I like to incorporate some automation wherever I can to save myself some time. However, there are too many cases in which universities, faculty, and others assume that optimization is itself a worthy cause without pausing to ask questions about the good. We need more of those pauses.
- macro
- Work
- research
- generative AI
- ethics
- utilitarianism
- digital labor
- Amazon
- Jeff Bezos
- STEM education
- STEM
- humanities
- research paradigms