how I'm talking about generative AI in my content management class
Fall 2023 will mark my fifth time teaching my department’s class on Content Management Systems. I have really loved taking on this class and making it my own over the past several years. It’s also been fun to see how teaching the class has seeped into the rest of my life: It’s a “cannot unsee” situation (in a good way!) where the concepts I teach work themselves into everyday encounters with the news, my own websites, and other things around the internet.
I rely heavily on Deane Barker—especially his “Flying Squirrel” book—for a framework for content management. I especially appreciate Barker’s online glossary of content management terms, which I introduce early in the semester and rely on throughout the course. I think having a shared vocabulary is important in any class, and Barker’s glossary is helpful for that purpose. That said, I’m never quite sure if my students get why having a shared vocabulary is important. I spend some time on the first day of class explaining why it matters, but I’m often left with the nagging feeling that they don’t get it.
A switch flipped for me last week, though. I’ve been thinking about how the arrival of generative AI in the public consciousness will necessarily have an impact on how I teach this class next semester. In particular, the fact that I can expect all of my students to know something about ChatGPT and its ilk is going to lead to interesting discussions as we work through some of the entries in Barker’s content management glossary. Consider, for example, how Barker defines content (one of the very first words that we discuss):
Information which is (1) created by editorial process and (2) intended for consumption by a human audience.
These two characteristics differentiate content from other types of data which might be created through more [derivative] processes (swiping a credit card on a terminal, for instance), and intended for other types of audiences (machine processing, for instance) or usages.
Weeks before the semester starts, I already know that the day I teach about content, I’ll be asking about ChatGPT. For years, I’ve been inviting students to probe the fuzzy boundaries of this definition to figure out what is content and what is not, but generative AI is going to give us a relevant example the likes of which I’ve never had for that early-in-the-semester lesson. Not only that, but generative AI also gives us a reason to talk about why this definition of content matters.
It turns out that the editorial process is an important one, and that there’s value in distinguishing information that emerges from that process (i.e., content) from information that doesn’t (e.g., an unedited ChatGPT response to a prompt). As I’ve previously written about, I was surprised last week to read a poor-quality article on the io9 blog that purported to provide “A Chronological List of Star Wars Movies & TV Shows” but got some very, very basic facts about the Star Wars universe very, very wrong. I usually enjoy reading io9, so I chalked the whole thing up to a novice writer cutting corners or having a bad day.
I should have checked the byline: As I learned later from The Verge, this was the result of io9’s parent company experimenting with generative AI to create content for its outlets. James Whitbrook, the editor of io9, tweeted that he
was informed approximately 10 minutes beforehand, and no one at io9 played a part in its editing or publication.
In other words, no editorial process. Therefore, not content. And in this case, the not-content was (in the words of a statement Whitbrook sent to the parent company):
embarrassing, unpublishable, disrespectful of both the audience and the people who work here, and a blow to our authority and integrity.
I think I’m going to feel more confident talking about content and editorial process this upcoming semester because generative AI provides clear examples of why these concepts and distinctions are important. I don’t yet know how I’ll approach the use of generative AI in class (I doubt I’ll encourage it; I may require it for a targeted assignment; and I’ll probably allow it so long as it’s used transparently and fed into a human editorial process), but I’m glad to have some relevant examples to draw from as I teach this semester.
This is all I have time to write today, but this post deserves a sequel, and I’ll try to get to one soon. As a preview: One thing that bothers me about generative AI is the way that it messes with or conceals all kinds of processes, not just an editorial process. I think that has broader implications for teaching and learning that I’d like to write more about soon.