I teach in a tech-focused program, and I think it’s reasonable to ask how we’re going to address generative AI in our curriculum, but I still resent the expectation that we must jump on this bandwagon simply because it’s there.
Similar Posts:
I do not believe in using AI detection software, but I reserve the right to be annoyed by the students whom I suspect of taking advantage of that belief.
This week has enough writing (and deadlines!) that the utilitarian appeal of ChatGPT is finally clear to me; and yet, it’s also so much clearer that I would rather do fewer things well and on my own.
Another set of proofs, another set of complaints about a copyeditor making changes to my writing in ways that distort my meaning. If I get grumpy about a human doing my writing for me, why would I ever want generative AI to do it?
College conversation about investment in GPT-type tech to support research is continuing. I think it’s… fitting that the survey being circulated is clearly using Qualtrics’s auto-suggested Likert responses—and that the responses aren’t quite right for the questions being asked.
My college is floating the idea of investing in GPT-type technology to help researchers code text data. This reminds me of my longtime belief that the distinction between “qual” and “quant” is often less important than the distinction between different research paradigms.