Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect | WIRED
date linked: 17 August 2023
source: link to article, from wired.com
Good article on a worrying trend. It’s things like this that make me skeptical of arguments that generative AI could have real benefit when used properly. It’s not that I disagree—it’s that in the aggregate, I’m not sure the proper uses will outweigh the problems.
similar posts:
It annoys me when a journal asks a reviewer to address specific prompts; it annoys me more when I only realize this after writing my review.
I got my job largely because I can work with Twitter data, and my tenure application is built on the premise that I do good Twitter research. I probably shouldn’t take as much pleasure as I do from watching the platform fall apart right now, but I was ready to move on anyway.
In a training last week, we discussed the trend of journals’ checking manuscripts with plagiarism software. People shared examples where editors wouldn’t accept perfectly good reasons for authors to reuse material unless the manuscript also met a certain software score.
Reviewer 1 has missed the key argument and main throughline of my paper, and even though the editor says I can ignore them, it’s still making me SO MAD.
Responding to a reviewer who has a specific picture in their head of what “good” edtech research “should” look like. As a result, they’re confused by things in my paper that I’m sure aren’t problems but simply don’t fit that picture.