AI-generated research papers are overwhelming peer review
date linked: 15 May 2026
source: link to article, from theverge.com
Here’s a gift link. Jacques Ellul argued that you can’t separate the good aspects of technique from the bad. In that context, this paragraph stands out:
Optimists about generative AI have high hopes for its ability to produce future scientific breakthroughs — accelerating discovery, eliminating most types of cancer — but the technology is currently undermining one of the pillars of scientific research, inundating editors and reviewers with an endless stream of papers. Paradoxically, the better the technology gets at producing competent papers, the worse the crisis becomes.
Even if we don’t take the super broad view that Ellul argues for, though, I think the sheer scale of generative AI is an issue in itself, even when it can be used for both good and bad purposes. Even if we were generous and assumed that the majority of scientific uses of generative AI were positive, two things would still be true: First, the minority of negative uses, carried out at that scale, would still have a hell of an effect. Second, we would still struggle to keep up with even the positive uses at that scale.
similar posts:
🔗 linkblog: Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect | WIRED
I love a genuinely good experience as a peer reviewer. The current manuscript I’m reviewing is one that I’m very well suited to review (which does not happen enough), and the authors have clearly improved the paper since the last round. It’s nice when things work like they’re supposed to.
TFW a special issue deadline gets extended by a month after you dropped everything for a week to meet the original deadline. Alas.
404 Media podcast on generative AI and epistemology
why I think labor, not copyright, is the foundational problem with AI scrapers