🔗 linkblog: my thoughts on 'Why AI detectors think the US Constitution was written by AI | Ars Technica'

- kudos:

I don’t like generative AI, and I get grumpy about advice to accept it and work it into classes (even though I probably agree with that approach at the end of the day). For all that dislike and grumpiness, though, I feel even more strongly that AI detectors are not the way to go. This is a really interesting article. link to ‘Why AI detectors think the US Constitution was written by AI | Ars Technica’

🔗 linkblog: my thoughts on 'OpenAI Wants To Help You Figure Out If Text Was Written By OpenAI; But What Happens When It’s Wrong? | Techdirt'

- kudos:

Just because some worries about ChatGPT are, indeed, moral panics doesn’t mean that there aren’t legitimate criticisms of the technology—including from an educational perspective. I happen to agree with Masnick that schools ultimately need to roll with the punches here, but given how much we already expect of our schools and teachers, it’s reasonable to resent being punched in the first place. Masnick’s point about the error rate for detecting AI-generated text is an important one, though: I don’t think plagiarism-detecting surveillance is at all the right response.

🔗 linkblog: my thoughts on 'A CompSci Student Built an App That Can Detect ChatGPT-Generated Text'

- kudos:

See, as worried as I am about ChatGPT use in education, this actually worries me more, because it’s essentially plagiarism detection, which I’m opposed to. link to ‘A CompSci Student Built an App That Can Detect ChatGPT-Generated Text’