Below are posts associated with the “CSAM” tag.
🔗 linkblog: Grok assumes users seeking images of underage girls have “good intent”
Depressing read with interesting details about why Grok is bad at this.
🔗 linkblog: Inside the Telegram Channel Jailbreaking Grok Over and Over Again
Oof, this line:
“what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, is the cost of doing business with AI image generators”
🔗 linkblog: No, Grok can’t really “apologize” for posting non-consensual sexual images
Bookmarking because this is an important point.
🔗 linkblog: A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Surely this is a reasonable price to pay for the Nazi-praising Grok to “discover new physics” within the next year, as Elon promised last night.
This kind of thing is why I hate “the genie is out of the bottle” arguments. I can’t help but hear them as “yes, people are going to create more CSAM, but all we can do is teach people to use these tools more responsibly instead.”
🔗 linkblog: As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart | Techdirt
Repeat after me: Content moderation is a good thing.
🔗 linkblog: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. - The New York Times
This is why the EFF and others have concerns about the overreach of even clearly well-intentioned content moderation. CSAM is despicable, but automated content moderation can make mistakes, and the consequences of those mistakes aren’t small.