Below are posts associated with the “content moderation” tag.
🔗 linkblog: Inside 4chan’s Top-Secret Moderation Machine | WIRED
A good glimpse at content moderation, and why it’s important to do it correctly.
🔗 linkblog: Moderator Mayhem: A Mobile Game To See How Well YOU Can Handle Content Moderation | Techdirt
This is a neat game that shows how difficult content moderation is. Excited to have my content management students play it in the Fall.
🔗 linkblog: Spotify ejects thousands of AI-made songs in purge of fake streams | Ars Technica
Content moderation is hard, and it’s especially hard at scale. Because AI makes doing things at scale easier, it necessarily makes content moderation harder.
🔗 linkblog: Twitter Suspends Reporter For Reporting On Twitter Hack, Using Same Policy Old Twitter Used To Block NY Post Hunter Biden Story | Techdirt
I’m tired of reading Twitter news, but I’m professionally obligated to do so, no matter how dumb it gets.
🔗 linkblog: Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke
This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem.
🔗 linkblog: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit
I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies that control these systems are also in a position of power, and it’s important that we treat them critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: The minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated.
🔗 linkblog: As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge
Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output.
🔗 linkblog: Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word
Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest.
🔗 linkblog: OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time
Looks like the job of AI training is as awful as the job of content moderation.
🔗 linkblog: As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart | Techdirt
Repeat after me: Content moderation is a good thing.
🔗 linkblog: Elon Tries (Badly) To Defend The Banning Of Journalists As Twitter Starts Blocking Links & Mentions Of Mastodon | Techdirt
I’ve posted a bunch of articles about this already, but Masnick’s take is super helpful.
🔗 linkblog: Elon Musk Is Taking Aim at Journalists. I’m One of Them.
Free speech is genuinely important, but it’s hard to take the ideal seriously when its advocates twist it to mean something specific and self-serving.
🔗 linkblog: Elon’s Commitment To Free Speech Rapidly Replaced By His Commitment To Blatant Hypocrisy: Bans The JoinMastodon Account | Techdirt
Musk is getting pettier and more self-centered.
🔗 linkblog: Twitter ditches Trust and Safety Council as Musk tweets fuel harassment | Ars Technica
I think this headline captures one of the worst parts of all of this: Musk isn’t just dismissing concerns about behavior, he’s fueling that behavior.
🔗 linkblog: Before Musk Riled Everyone Up With Misleading Twitter Files About ‘Shadowbanning,’ Musk Used The Tool To Hide Account Tracking His Plane | Techdirt
If I could pick one story to demonstrate that Musk’s Twitter tenure has been blundering and inconsistent, it would be this one.
🔗 linkblog: Elon Admits His Content Moderation Council Was Always A Sham To Keep Advertisers On The Site | Techdirt
I’m glad I began reading Techdirt before this whole mess started… Masnick’s perspective has been a helpful guide.
🔗 linkblog: Elon Musk proposes letting nearly everyone Twitter banned back on the site - The Verge
Is he serious? Does he really think this is a good idea? Also, I love the increasing sass that The Verge and other outlets are putting into their comments about Twitter no longer having a communications team to respond to requests for comment.
🔗 linkblog: Elon Musk tries to blame ‘activists’ for his Twitter moderation council lie - The Verge
This seems petty, immature, and misguided.
🔗 linkblog: Elon Musk begins reinstating banned Twitter accounts, starting with Jordan Peterson and the Babylon Bee - The Verge
Oh good, so on top of the unexpected chaos, the expected chaos is also still happening.
🔗 linkblog: Elon Musk’s first Twitter moderation change calls for permanent bans on impersonators - The Verge
Two points, so obvious they’re almost not worth making: First, this is why making verification a paid feature is dumb; and second, penalizing parody because your business model is dumb is not what free speech absolutism looks like.
🔗 linkblog: Antisemitic campaign tries to capitalize on Elon Musk’s Twitter takeover. - The New York Times
Content moderation is a good thing, and not all viewpoints deserve a seat at the table.
🔗 linkblog: Elon Musk’s First Move Is To Fire The Person Most Responsible For Twitter’s Strong Free Speech Stance | Techdirt
Interesting read here from Masnick. I’m not familiar with everything he writes about here, but I always appreciate his perspective.
🔗 linkblog: https://www.techdirt.com/2022/09/26/subreddit-discriminates-against-anyone-who-doesnt-call-texas-gov
This is juvenile enough that I feel guilty finding it funny, but it’s a good demonstration of the problems with this backlash against content moderation.
🔗 linkblog: The Most Famous Blunder Of Content Moderation: Do NOT Quote The Princess Bride | Techdirt
Great movie, great example of the difficulty of content moderation.
🔗 linkblog: Texas has teed up a Supreme Court fight for the future of the internet - The Verge
We need to do more work to divorce free speech from content moderation. The world without content moderation would be a much worse world, and we don’t want to live in it. Sure, social media platforms are too powerful, but this is not the answer.
🔗 linkblog: Twitter Removes Florida Political Candidate Advocating Shooting Federal Agents; If DeSantis Won His Lawsuit, Twitter Would Need To Leave It Up | Techdirt
I appreciate the way that Masnick uses examples from the news to call out how dumb some of these laws are.
🔗 linkblog: A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. - The New York Times
This is why the EFF and others have concerns about overreach of even clearly well-intentioned content moderation. CSAM is clearly despicable, but automated content moderation can make mistakes, and the consequences of those mistakes aren’t small.
🔗 linkblog: Google Maps Is Misleading Users Searching For Abortion Clinics… And The GOP Is Threatening The Company If It Fixes That | Techdirt
Masnick makes two good points here: The GOP seems to only care about content moderation in self-serving ways, but also we should be wary of political mandates for content moderation.
🔗 linkblog: What Happened After the Digital Crackdown on Extremists — ProPublica
Interesting perspective on what’s happening on “alternative” platforms.
🔗 linkblog: Study Says Trump’s Truth Social Is Much More Aggressive, And Much More Arbitrary, In Moderating Content | Techdirt
Unsurprising, but still a valuable read.
🔗 linkblog: Self-Proclaimed Free Speech Platforms Are Censoring Nude Content. Here’s Why You Should Care | Electronic Frontier Foundation
Here’s the EFF pointing out that “free speech” on these platforms means something very particular rather than a broad, deep commitment to legally-protected expression.
🔗 linkblog: TikTok resists calls to preserve Ukraine content for war crime investigations | Ars Technica
So, here’s a case where TikTok’s Chinese ownership is actually a really big deal—though, of course, YouTube and other U.S. companies have also been quicker to moderate than to archive material that could be valuable in a similar way.
🔗 linkblog: Facebook Bans People For Simply Saying Abortion Pills Exist | Techdirt
A terrifying reminder that content moderation can easily overreach.
🔗 linkblog: Impossibility Theorem Strikes Again: YouTube Deletes January 6th Committee Video | Techdirt
Good example here of how content moderation can absolutely overreach. Arguments that platforms shouldn’t moderate are nonsense, but I appreciate Masnick’s emphasis on the need to be very careful about how we moderate.