Below you will find pages that utilize the taxonomy term “content moderation”
🔗 linkblog: my thoughts on 'Opinion | I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online. - The New York Times'
Roth’s perspective is valuable here. Scary stuff. [link to ‘Opinion | I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online. - The New York Times’](https://www.nytimes.com/2023/09/18/opinion/trump-elon-musk-twitter.html?unlocked_article_code=4tdIbFuKLW42ISeaU4acN26WTieKQcsLEoCyhJt1DC8dcAq9yCnJjyrbKLCEWm2hVWmWh-x94MKiw-I_OrqJ8JIYpDsdvQ4BFioWZ_RXCQ4ftJfFamVymL4ZnoK5RUQIhDdY-ZuJck3JBMeNXn5VYxEZ-tp8__DgJ_29osLV2tNCx4SZkrQrNtAyYPdzMK4asGiGrshlttyZF4arTjYH7ObwQo2-GSiVT3z3QovPSQ8Q4L9ggP7frVv1zKmIi4yukMwCGcqmRYnUy8pmnGPw0wWV3c9FMTUKuc6VM7kGy9gMnz_OUsQCiX8LR3v5Ls40VVkp1tb_c7PD4BiQ6lFP2Aw
🔗 linkblog: my thoughts on 'Elon Musk, Once Again, Tries To Throttle Links To Sites He Dislikes | Techdirt'
I’ve instinctively never liked t.co links, and this demonstrates what the problem with them is. link to ‘Elon Musk, Once Again, Tries To Throttle Links To Sites He Dislikes | Techdirt’
🔗 linkblog: my thoughts on 'OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge'
Look, if an automated process could save human moderators from the awful work they have to do, I’d be all for it. I’m unconvinced that GPT-4 could do it, though. link to ‘OpenAI wants GPT-4 to solve the content moderation dilemma - The Verge’
🔗 linkblog: my thoughts on 'AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian'
So many important points in this piece. link to ‘AI hysteria is a distraction: algorithms already sow disinformation in Africa | Odanga Madung | The Guardian’
🔗 linkblog: my thoughts on 'Cleaning Up ChatGPT’s Language Takes Heavy Toll on Human Workers - WSJ'
Everyone excited about generative AI needs to account for this kind of thing. We don’t pay enough attention to digital labor and the dehumanizing aspects of content moderation. link to ‘Cleaning Up ChatGPT’s Language Takes Heavy Toll on Human Workers - WSJ’
🔗 linkblog: my thoughts on 'A Leaked Memo Shows TikTok Knows It Has a Labor Problem | WIRED'
I think this is a much bigger deal than any purported security risk. link to ‘A Leaked Memo Shows TikTok Knows It Has a Labor Problem | WIRED’
🔗 linkblog: my thoughts on 'ChatGPT users drop for the first time as people turn to uncensored chatbots | Ars Technica'
I get that it’s straightforward language that everyone will get, but I think “uncensored” is the wrong word here. Content moderation is not (necessarily) censorship, and content moderation is good and helpful for tools like generative AI. link to ‘ChatGPT users drop for the first time as people turn to uncensored chatbots | Ars Technica’
🔗 linkblog: my thoughts on 'Internal Twitter Video Reveals Twitter Bent Over Backwards To Protect Trump And Pro-Trump Insurrectionists | Techdirt'
Helpful summary by Masnick; bookmarking for later. link to ‘Internal Twitter Video Reveals Twitter Bent Over Backwards To Protect Trump And Pro-Trump Insurrectionists | Techdirt’
🔗 linkblog: my thoughts on 'Inside 4chan’s Top-Secret Moderation Machine | WIRED'
A good glimpse at content moderation, and why it’s important to do it correctly. link to ‘Inside 4chan’s Top-Secret Moderation Machine | WIRED’
🔗 linkblog: my thoughts on 'Moderator Mayhem: A Mobile Game To See How Well YOU Can Handle Content Moderation | Techdirt'
This is a neat game that shows how difficult content moderation is. Excited to have my content management students play it in the Fall. link to ‘Moderator Mayhem: A Mobile Game To See How Well YOU Can Handle Content Moderation | Techdirt’
🔗 linkblog: my thoughts on 'Spotify ejects thousands of AI-made songs in purge of fake streams | Ars Technica'
Content moderation is hard, and it’s especially hard at scale. Because AI makes doing things at scale easier, it necessarily makes content moderation harder. link to ‘Spotify ejects thousands of AI-made songs in purge of fake streams | Ars Technica’
🔗 linkblog: my thoughts on 'Twitter Suspends Reporter For Reporting On Twitter Hack, Using Same Policy Old Twitter Used To Block NY Post Hunter Biden Story | Techdirt'
I’m tired of reading Twitter news, but I’m professionally obligated to do so, no matter how dumb it gets. link to ‘Twitter Suspends Reporter For Reporting On Twitter Hack, Using Same Policy Old Twitter Used To Block NY Post Hunter Biden Story | Techdirt’
🔗 linkblog: my thoughts on 'Elon Musk Is Reportedly Building 'Based AI' Because ChatGPT Is Too Woke'
This is dumb and worrying. The CEO of Gab has been promising to develop “based AI,” but he’s a bit player. Musk has the resources and influence to make this a bigger problem. link to ‘Elon Musk Is Reportedly Building ‘Based AI’ Because ChatGPT Is Too Woke’
🔗 linkblog: my thoughts on 'OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit'
I don’t know enough about OpenAI to evaluate these concerns, but I think these questions are important. The power of AI means that the companies that control it are also in a position of power, and it’s important that we treat them critically. That said, while I do think making LLM code open source is probably better in the aggregate, it isn’t without concerning drawbacks: The minute it was released under an open license, I’m sure Gab’s Andrew Torba would be considering how to make a homebrew version that can’t be content moderated.
🔗 linkblog: my thoughts on 'As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge'
Content moderation is hard, and moderating AI content definitely seems harder to me. However, so long as OpenAI has control over ChatGPT (and benefits from others’ use of it), I do think it has a responsibility to shape what it can produce. That said, there remains a deeper, legitimate question about how much influence a single company should have over LLM output. link to ‘As conservatives criticize ‘woke AI,’ here are ChatGPT’s rules for answering culture war queries - The Verge’
🔗 linkblog: my thoughts on 'Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word'
Of all the dumb responses to perfectly legitimate content moderation, this is perhaps the dumbest. link to ‘Conservatives Are Obsessed With Getting ChatGPT to Say the N-Word’
🔗 linkblog: my thoughts on 'OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time'
Looks like the job of AI training is as awful as the job of content moderation. link to ‘OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time’
🔗 linkblog: my thoughts on 'As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart | Techdirt'
Repeat after me: Content moderation is a good thing. link to ‘As Elon Fires More Trust & Safety Staff, Twitter’s Moderation Efforts Fall Apart | Techdirt’
🔗 linkblog: my thoughts on 'Elon Tries (Badly) To Defend The Banning Of Journalists As Twitter Starts Blocking Links & Mentions Of Mastodon | Techdirt'
I’ve posted a bunch of articles about this already, but Masnick’s take is super helpful.
link to ‘Elon Tries (Badly) To Defend The Banning Of Journalists As Twitter Starts Blocking Links & Mentions Of Mastodon | Techdirt’
🔗 linkblog: my thoughts on 'Elon Musk Is Taking Aim at Journalists. I’m One of Them.'
Free speech is genuinely important, but it’s hard to take the ideal seriously when its advocates twist it to mean something specific and self-serving.
link to ‘Elon Musk Is Taking Aim at Journalists. I’m One of Them.’
🔗 linkblog: my thoughts on 'Elon’s Commitment To Free Speech Rapidly Replaced By His Commitment To Blatant Hypocrisy: Bans The JoinMastodon Account | Techdirt'
Musk is getting pettier and more self-centered.
link to ‘Elon’s Commitment To Free Speech Rapidly Replaced By His Commitment To Blatant Hypocrisy: Bans The JoinMastodon Account | Techdirt’
🔗 linkblog: my thoughts on 'Twitter ditches Trust and Safety Council as Musk tweets fuel harassment | Ars Technica'
I think this headline captures one of the worst parts of all of this: Musk isn’t just dismissing concerns about behavior, he’s fueling that behavior.
link to ‘Twitter ditches Trust and Safety Council as Musk tweets fuel harassment | Ars Technica’
🔗 linkblog: my thoughts on 'Before Musk Riled Everyone Up With Misleading Twitter Files About ‘Shadowbanning,’ Musk Used The Tool To Hide Account Tracking His Plane | Techdirt'
If I could pick one story to demonstrate that Musk’s Twitter tenure has been blundering and inconsistent…
link to ‘Before Musk Riled Everyone Up With Misleading Twitter Files About ‘Shadowbanning,’ Musk Used The Tool To Hide Account Tracking His Plane | Techdirt’
🔗 linkblog: my thoughts on 'Elon Admits His Content Moderation Council Was Always A Sham To Keep Advertisers On The Site | Techdirt'
I’m glad I began reading Techdirt before this whole mess started… Masnick’s perspective has been a helpful guide.
link to ‘Elon Admits His Content Moderation Council Was Always A Sham To Keep Advertisers On The Site | Techdirt’
🔗 linkblog: my thoughts on 'Elon Musk proposes letting nearly everyone Twitter banned back on the site - The Verge'
Is he serious? Does he really think this is a good idea? Also, I love the increasing sass that The Verge and other outlets are putting into their comments about Twitter no longer having a communications team to respond to requests for comment.
link to ‘Elon Musk proposes letting nearly everyone Twitter banned back on the site - The Verge’
🔗 linkblog: my thoughts on 'Elon Musk tries to blame ‘activists’ for his Twitter moderation council lie - The Verge'
This seems petty, immature, and misguided.
link to ‘Elon Musk tries to blame ‘activists’ for his Twitter moderation council lie - The Verge’
🔗 linkblog: my thoughts on 'Elon Musk begins reinstating banned Twitter accounts, starting with Jordan Peterson and the Babylon Bee - The Verge'
Oh good, so on top of the unexpected chaos, the expected chaos is also still happening.
link to ‘Elon Musk begins reinstating banned Twitter accounts, starting with Jordan Peterson and the Babylon Bee - The Verge’
🔗 linkblog: my thoughts on 'Elon Musk’s first Twitter moderation change calls for permanent bans on impersonators - The Verge'
Two points, so obvious as to almost not be worth making: First, this is why making verification a paid feature is dumb; and second, penalizing parody because your business model is dumb is not what free speech absolutism looks like.
link to ‘Elon Musk’s first Twitter moderation change calls for permanent bans on impersonators - The Verge’
🔗 linkblog: my thoughts on 'Antisemitic campaign tries to capitalize on Elon Musk’s Twitter takeover. - The New York Times'
Content moderation is a good thing, and not all viewpoints deserve a seat at the table.
link to ‘Antisemitic campaign tries to capitalize on Elon Musk’s Twitter takeover. - The New York Times’
🔗 linkblog: my thoughts on 'Elon Musk’s First Move Is To Fire The Person Most Responsible For Twitter’s Strong Free Speech Stance | Techdirt'
Interesting read here from Masnick. I’m not familiar with everything he writes about here, but I always appreciate his perspective.
link to ‘Elon Musk’s First Move Is To Fire The Person Most Responsible For Twitter’s Strong Free Speech Stance | Techdirt’
🔗 linkblog: my thoughts on 'https://www.techdirt.com/2022/09/26/subreddit-discriminates-against-anyone-who-doesnt-call-texas-gov'
This is juvenile enough that I feel guilty finding it funny, but it’s a good demonstration of the problems with this backlash against content moderation.
link to ‘https://www.techdirt.com/2022/09/26/subreddit-discriminates-against-anyone-who-doesnt-call-texas-gov’
🔗 linkblog: my thoughts on 'The Most Famous Blunder Of Content Moderation: Do NOT Quote The Princess Bride | Techdirt'
Great movie, great example of the difficulty of content moderation.
link to ‘The Most Famous Blunder Of Content Moderation: Do NOT Quote The Princess Bride | Techdirt’
🔗 linkblog: my thoughts on 'Texas has teed up a Supreme Court fight for the future of the internet - The Verge'
We need to do more work to divorce free speech from content moderation. The world without content moderation would be a much worse world, and we don’t want to live in it. Sure, social media platforms are too powerful, but this is not the answer.
link to ‘Texas has teed up a Supreme Court fight for the future of the internet - The Verge’
🔗 linkblog: my thoughts on 'How YouTube’s Partnership with London’s Police Force is Censoring UK Drill Music | Electronic Frontier Foundation'
See, this is censorship.
link to ‘How YouTube’s Partnership with London’s Police Force is Censoring UK Drill Music | Electronic Frontier Foundation’
🔗 linkblog: my thoughts on 'Twitter Removes Florida Political Candidate Advocating Shooting Federal Agents; If DeSantis Won His Lawsuit, Twitter Would Need To Leave It Up | Techdirt'
I appreciate the way that Masnick uses examples from the news to call out how dumb some of these laws are.
link to ‘Twitter Removes Florida Political Candidate Advocating Shooting Federal Agents; If DeSantis Won His Lawsuit, Twitter Would Need To Leave It Up | Techdirt’
🔗 linkblog: my thoughts on 'A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. - The New York Times'
This is why the EFF and others have concerns about the overreach of even clearly well-intentioned content moderation. CSAM is clearly despicable, but automated content moderation can make mistakes, and the consequences of those mistakes aren’t small.
link to ‘A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal. - The New York Times’
🔗 linkblog: my thoughts on 'Google Maps Is Misleading Users Searching For Abortion Clinics… And The GOP Is Threatening The Company If It Fixes That | Techdirt'
Masnick makes two good points here: The GOP seems to only care about content moderation in self-serving ways, but also we should be wary of political mandates for content moderation.
link to ‘Google Maps Is Misleading Users Searching For Abortion Clinics… And The GOP Is Threatening The Company If It Fixes That | Techdirt’
🔗 linkblog: my thoughts on 'What Happened After the Digital Crackdown on Extremists — ProPublica'
Interesting perspective on what’s happening on “alternative” platforms.
link to ‘What Happened After the Digital Crackdown on Extremists — ProPublica’
🔗 linkblog: my thoughts on 'Study Says Trump’s Truth Social Is Much More Aggressive, And Much More Arbitrary, In Moderating Content | Techdirt'
Unsurprising, but still a valuable read.
link to ‘Study Says Trump’s Truth Social Is Much More Aggressive, And Much More Arbitrary, In Moderating Content | Techdirt’
🔗 linkblog: my thoughts on 'Self-Proclaimed Free Speech Platforms Are Censoring Nude Content. Here’s Why You Should Care | Electronic Frontier Foundation'
Here’s the EFF pointing out that “free speech” on these platforms means something very particular rather than a broad, deep commitment to legally-protected expression.
link to ‘Self-Proclaimed Free Speech Platforms Are Censoring Nude Content. Here’s Why You Should Care | Electronic Frontier Foundation’
🔗 linkblog: my thoughts on 'TikTok resists calls to preserve Ukraine content for war crime investigations | Ars Technica'
So, here’s a case where TikTok’s Chinese ownership is actually a really big deal—though, of course, YouTube and other U.S. companies have also been quicker to moderate than to archive material that could be valuable in a similar way.
link to ‘TikTok resists calls to preserve Ukraine content for war crime investigations | Ars Technica’
🔗 linkblog: my thoughts on 'Facebook Bans People For Simply Saying Abortion Pills Exist | Techdirt'
A terrifying reminder that content moderation can easily overreach.
link to ‘Facebook Bans People For Simply Saying Abortion Pills Exist | Techdirt’
🔗 linkblog: my thoughts on 'Impossibility Theorem Strikes Again: YouTube Deletes January 6th Committee Video | Techdirt'
Good example here of how content moderation can absolutely overreach. Arguments that platforms shouldn’t moderate are nonsense, but I appreciate Masnick’s emphasis on the need to be very careful about how we moderate.
link to ‘Impossibility Theorem Strikes Again: YouTube Deletes January 6th Committee Video | Techdirt’
🔗 linkblog: my thoughts on 'Trump’s ‘Free Speech’ Social Network, Truth Social, Is Banning People For Truthing The Truth About January 6 Hearings | Techdirt'
This is a peak example of what performative concerns about “free speech” boil down to.
link to ‘Trump’s ‘Free Speech’ Social Network, Truth Social, Is Banning People For Truthing The Truth About January 6 Hearings | Techdirt’
🔗 linkblog: my thoughts on 'Racist and Violent Ideas Jump From Web’s Fringes to Mainstream Sites - The New York Times'
Content moderation is a good thing, and ‘free speech’ should not be our primary concern when it comes to social media platforms.
link to ‘Racist and Violent Ideas Jump From Web’s Fringes to Mainstream Sites - The New York Times’
interview with WEKU on Buffalo shooting and social media content moderation
Last week, I was interviewed by a reporter at WEKU about social media and content moderation in the context of the horrific recent shooting in Buffalo, and I was pleased to see the interview appear on the WEKU website this morning.
I wish that the headline didn’t frame this as a question of “free speech”—and that I’d perhaps been more forceful in emphasizing that these really aren’t questions of free speech so much as content moderation.
🔗 linkblog: my thoughts on 'How the Buffalo shooting livestream went viral - The Verge'
Content moderation is (sometimes) a good thing.
link to ‘How the Buffalo shooting livestream went viral - The Verge’
🔗 linkblog: my thoughts on 'QAnon Thinks Elon Musk Is Going to Let Them Back On Twitter'
If QAnon is excited, the rest of us should be worried—though I think there is a possibility that Musk realizes just how bad his ideas re: limiting moderation are and fails to deliver.
link to ‘QAnon Thinks Elon Musk Is Going to Let Them Back On Twitter’
🔗 linkblog: my thoughts on 'Conservatives celebrate Musk’s deal to buy Twitter. - The New York Times'
Say it together now: Content moderation and free speech are different things.
link to ‘Conservatives celebrate Musk’s deal to buy Twitter. - The New York Times’
🔗 linkblog: my thoughts on 'Trump says he won’t leave Truth Social, despite Musk’s Twitter takeover - The Verge'
The quotes in here underline how often ‘free speech’ is used to mean ‘problematic right-wing talking points.’
link to ‘Trump says he won’t leave Truth Social, despite Musk’s Twitter takeover - The Verge’
🔗 linkblog: my thoughts on 'Twitter Has a New Owner. Here’s What He Should Do. | Electronic Frontier Foundation'
EFF cares about and actually understands free speech and content moderation, so their voice is especially important today.
link to ‘Twitter Has a New Owner. Here’s What He Should Do. | Electronic Frontier Foundation’
🔗 linkblog: my thoughts on 'Of ‘Algospeak’ and the Crudeness of Automated Moderation | by Clive Thompson | Apr, 2022 | OneZero'
Fascinated by this article for so many reasons. First, it’s a great example of meaningful practices in online spaces; second, it brings it back to the need for more, smaller platforms.
link to ‘Of ‘Algospeak’ and the Crudeness of Automated Moderation | by Clive Thompson | Apr, 2022 | OneZero’
🔗 linkblog: my thoughts on 'Elon Musk Demonstrates How Little He Understands About Content Moderation | Techdirt'
I have only been reading Techdirt for a short time, but I increasingly appreciate Masnick’s perspectives on issues like this.
link to ‘Elon Musk Demonstrates How Little He Understands About Content Moderation | Techdirt’
🔗 linkblog: my thoughts on 'Elon Musk, After Toying With Twitter, Now Wants It All - The New York Times'
Content moderation is a necessity, and Musk’s take here is wildly irresponsible.
link to ‘Elon Musk, After Toying With Twitter, Now Wants It All - The New York Times’
🔗 linkblog: my thoughts on 'Roger Stone Claims He’s Being ‘Censored’ on Trump’s Truth Social'
All platforms moderate content, and most content moderation isn’t censorship.
link to ‘Roger Stone Claims He’s Being ‘Censored’ on Trump’s Truth Social’
🔗 linkblog: my thoughts on 'Why Moderating Content Actually Does More To Support The Principles Of Free Speech | Techdirt'
Really appreciate Masnick’s perspective here—especially the point that EVERYONE believes in content moderation even if there are disagreements on how to do it. It’s irresponsible for so many (on the right) to describe moderation as censorship.
link to ‘Why Moderating Content Actually Does More To Support The Principles Of Free Speech | Techdirt’
🔗 linkblog: my thoughts on 'Rumble, the Right’s Go-To Video Site, Has Much Bigger Ambitions - The New York Times'
Glad to see reporting on Rumble, but disappointed to see uncritical repeating of claims about “free speech,” “neutrality,” and “censorship.” There are no neutral platforms, and content moderation is the real key idea here.
link to ‘Rumble, the Right’s Go-To Video Site, Has Much Bigger Ambitions - The New York Times’
🔗 linkblog: my thoughts on 'To Make Social Media Work Better, Make It Fail Better | Electronic Frontier Foundation'
This idea increasingly resonates with me.
link to ‘To Make Social Media Work Better, Make It Fail Better | Electronic Frontier Foundation’
🔗 linkblog: my thoughts on 'Performative Conservatives Are Mad That A Search Engine Wants To Downrank Disinformation | Techdirt'
I missed most of this yesterday, but Masnick sums up my thoughts so much better than I could.
link to ‘Performative Conservatives Are Mad That A Search Engine Wants To Downrank Disinformation | Techdirt’
🔗 linkblog: just finished 'Spotify CEO Daniel Ek defends Joe Rogan deal in tense company town hall - The Verge'
Even if Spotify could demonstrate it isn’t a publisher here, platforms don’t get a free pass on content. Also, the podcast-platform model runs counter to the spirit of podcasting, so Spotify’s efforts to succeed there are just as troubling as the costs it’s willing to pay to do so.
link to ‘Spotify CEO Daniel Ek defends Joe Rogan deal in tense company town hall - The Verge’
🔗 linkblog: just finished 'Devin Nunes, CEO Of Trump's TRUTH Social, Confirms That 'Free Speech' Social Media Will Be HEAVILY Moderated | Techdirt'
This inconsistency is mind boggling.
link to ‘Devin Nunes, CEO Of Trump’s TRUTH Social, Confirms That ‘Free Speech’ Social Media Will Be HEAVILY Moderated | Techdirt’
🔗 linkblog: just finished 'Banned from Facebook and Twitter, far right groups are still a presence online. : NPR'
Interesting read on a subject I expect to be following for a while.
link to ‘Banned from Facebook and Twitter, far right groups are still a presence online. : NPR’
🔗 linkblog: just finished 'Election Falsehoods Surged on Podcasts Before Capitol Riots, Researchers Find - The New York Times'
Podcasts are one of the last bastions of the open internet, but that evidently comes at a cost. So long as Apple and Spotify are trying to corner the podcast market, they should be moderating their content.
link to ‘Election Falsehoods Surged on Podcasts Before Capitol Riots, Researchers Find - The New York Times’
🔗 linkblog: just finished 'Tumblr goes overboard censoring tags on iOS to comply with Apple’s guidelines - The Verge'
There are clear cases where platforms need to be moderating more content, but let’s not forget the seemingly well-intentioned but overreaching cases either.
link to ‘Tumblr goes overboard censoring tags on iOS to comply with Apple’s guidelines - The Verge’
🔗 linkblog: just finished 'TikTok sued by former content moderator for allegedly failing to protect her mental health - The Verge'
Content moderation is an awful job, and we shouldn’t forget the people doing it for us.
link to ‘TikTok sued by former content moderator for allegedly failing to protect her mental health - The Verge’
🔗 linkblog: just finished 'Podcast Episode: Who Should Control Online Speech? | Electronic Frontier Foundation'
Such a good conversation on such an important topic.
link to ‘Podcast Episode: Who Should Control Online Speech? | Electronic Frontier Foundation’
🔗 linkblog: just read 'What happened when Facebook asked users what content was good or bad for the world.'
Interesting read.
link to ‘What happened when Facebook asked users what content was good or bad for the world.’
🔗 linkblog: just read 'Employees pleaded with Facebook to stop letting politicians bend rules | Ars Technica'
Facebook might need more moderators, but they shouldn’t be company executives…
link to ‘Employees pleaded with Facebook to stop letting politicians bend rules | Ars Technica’
🔗 linkblog: just read 'Facebook whistleblower hearing: Frances Haugen finally got Republicans to stop yapping about anti-conservative bias.'
Interesting article. I’m particularly interested in the idea of focusing on algorithms rather than content.
link to ‘Facebook whistleblower hearing: Frances Haugen finally got Republicans to stop yapping about anti-conservative bias.’
🔗 linkblog: just read 'Political parties complained Facebook’s algorithm promoted polarization - The Verge'
What a read. Platforms don’t just host content, they manipulate that content.
link to ‘Political parties complained Facebook’s algorithm promoted polarization - The Verge’
🔗 linkblog: just read 'Secret Facebook program reportedly let celebrities avoid moderation - The Verge'
Bookmarking this for my content management class.
link to ‘Secret Facebook program reportedly let celebrities avoid moderation - The Verge’
🔗 linkblog: just read 'The Giftschrank offers a path for social media companies on content moderation transparency.'
Interesting proposal for a difficult issue.
link to ‘The Giftschrank offers a path for social media companies on content moderation transparency.’