🔗 linkblog: my thoughts on 'Zoom says its new AI tools aren’t stealing ownership of your content - The Verge'

- kudos:

Zoom’s responses to this are meaningless, empty corporate speak. I’m not concerned about owning my content; I’m concerned about others using it while affirming my ownership. And yes, I “consent” to it in the sense that I use Zoom, but that is meaningless consent, and Zoom knows it. What a garbage response. link to ‘Zoom says its new AI tools aren’t stealing ownership of your content - The Verge’

🔗 linkblog: my thoughts on 'Pluralistic: The surprising truth about data-driven dictatorships (26 July 2023) – Pluralistic: Daily links from Cory Doctorow'

- kudos:

Interesting stuff from Doctorow. If I can, I want to work it into my data science textbook for next semester. link to ‘Pluralistic: The surprising truth about data-driven dictatorships (26 July 2023) – Pluralistic: Daily links from Cory Doctorow’

- kudos:

Is there any way to complete a CAPTCHA without providing free labor for ML/AI developers? Makes me angrier every time I have to do it.

- kudos:

I’ve seen jokes about the supposed irony of having to fill out a CAPTCHA to use ChatGPT, but it’s actually pretty consistent: the purpose of a CAPTCHA is also to mine the fruits of human labor to train ML/AI that can replace human labor.

🔗 linkblog: my thoughts on 'Too much trust in machine translation could have deadly consequences.'

- kudos:

This article provides good examples of how the efficacy and efficiency of a given technology are often less important than deeper questions of reliance and roles. link to ‘Too much trust in machine translation could have deadly consequences.’

🔗 linkblog: just read 'Now that machines can learn, can they unlearn? | Ars Technica'

- kudos:

Gotta admit that I’d never thought about what we should do about algorithms trained on data that’s subject to a deletion request. Interesting article. link to ‘Now that machines can learn, can they unlearn? | Ars Technica’