The New York Times Got Played By A Telehealth Scam And Called It The Future Of AI
date linked: 7 April 2026
source: link to article, from techdirt.com
Masnick’s fierce critique is all the more notable given how publicly he maintains that AI is good for some things, pushing back against grumpier folks (e.g., me).
Check this paragraph out, though:
What we actually have here is a marketing operation that used AI to automate the production of deceptive advertising at a scale and speed that would have been harder to achieve otherwise. Snake oil salesmen have existed forever. What AI gave Matthew Gallagher (and, I guess, his affiliates) was the ability to crank out fake doctors, fabricated testimonials, and deepfaked before-and-after photos faster than any human team could — and to do it cheap enough that a guy with $20,000 and no morals could build it from his house. That’s the actual AI story the Times should have written.
similar posts:
🔗 linkblog: DOGE Goes Nuclear: How Trump Invited Silicon Valley Into America’s Nuclear Power Regulator
🔗 linkblog: 'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back
🔗 linkblog: Grammarly says it will stop using AI to clone experts without permission
🔗 linkblog: Anthropic’s Statement To The ‘Department Of War’ Reads Like A Hostage Note Written In Business Casual
🔗 linkblog: OpenAI’s ‘Red Lines’ Are Written In The NSA’s Dictionary—Where Words Mean What The NSA Wants Them To Mean
comments:
You can click on the < button in the top-right of your browser window to read and write comments on this post with Hypothesis.