Below are posts associated with the “AGI” tag.
what is the correct monkey paw threshold?
One of the great “be careful what you wish for” stories is The Monkey’s Paw, in which a family receives a magic item that grants wishes but discovers, to their horror, that every wish is granted in some terrible, horrible way. I can’t remember when I last read the story (though I’m confident I have, maybe in high school), but monkey paw has stuck in my brain as the metaphor for the idea that wishes can go terribly, terribly wrong, so you really ought to think them through.
🔗 linkblog: What is AGI? Nobody agrees, and it’s tearing Microsoft and OpenAI apart.
Karen Hao’s Empire of AI really emphasized for me how much stock is being put in AGI, especially as a motivator for AI companies. I am fine with concepts being hard to define, but I do think things get tricky when you can’t articulate how you’ll know when you’ve met the goal that serves as your raison d’être.