The linked post seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts

as real and important.
I'd be curious to get perspectives from the people of the Motte, e.g., telling me that I'm the crazy one, and so on.
Regards,
Nuño.
Note - low-effort comment:
A lot of that is sort of true, and Yud and MIRI and AI x-risk advocates are wrong about a bunch of things imo, but none of that makes "AI / technology is going to irreversibly transform human life and the general direction of agency / action on this planet" any less true. And even if that doesn't involve everyone dying ("x-risk"), it still seems very significant. This holds because, roughly, of evolution and capability: AI is powerful, people want to develop and use it, and it'll keep getting more capable and powerful, the same way humans did. So "therefore, AI is less important" is not a good takeaway.
I don't necessarily disagree. In particular, I think that from the considerations I mention, we can conclude that the specifics of how the x-risk would develop are still up in the air, and that this is somewhat valuable info.