
Small-Scale Question Sunday for May 12, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Yudkowsky believes:

  1. Human-value-aligned AIs make up a minuscule speck of the vast space of all possible minds, and we currently have no clue how to find one.
  2. We have to get the alignment of a superintelligent AI right on the first try, or all humans will die.
  3. Coordinating enough governments to enforce a worldwide ban on AI development, under threat of violence, until we learn how to build friendly AIs would be nice, but it's not politically tenable in our world.
  4. The people currently building AIs don't appreciate how dangerous our situation is and don't understand how hard it is to get a superintelligent AI aligned on the first try.

Given these propositions, his plan is to attempt to build an aligned superintelligent AI before anybody else can build an unaligned one -- or at least it was. Given his recent public appearances, I get the impression he's more or less given up hope.