Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
Doesn’t this guy believe AI will likely kill us all? Why is he influencing people to work on AI?
Intentionally, because of his belief (at one point, at least; he's gotten much more pessimistic lately) that the least-bad way to mitigate the dangers of "Unfriendly AI" is to first develop "Friendly AI": something that also has superhuman intellectual power, but whose values have been painstakingly "aligned" with humanity's. ... I originally wrote "best way", but that has the wrong connotations; even in his less pessimistic days he recognized that "get its capabilities and values right" was a strictly harder problem than "get its capabilities right and cross your fingers", hence the need to specifically argue that people should deliberately avoid the latter.
Unintentionally, because he doesn't get to pick and choose which of his arguments people believe and which they disbelieve. Long ago I wrote this about evangelism of existing AI researchers, but much of it applies to prospective new ones as well:
Yudkowsky believes:
Given these propositions, his plan was to attempt to build an aligned superintelligent AI before anybody else could build an unaligned superintelligent AI -- or at least it was his plan. Given his recent public appearances, I get the impression he's more or less given up hope.