Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.
How much of current AI work can be traced back to Yudkowsky influencing people to work on AI?
I was trying to explain to friends who the guy is, but I don't quite have a sense of the scope of his influence.
Anecdotally, I did my Masters in Machine Learning, and I think I only knew of him because my brother used to like some of his LessWrong stuff.
Exactly zero.
Current AI work was inspired by nerds who loved imaging, linguistics, math, and GPUs.
As an AI engineer: almost none. No one I know in this space — coworkers, colleagues, friends, or myself — had ever heard of the guy before running into Rationalist/LessWrong spaces, and that was after we had already decided to pursue careers in ML/AI. The ones besides me who have heard of him hold an overall negative opinion of him; several consider him a loudmouth idiot.
What do you make of this tweet from Altman, then?
To borrow a quote on this subject from my boss: "Silicon Valley Brain-rot". I wouldn't go that far but the sentiment exists.
I don't know what it is about the Bay, but I can only hypothesize that when you stick a bunch of uber-nerdy, neurodivergent, high-openness, high-neuroticism people in one environment and shake it up a bit, eventually the most neurotic, nerdy, and neurodivergent rise to the top. Big Yud has essentially L. Ron Hubbard-ed himself into the leader of a cult that goes catatonic over the wildest sci-fi shit.
My read on that tweet:
I'm somewhere between quokka and cynic. But take what I say with a grain of salt: I don't live in the Bay, I work in defense, and years back I decided I'd rather take the quiet and stable life than gun for a job at OpenAI/DeepMind/FAIR, where I could have made the most impact on AI.
Doesn’t this guy believe AI will likely kill us all? Why is he influencing people to work on AI?
Intentionally, because of his belief (at one point, at least; he's gotten much more pessimistic lately) that the least-bad way to mitigate the dangers of "Unfriendly AI" is to first develop "Friendly AI", something that also has superhuman intellectual power but that has values which have been painstakingly "aligned" with humanity's. ... I originally wrote "best way", but that has the wrong connotations; even in his less pessimistic days he recognized that "get its capabilities and values right" was a strictly harder problem than "get its capabilities right and cross your fingers", and thus the need to specifically argue that people should deliberately avoid the latter.
Unintentionally, because he doesn't get to pick and choose which of his arguments people believe and which they disbelieve. Long ago I wrote this about evangelism of existing AI researchers, but much of it applies to prospective new ones as well:
Yudkowsky believes:
Given these propositions, his plan is to attempt to build an aligned super-intelligent AI before anybody else can build a non-aligned super-intelligent AI -- or at least it was. Given his recent public appearances, I get the impression he's more or less given up hope.